Mar 12 18:10:20.335258 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 12 18:10:20.959365 master-0 kubenswrapper[4051]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 18:10:20.959365 master-0 kubenswrapper[4051]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 12 18:10:20.959365 master-0 kubenswrapper[4051]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 18:10:20.959365 master-0 kubenswrapper[4051]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 18:10:20.959365 master-0 kubenswrapper[4051]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 12 18:10:20.959365 master-0 kubenswrapper[4051]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 18:10:20.960743 master-0 kubenswrapper[4051]: I0312 18:10:20.960384 4051 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 12 18:10:20.970299 master-0 kubenswrapper[4051]: W0312 18:10:20.970245 4051 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 18:10:20.970299 master-0 kubenswrapper[4051]: W0312 18:10:20.970287 4051 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 18:10:20.970299 master-0 kubenswrapper[4051]: W0312 18:10:20.970297 4051 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 18:10:20.970299 master-0 kubenswrapper[4051]: W0312 18:10:20.970308 4051 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 18:10:20.970535 master-0 kubenswrapper[4051]: W0312 18:10:20.970317 4051 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 18:10:20.970535 master-0 kubenswrapper[4051]: W0312 18:10:20.970326 4051 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 18:10:20.970535 master-0 kubenswrapper[4051]: W0312 18:10:20.970336 4051 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 18:10:20.970535 master-0 kubenswrapper[4051]: W0312 18:10:20.970344 4051 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 18:10:20.970535 master-0 kubenswrapper[4051]: W0312 18:10:20.970353 4051 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 18:10:20.970535 master-0 kubenswrapper[4051]: W0312 18:10:20.970363 4051 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 18:10:20.970535 master-0 kubenswrapper[4051]: W0312 18:10:20.970371 4051 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 18:10:20.970535 master-0 kubenswrapper[4051]: W0312 18:10:20.970380 4051 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 18:10:20.970535 master-0 kubenswrapper[4051]: W0312 18:10:20.970389 4051 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 18:10:20.970535 master-0 kubenswrapper[4051]: W0312 18:10:20.970400 4051 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 18:10:20.970535 master-0 kubenswrapper[4051]: W0312 18:10:20.970411 4051 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 18:10:20.970535 master-0 kubenswrapper[4051]: W0312 18:10:20.970420 4051 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 18:10:20.970535 master-0 kubenswrapper[4051]: W0312 18:10:20.970429 4051 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 18:10:20.970535 master-0 kubenswrapper[4051]: W0312 18:10:20.970438 4051 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 18:10:20.970535 master-0 kubenswrapper[4051]: W0312 18:10:20.970447 4051 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 18:10:20.970535 master-0 kubenswrapper[4051]: W0312 18:10:20.970455 4051 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 18:10:20.970535 master-0 kubenswrapper[4051]: W0312 18:10:20.970464 4051 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 18:10:20.970535 master-0 kubenswrapper[4051]: W0312 18:10:20.970472 4051 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 18:10:20.970535 master-0 kubenswrapper[4051]: W0312 18:10:20.970481 4051 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 18:10:20.970535 master-0 kubenswrapper[4051]: W0312 18:10:20.970501 4051 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 18:10:20.971411 master-0 kubenswrapper[4051]: W0312 18:10:20.970534 4051 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 18:10:20.971411 master-0 kubenswrapper[4051]: W0312 18:10:20.970543 4051 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 18:10:20.971411 master-0 kubenswrapper[4051]: W0312 18:10:20.970553 4051 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 18:10:20.971411 master-0 kubenswrapper[4051]: W0312 18:10:20.970563 4051 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 18:10:20.971411 master-0 kubenswrapper[4051]: W0312 18:10:20.970573 4051 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 18:10:20.971411 master-0 kubenswrapper[4051]: W0312 18:10:20.970582 4051 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 18:10:20.971411 master-0 kubenswrapper[4051]: W0312 18:10:20.970591 4051 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 18:10:20.971411 master-0 kubenswrapper[4051]: W0312 18:10:20.970599 4051 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 18:10:20.971411 master-0 kubenswrapper[4051]: W0312 18:10:20.970608 4051 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 18:10:20.971411 master-0 kubenswrapper[4051]: W0312 18:10:20.970620 4051 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 18:10:20.971411 master-0 kubenswrapper[4051]: W0312 18:10:20.970631 4051 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 18:10:20.971411 master-0 kubenswrapper[4051]: W0312 18:10:20.970642 4051 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 18:10:20.971411 master-0 kubenswrapper[4051]: W0312 18:10:20.970650 4051 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 18:10:20.971411 master-0 kubenswrapper[4051]: W0312 18:10:20.970660 4051 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 18:10:20.971411 master-0 kubenswrapper[4051]: W0312 18:10:20.970671 4051 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 18:10:20.971411 master-0 kubenswrapper[4051]: W0312 18:10:20.970680 4051 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 18:10:20.971411 master-0 kubenswrapper[4051]: W0312 18:10:20.970689 4051 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 18:10:20.971411 master-0 kubenswrapper[4051]: W0312 18:10:20.970700 4051 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 18:10:20.971411 master-0 kubenswrapper[4051]: W0312 18:10:20.970747 4051 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 18:10:20.972350 master-0 kubenswrapper[4051]: W0312 18:10:20.970759 4051 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 18:10:20.972350 master-0 kubenswrapper[4051]: W0312 18:10:20.970768 4051 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 18:10:20.972350 master-0 kubenswrapper[4051]: W0312 18:10:20.970777 4051 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 18:10:20.972350 master-0 kubenswrapper[4051]: W0312 18:10:20.970786 4051 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 18:10:20.972350 master-0 kubenswrapper[4051]: W0312 18:10:20.970795 4051 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 18:10:20.972350 master-0 kubenswrapper[4051]: W0312 18:10:20.970805 4051 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 18:10:20.972350 master-0 kubenswrapper[4051]: W0312 18:10:20.970814 4051 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 18:10:20.972350 master-0 kubenswrapper[4051]: W0312 18:10:20.970823 4051 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 18:10:20.972350 master-0 kubenswrapper[4051]: W0312 18:10:20.970832 4051 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 18:10:20.972350 master-0 kubenswrapper[4051]: W0312 18:10:20.970840 4051 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 18:10:20.972350 master-0 kubenswrapper[4051]: W0312 18:10:20.970851 4051 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 18:10:20.972350 master-0 kubenswrapper[4051]: W0312 18:10:20.970860 4051 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 18:10:20.972350 master-0 kubenswrapper[4051]: W0312 18:10:20.970868 4051 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 18:10:20.972350 master-0 kubenswrapper[4051]: W0312 18:10:20.970877 4051 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 18:10:20.972350 master-0 kubenswrapper[4051]: W0312 18:10:20.970887 4051 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 18:10:20.972350 master-0 kubenswrapper[4051]: W0312 18:10:20.970896 4051 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 18:10:20.972350 master-0 kubenswrapper[4051]: W0312 18:10:20.970907 4051 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 18:10:20.972350 master-0 kubenswrapper[4051]: W0312 18:10:20.970919 4051 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 18:10:20.972350 master-0 kubenswrapper[4051]: W0312 18:10:20.970928 4051 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 18:10:20.972350 master-0 kubenswrapper[4051]: W0312 18:10:20.970937 4051 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 18:10:20.973436 master-0 kubenswrapper[4051]: W0312 18:10:20.970949 4051 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 18:10:20.973436 master-0 kubenswrapper[4051]: W0312 18:10:20.970957 4051 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 18:10:20.973436 master-0 kubenswrapper[4051]: W0312 18:10:20.970965 4051 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 18:10:20.973436 master-0 kubenswrapper[4051]: W0312 18:10:20.970974 4051 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 18:10:20.973436 master-0 kubenswrapper[4051]: W0312 18:10:20.970983 4051 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 18:10:20.973436 master-0 kubenswrapper[4051]: W0312 18:10:20.970991 4051 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 18:10:20.973436 master-0 kubenswrapper[4051]: W0312 18:10:20.971000 4051 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 18:10:20.973436 master-0 kubenswrapper[4051]: W0312 18:10:20.971011 4051 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 18:10:20.973436 master-0 kubenswrapper[4051]: W0312 18:10:20.971021 4051 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 18:10:20.973436 master-0 kubenswrapper[4051]: I0312 18:10:20.971194 4051 flags.go:64] FLAG: --address="0.0.0.0"
Mar 12 18:10:20.973436 master-0 kubenswrapper[4051]: I0312 18:10:20.971214 4051 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 12 18:10:20.973436 master-0 kubenswrapper[4051]: I0312 18:10:20.971231 4051 flags.go:64] FLAG: --anonymous-auth="true"
Mar 12 18:10:20.973436 master-0 kubenswrapper[4051]: I0312 18:10:20.971244 4051 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 12 18:10:20.973436 master-0 kubenswrapper[4051]: I0312 18:10:20.971256 4051 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 12 18:10:20.973436 master-0 kubenswrapper[4051]: I0312 18:10:20.971266 4051 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 12 18:10:20.973436 master-0 kubenswrapper[4051]: I0312 18:10:20.971280 4051 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 12 18:10:20.973436 master-0 kubenswrapper[4051]: I0312 18:10:20.971292 4051 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 12 18:10:20.973436 master-0 kubenswrapper[4051]: I0312 18:10:20.971303 4051 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 12 18:10:20.973436 master-0 kubenswrapper[4051]: I0312 18:10:20.971313 4051 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 12 18:10:20.973436 master-0 kubenswrapper[4051]: I0312 18:10:20.971324 4051 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 12 18:10:20.973436 master-0 kubenswrapper[4051]: I0312 18:10:20.971334 4051 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971345 4051 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971355 4051 flags.go:64] FLAG: --cgroup-root=""
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971365 4051 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971375 4051 flags.go:64] FLAG: --client-ca-file=""
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971384 4051 flags.go:64] FLAG: --cloud-config=""
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971405 4051 flags.go:64] FLAG: --cloud-provider=""
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971416 4051 flags.go:64] FLAG: --cluster-dns="[]"
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971428 4051 flags.go:64] FLAG: --cluster-domain=""
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971438 4051 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971449 4051 flags.go:64] FLAG: --config-dir=""
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971459 4051 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971470 4051 flags.go:64] FLAG: --container-log-max-files="5"
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971482 4051 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971492 4051 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971502 4051 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971543 4051 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971555 4051 flags.go:64] FLAG: --contention-profiling="false"
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971565 4051 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971574 4051 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971585 4051 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971594 4051 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971607 4051 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971618 4051 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971628 4051 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 12 18:10:20.974378 master-0 kubenswrapper[4051]: I0312 18:10:20.971638 4051 flags.go:64] FLAG: --enable-load-reader="false"
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971648 4051 flags.go:64] FLAG: --enable-server="true"
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971658 4051 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971672 4051 flags.go:64] FLAG: --event-burst="100"
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971682 4051 flags.go:64] FLAG: --event-qps="50"
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971695 4051 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971709 4051 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971722 4051 flags.go:64] FLAG: --eviction-hard=""
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971738 4051 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971752 4051 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971764 4051 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971776 4051 flags.go:64] FLAG: --eviction-soft=""
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971786 4051 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971801 4051 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971811 4051 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971821 4051 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971831 4051 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971840 4051 flags.go:64] FLAG: --fail-swap-on="true"
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971851 4051 flags.go:64] FLAG: --feature-gates=""
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971863 4051 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971873 4051 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971883 4051 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971893 4051 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971904 4051 flags.go:64] FLAG: --healthz-port="10248"
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971915 4051 flags.go:64] FLAG: --help="false"
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971925 4051 flags.go:64] FLAG: --hostname-override=""
Mar 12 18:10:20.975488 master-0 kubenswrapper[4051]: I0312 18:10:20.971934 4051 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.971944 4051 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.971954 4051 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.971965 4051 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.971974 4051 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.971984 4051 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.971994 4051 flags.go:64] FLAG: --image-service-endpoint=""
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.972004 4051 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.972014 4051 flags.go:64] FLAG: --kube-api-burst="100"
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.972024 4051 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.972034 4051 flags.go:64] FLAG: --kube-api-qps="50"
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.972045 4051 flags.go:64] FLAG: --kube-reserved=""
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.972054 4051 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.972064 4051 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.972075 4051 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.972084 4051 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.972097 4051 flags.go:64] FLAG: --lock-file=""
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.972107 4051 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.972117 4051 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.972130 4051 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.972145 4051 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.972154 4051 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.972165 4051 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.972174 4051 flags.go:64] FLAG: --logging-format="text"
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.972184 4051 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 12 18:10:20.976692 master-0 kubenswrapper[4051]: I0312 18:10:20.972195 4051 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972204 4051 flags.go:64] FLAG: --manifest-url=""
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972214 4051 flags.go:64] FLAG: --manifest-url-header=""
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972227 4051 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972238 4051 flags.go:64] FLAG: --max-open-files="1000000"
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972250 4051 flags.go:64] FLAG: --max-pods="110"
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972260 4051 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972270 4051 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972279 4051 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972290 4051 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972300 4051 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972310 4051 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972320 4051 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972341 4051 flags.go:64] FLAG: --node-status-max-images="50"
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972352 4051 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972363 4051 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972373 4051 flags.go:64] FLAG: --pod-cidr=""
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972383 4051 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972398 4051 flags.go:64] FLAG: --pod-manifest-path=""
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972433 4051 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972445 4051 flags.go:64] FLAG: --pods-per-core="0"
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972455 4051 flags.go:64] FLAG: --port="10250"
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972465 4051 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972475 4051 flags.go:64] FLAG: --provider-id=""
Mar 12 18:10:20.977797 master-0 kubenswrapper[4051]: I0312 18:10:20.972485 4051 flags.go:64] FLAG: --qos-reserved=""
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972495 4051 flags.go:64] FLAG: --read-only-port="10255"
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972507 4051 flags.go:64] FLAG: --register-node="true"
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972549 4051 flags.go:64] FLAG: --register-schedulable="true"
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972559 4051 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972577 4051 flags.go:64] FLAG: --registry-burst="10"
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972587 4051 flags.go:64] FLAG: --registry-qps="5"
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972597 4051 flags.go:64] FLAG: --reserved-cpus=""
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972606 4051 flags.go:64] FLAG: --reserved-memory=""
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972618 4051 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972628 4051 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972638 4051 flags.go:64] FLAG: --rotate-certificates="false"
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972648 4051 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972658 4051 flags.go:64] FLAG: --runonce="false"
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972667 4051 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972677 4051 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972688 4051 flags.go:64] FLAG: --seccomp-default="false"
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972698 4051 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972707 4051 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972717 4051 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972728 4051 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972738 4051 flags.go:64] FLAG: --storage-driver-password="root"
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972748 4051 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972758 4051 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972768 4051 flags.go:64] FLAG: --storage-driver-user="root"
Mar 12 18:10:20.978883 master-0 kubenswrapper[4051]: I0312 18:10:20.972778 4051 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: I0312 18:10:20.972789 4051 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: I0312 18:10:20.972802 4051 flags.go:64] FLAG: --system-cgroups=""
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: I0312 18:10:20.972814 4051 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: I0312 18:10:20.972834 4051 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: I0312 18:10:20.972846 4051 flags.go:64] FLAG: --tls-cert-file=""
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: I0312 18:10:20.972858 4051 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: I0312 18:10:20.972873 4051 flags.go:64] FLAG: --tls-min-version=""
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: I0312 18:10:20.972885 4051 flags.go:64] FLAG: --tls-private-key-file=""
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: I0312 18:10:20.972897 4051 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: I0312 18:10:20.972913 4051 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: I0312 18:10:20.972925 4051 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: I0312 18:10:20.972938 4051 flags.go:64] FLAG: --v="2"
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: I0312 18:10:20.972953 4051 flags.go:64] FLAG: --version="false"
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: I0312 18:10:20.972972 4051 flags.go:64] FLAG: --vmodule=""
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: I0312 18:10:20.972984 4051 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: I0312 18:10:20.972996 4051 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: W0312 18:10:20.973301 4051 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: W0312 18:10:20.973324 4051 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: W0312 18:10:20.973335 4051 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: W0312 18:10:20.973346 4051 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: W0312 18:10:20.973355 4051 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: W0312 18:10:20.973366 4051 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 18:10:20.979964 master-0 kubenswrapper[4051]: W0312 18:10:20.973376 4051 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 18:10:20.980958 master-0 kubenswrapper[4051]: W0312 18:10:20.973385 4051 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 18:10:20.980958 master-0 kubenswrapper[4051]: W0312 18:10:20.973394 4051 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 18:10:20.980958 master-0 kubenswrapper[4051]: W0312 18:10:20.973403 4051 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 18:10:20.980958 master-0 kubenswrapper[4051]: W0312 18:10:20.973411 4051 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 18:10:20.980958 master-0 kubenswrapper[4051]: W0312 18:10:20.973420 4051 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 18:10:20.980958 master-0 kubenswrapper[4051]: W0312 18:10:20.973428 4051 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 18:10:20.980958 master-0 kubenswrapper[4051]: W0312 18:10:20.973437 4051 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 18:10:20.980958 master-0 kubenswrapper[4051]: W0312 18:10:20.973446 4051 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 18:10:20.980958 master-0 kubenswrapper[4051]: W0312 18:10:20.973456 4051 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 18:10:20.980958 master-0 kubenswrapper[4051]: W0312 18:10:20.973465 4051 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 18:10:20.980958 master-0 kubenswrapper[4051]: W0312 18:10:20.973476 4051 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 18:10:20.980958 master-0 kubenswrapper[4051]: W0312 18:10:20.973488 4051 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 18:10:20.980958 master-0 kubenswrapper[4051]: W0312 18:10:20.973499 4051 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 18:10:20.980958 master-0 kubenswrapper[4051]: W0312 18:10:20.973508 4051 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 18:10:20.980958 master-0 kubenswrapper[4051]: W0312 18:10:20.973558 4051 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 18:10:20.980958 master-0 kubenswrapper[4051]: W0312 18:10:20.973568 4051 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 18:10:20.980958 master-0 kubenswrapper[4051]: W0312 18:10:20.973578 4051 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 18:10:20.980958 master-0 kubenswrapper[4051]: W0312 18:10:20.973588 4051 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 18:10:20.980958 master-0 kubenswrapper[4051]: W0312 18:10:20.973602 4051 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 18:10:20.980958 master-0 kubenswrapper[4051]: W0312 18:10:20.973612 4051 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 18:10:20.981902 master-0 kubenswrapper[4051]: W0312 18:10:20.973621 4051 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 18:10:20.981902 master-0 kubenswrapper[4051]: W0312 18:10:20.973631 4051 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 18:10:20.981902 master-0 kubenswrapper[4051]: W0312 18:10:20.973649 4051 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 18:10:20.981902 master-0 kubenswrapper[4051]: W0312 18:10:20.973658 4051 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 18:10:20.981902 master-0 kubenswrapper[4051]: W0312 18:10:20.973668 4051 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 18:10:20.981902 master-0 kubenswrapper[4051]: W0312 18:10:20.973677 4051 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 18:10:20.981902 master-0 kubenswrapper[4051]: W0312 18:10:20.973687 4051 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 18:10:20.981902 master-0 kubenswrapper[4051]: W0312 18:10:20.973696 4051 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 18:10:20.981902 master-0 kubenswrapper[4051]: W0312 18:10:20.973712 4051 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 18:10:20.981902 master-0 kubenswrapper[4051]: W0312 18:10:20.973723 4051 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 18:10:20.981902 master-0 kubenswrapper[4051]: W0312 18:10:20.973736 4051 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 18:10:20.981902 master-0 kubenswrapper[4051]: W0312 18:10:20.973746 4051 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 18:10:20.981902 master-0 kubenswrapper[4051]: W0312 18:10:20.973767 4051 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 18:10:20.981902 master-0 kubenswrapper[4051]: W0312 18:10:20.973777 4051 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 18:10:20.981902 master-0 kubenswrapper[4051]: W0312 18:10:20.973787 4051 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 18:10:20.981902 master-0 kubenswrapper[4051]: W0312 18:10:20.973798 4051 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 18:10:20.981902 master-0 kubenswrapper[4051]: W0312 18:10:20.973807 4051 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 18:10:20.981902 master-0 kubenswrapper[4051]: W0312 18:10:20.973816 4051 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 18:10:20.981902 master-0 kubenswrapper[4051]: W0312 18:10:20.973825 4051 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 18:10:20.982747 master-0 kubenswrapper[4051]: W0312 18:10:20.973834 4051 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 18:10:20.982747 master-0 kubenswrapper[4051]: W0312 18:10:20.973843 4051 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 18:10:20.982747 master-0 kubenswrapper[4051]: W0312 18:10:20.973852 4051 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 18:10:20.982747 master-0 kubenswrapper[4051]: W0312 18:10:20.973860 4051 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 18:10:20.982747 master-0 kubenswrapper[4051]: W0312 18:10:20.973869 4051 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 18:10:20.982747 master-0 kubenswrapper[4051]: W0312 18:10:20.973877 4051 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 18:10:20.982747 master-0 kubenswrapper[4051]: W0312 18:10:20.973886 4051 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 18:10:20.982747 master-0 kubenswrapper[4051]: W0312 18:10:20.973896 4051 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 18:10:20.982747 master-0 kubenswrapper[4051]: W0312 18:10:20.973905 4051 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 18:10:20.982747 master-0 kubenswrapper[4051]: W0312 18:10:20.973916 4051 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 18:10:20.982747 master-0 kubenswrapper[4051]: W0312 18:10:20.973927 4051 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 18:10:20.982747 master-0 kubenswrapper[4051]: W0312 18:10:20.973941 4051 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 18:10:20.982747 master-0 kubenswrapper[4051]: W0312 18:10:20.973951 4051 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 18:10:20.982747 master-0 kubenswrapper[4051]: W0312 18:10:20.973960 4051 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 18:10:20.982747 master-0 kubenswrapper[4051]: W0312 18:10:20.973969 4051 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 18:10:20.982747 master-0 kubenswrapper[4051]: W0312 18:10:20.973977 4051 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 18:10:20.982747 master-0 kubenswrapper[4051]: W0312 18:10:20.973986 4051 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 18:10:20.982747 master-0 kubenswrapper[4051]: W0312 18:10:20.973995 4051 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 18:10:20.982747 master-0 kubenswrapper[4051]: W0312 18:10:20.974003 4051 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 18:10:20.982747 master-0 kubenswrapper[4051]: W0312 18:10:20.974012 4051 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 18:10:20.983654 master-0 kubenswrapper[4051]: W0312 18:10:20.974020 4051 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 18:10:20.983654 master-0 kubenswrapper[4051]: W0312 18:10:20.974030 4051 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 18:10:20.983654 master-0 kubenswrapper[4051]: W0312 18:10:20.974041 4051 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 18:10:20.983654 master-0 kubenswrapper[4051]: W0312 18:10:20.974064 4051 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 18:10:20.983654 master-0 kubenswrapper[4051]: W0312 18:10:20.974082 4051 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 18:10:20.983654 master-0 kubenswrapper[4051]: W0312 18:10:20.974093 4051 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 18:10:20.983654 master-0 kubenswrapper[4051]: I0312 18:10:20.975371 4051 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 18:10:20.986543 master-0 kubenswrapper[4051]: I0312 18:10:20.986473 4051 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 12 18:10:20.986591 master-0 kubenswrapper[4051]: I0312 18:10:20.986566 4051 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 12 18:10:20.986914 master-0 kubenswrapper[4051]: W0312 18:10:20.986877 4051 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 18:10:20.986914 master-0 kubenswrapper[4051]: W0312 18:10:20.986905 4051 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 18:10:20.986914 master-0 kubenswrapper[4051]: W0312 18:10:20.986915 4051 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 18:10:20.987018 master-0 kubenswrapper[4051]: W0312 18:10:20.986926 4051 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 18:10:20.987018 master-0 kubenswrapper[4051]: W0312 18:10:20.986935 4051 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 18:10:20.987018 master-0 kubenswrapper[4051]: W0312 18:10:20.986944 4051 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 18:10:20.987018 master-0 kubenswrapper[4051]: W0312 18:10:20.986953 4051 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 18:10:20.987018 master-0 kubenswrapper[4051]: W0312 18:10:20.986962 4051 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 18:10:20.987018 master-0 kubenswrapper[4051]: W0312 18:10:20.986971 4051 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 18:10:20.987018 master-0 kubenswrapper[4051]: W0312 18:10:20.986979 4051 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 18:10:20.987018 master-0 kubenswrapper[4051]: W0312 18:10:20.986987 4051 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 18:10:20.987018 master-0 kubenswrapper[4051]: W0312 18:10:20.986995 4051 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 18:10:20.987018 master-0 kubenswrapper[4051]: W0312 18:10:20.987003 4051 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 18:10:20.987018 master-0 kubenswrapper[4051]: W0312 18:10:20.987011 4051 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 18:10:20.987018 master-0 kubenswrapper[4051]: W0312 18:10:20.987019 4051 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 18:10:20.987018 master-0 kubenswrapper[4051]: W0312 18:10:20.987028 4051 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 18:10:20.987390 master-0 kubenswrapper[4051]: W0312 18:10:20.987037 4051 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 18:10:20.987390 master-0 kubenswrapper[4051]: W0312 18:10:20.987046 4051 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 18:10:20.987390 master-0 kubenswrapper[4051]: W0312 18:10:20.987055 4051 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 18:10:20.987390 master-0 kubenswrapper[4051]: W0312 18:10:20.987064 4051 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 18:10:20.987390 master-0 kubenswrapper[4051]: W0312 18:10:20.987072 4051 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 18:10:20.987390 master-0 kubenswrapper[4051]: W0312 18:10:20.987083 4051 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 18:10:20.987390 master-0 kubenswrapper[4051]: W0312 18:10:20.987096 4051 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 18:10:20.987390 master-0 kubenswrapper[4051]: W0312 18:10:20.987106 4051 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 18:10:20.987390 master-0 kubenswrapper[4051]: W0312 18:10:20.987115 4051 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 18:10:20.987390 master-0 kubenswrapper[4051]: W0312 18:10:20.987124 4051 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 18:10:20.987390 master-0 kubenswrapper[4051]: W0312 18:10:20.987132 4051 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 18:10:20.987390 master-0 kubenswrapper[4051]: W0312 18:10:20.987140 4051 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 18:10:20.987390 master-0 kubenswrapper[4051]: W0312 18:10:20.987149 4051 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 18:10:20.987390 master-0 kubenswrapper[4051]: W0312 18:10:20.987159 4051 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 18:10:20.987390 master-0 kubenswrapper[4051]: W0312 18:10:20.987169 4051 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 18:10:20.987390 master-0 kubenswrapper[4051]: W0312 18:10:20.987177 4051 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 18:10:20.987390 master-0 kubenswrapper[4051]: W0312 18:10:20.987184 4051 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 18:10:20.987390 master-0 kubenswrapper[4051]: W0312 18:10:20.987194 4051 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 18:10:20.987390 master-0 kubenswrapper[4051]: W0312 18:10:20.987215 4051 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 18:10:20.987390 master-0 kubenswrapper[4051]: W0312 18:10:20.987223 4051 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 18:10:20.987947 master-0 kubenswrapper[4051]: W0312 18:10:20.987231 4051 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 18:10:20.987947 master-0 kubenswrapper[4051]: W0312 18:10:20.987239 4051 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 18:10:20.987947 master-0 kubenswrapper[4051]: W0312 18:10:20.987248 4051 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 18:10:20.987947 master-0 kubenswrapper[4051]: W0312 18:10:20.987255 4051 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 18:10:20.987947 master-0 kubenswrapper[4051]: W0312 18:10:20.987263 4051 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 18:10:20.987947 master-0 kubenswrapper[4051]: W0312 18:10:20.987271 4051 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 18:10:20.987947 master-0 kubenswrapper[4051]: W0312 18:10:20.987279 4051 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 18:10:20.987947 master-0 kubenswrapper[4051]: W0312 18:10:20.987287 4051 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 18:10:20.987947 master-0 kubenswrapper[4051]: W0312 18:10:20.987295 4051 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 18:10:20.987947 master-0 kubenswrapper[4051]: W0312 18:10:20.987303 4051 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 18:10:20.987947 master-0 kubenswrapper[4051]: W0312 18:10:20.987311 4051 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 18:10:20.987947 master-0 kubenswrapper[4051]: W0312 18:10:20.987319 4051 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 18:10:20.987947 master-0 kubenswrapper[4051]: W0312 18:10:20.987330 4051 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 18:10:20.987947 master-0 kubenswrapper[4051]: W0312 18:10:20.987342 4051 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 18:10:20.987947 master-0 kubenswrapper[4051]: W0312 18:10:20.987351 4051 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 18:10:20.987947 master-0 kubenswrapper[4051]: W0312 18:10:20.987359 4051 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 18:10:20.987947 master-0 kubenswrapper[4051]: W0312 18:10:20.987369 4051 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 18:10:20.987947 master-0 kubenswrapper[4051]: W0312 18:10:20.987379 4051 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 18:10:20.987947 master-0 kubenswrapper[4051]: W0312 18:10:20.987388 4051 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 18:10:20.988801 master-0 kubenswrapper[4051]: W0312 18:10:20.987397 4051 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 18:10:20.988801 master-0 kubenswrapper[4051]: W0312 18:10:20.987406 4051 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 18:10:20.988801 master-0 kubenswrapper[4051]: W0312 18:10:20.987415 4051 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 18:10:20.988801 master-0 kubenswrapper[4051]: W0312 18:10:20.987423 4051 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 18:10:20.988801 master-0 kubenswrapper[4051]: W0312 18:10:20.987431 4051 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 18:10:20.988801 master-0 kubenswrapper[4051]: W0312 18:10:20.987439 4051 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 18:10:20.988801 master-0 kubenswrapper[4051]: W0312 18:10:20.987447 4051 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 18:10:20.988801 master-0 kubenswrapper[4051]: W0312 18:10:20.987454 4051 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 18:10:20.988801 master-0 kubenswrapper[4051]: W0312 18:10:20.987462 4051 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 18:10:20.988801 master-0 kubenswrapper[4051]: W0312 18:10:20.987470 4051 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 18:10:20.988801 master-0 kubenswrapper[4051]: W0312 18:10:20.987478 4051 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 18:10:20.988801 master-0 kubenswrapper[4051]: W0312 18:10:20.987486 4051 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 18:10:20.988801 master-0 kubenswrapper[4051]: W0312 18:10:20.987494 4051 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 18:10:20.988801 master-0 kubenswrapper[4051]: W0312 18:10:20.987501 4051 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 18:10:20.988801 master-0 kubenswrapper[4051]: W0312 18:10:20.987509 4051 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 18:10:20.988801 master-0 kubenswrapper[4051]: W0312 18:10:20.987555 4051 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 18:10:20.988801 master-0 kubenswrapper[4051]: W0312 18:10:20.987564 4051 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 18:10:20.989264 master-0 kubenswrapper[4051]: I0312 18:10:20.987577 4051 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 18:10:20.989264 master-0 kubenswrapper[4051]: W0312 18:10:20.987855 4051 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 18:10:20.989264 master-0 kubenswrapper[4051]: W0312 18:10:20.987872 4051 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 18:10:20.989264 master-0 kubenswrapper[4051]: W0312 18:10:20.987881 4051 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 18:10:20.989264 master-0 kubenswrapper[4051]: W0312 18:10:20.987890 4051 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 18:10:20.989264 master-0 kubenswrapper[4051]: W0312 18:10:20.987899 4051 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 18:10:20.989264 master-0 kubenswrapper[4051]: W0312 18:10:20.987907 4051 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 18:10:20.989264 master-0 kubenswrapper[4051]: W0312 18:10:20.987916 4051 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 18:10:20.989264 master-0 kubenswrapper[4051]: W0312 18:10:20.987924 4051 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 18:10:20.989264 master-0 kubenswrapper[4051]: W0312 18:10:20.987933 4051 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 18:10:20.989264 master-0 kubenswrapper[4051]: W0312 18:10:20.987944 4051 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 18:10:20.989264 master-0 kubenswrapper[4051]: W0312 18:10:20.987956 4051 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 18:10:20.989264 master-0 kubenswrapper[4051]: W0312 18:10:20.987967 4051 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 18:10:20.989264 master-0 kubenswrapper[4051]: W0312 18:10:20.987978 4051 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 18:10:20.989264 master-0 kubenswrapper[4051]: W0312 18:10:20.987987 4051 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 18:10:20.989944 master-0 kubenswrapper[4051]: W0312 18:10:20.987996 4051 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 18:10:20.989944 master-0 kubenswrapper[4051]: W0312 18:10:20.988005 4051 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 18:10:20.989944 master-0 kubenswrapper[4051]: W0312 18:10:20.988014 4051 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 18:10:20.989944 master-0 kubenswrapper[4051]: W0312 18:10:20.988023 4051 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 18:10:20.989944 master-0 kubenswrapper[4051]: W0312 18:10:20.988031 4051 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 18:10:20.989944 master-0 kubenswrapper[4051]: W0312 18:10:20.988038 4051 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 18:10:20.989944 master-0 kubenswrapper[4051]: W0312 18:10:20.988046 4051 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 18:10:20.989944 master-0 kubenswrapper[4051]: W0312 18:10:20.988054 4051 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 18:10:20.989944 master-0 kubenswrapper[4051]: W0312 18:10:20.988063 4051 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 18:10:20.989944 master-0 kubenswrapper[4051]: W0312 18:10:20.988071 4051 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 18:10:20.989944 master-0 kubenswrapper[4051]: W0312 18:10:20.988081 4051 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 18:10:20.989944 master-0 kubenswrapper[4051]: W0312 18:10:20.988090 4051 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 18:10:20.989944 master-0 kubenswrapper[4051]: W0312 18:10:20.988099 4051 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 18:10:20.989944 master-0 kubenswrapper[4051]: W0312 18:10:20.988107 4051 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 18:10:20.989944 master-0 kubenswrapper[4051]: W0312 18:10:20.988115 4051 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 18:10:20.989944 master-0 kubenswrapper[4051]: W0312 18:10:20.988123 4051 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 18:10:20.989944 master-0 kubenswrapper[4051]: W0312 18:10:20.988131 4051 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 18:10:20.989944 master-0 kubenswrapper[4051]: W0312 18:10:20.988138 4051 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 18:10:20.989944 master-0 kubenswrapper[4051]: W0312 18:10:20.988146 4051 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 18:10:20.989944 master-0 kubenswrapper[4051]: W0312 18:10:20.988155 4051 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 18:10:20.990541 master-0 kubenswrapper[4051]: W0312 18:10:20.988164 4051 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 18:10:20.990541 master-0 kubenswrapper[4051]: W0312 18:10:20.988172 4051 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 18:10:20.990541 master-0 kubenswrapper[4051]: W0312 18:10:20.988179 4051 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 18:10:20.990541 master-0 kubenswrapper[4051]: W0312 18:10:20.988187 4051 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 18:10:20.990541 master-0 kubenswrapper[4051]: W0312 18:10:20.988195 4051 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 18:10:20.990541 master-0 kubenswrapper[4051]: W0312 18:10:20.988203 4051 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 18:10:20.990541 master-0 kubenswrapper[4051]: W0312 18:10:20.988212 4051 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 18:10:20.990541 master-0 kubenswrapper[4051]: W0312 18:10:20.988220 4051 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 18:10:20.990541 master-0 kubenswrapper[4051]: W0312 18:10:20.988228 4051 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 18:10:20.990541 master-0 kubenswrapper[4051]: W0312 18:10:20.988236 4051 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 18:10:20.990541 master-0 kubenswrapper[4051]: W0312 18:10:20.988244 4051 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 18:10:20.990541 master-0 kubenswrapper[4051]: W0312 18:10:20.988251 4051 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 18:10:20.990541 master-0 kubenswrapper[4051]: W0312 18:10:20.988259 4051 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 18:10:20.990541 master-0 kubenswrapper[4051]: W0312 18:10:20.988267 4051 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 18:10:20.990541 master-0 kubenswrapper[4051]: W0312 18:10:20.988275 4051 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 18:10:20.990541 master-0 kubenswrapper[4051]: W0312 18:10:20.988283 4051 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 18:10:20.990541 master-0 kubenswrapper[4051]: W0312 18:10:20.988293 4051 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 18:10:20.990541 master-0 kubenswrapper[4051]: W0312 18:10:20.988302 4051 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 18:10:20.990541 master-0 kubenswrapper[4051]: W0312 18:10:20.988312 4051 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 18:10:20.991120 master-0 kubenswrapper[4051]: W0312 18:10:20.988320 4051 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 18:10:20.991120 master-0 kubenswrapper[4051]: W0312 18:10:20.988329 4051 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 18:10:20.991120 master-0 kubenswrapper[4051]: W0312 18:10:20.988337 4051 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 18:10:20.991120 master-0 kubenswrapper[4051]: W0312 18:10:20.988346 4051 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 18:10:20.991120 master-0 kubenswrapper[4051]: W0312 18:10:20.988355 4051 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 18:10:20.991120 master-0 kubenswrapper[4051]: W0312 18:10:20.988365 4051 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 18:10:20.991120 master-0 kubenswrapper[4051]: W0312 18:10:20.988374 4051 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 18:10:20.991120 master-0 kubenswrapper[4051]: W0312 18:10:20.988383 4051 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 18:10:20.991120 master-0 kubenswrapper[4051]: W0312 18:10:20.988391 4051 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 18:10:20.991120 master-0 kubenswrapper[4051]: W0312 18:10:20.988400 4051 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 18:10:20.991120 master-0 kubenswrapper[4051]: W0312 18:10:20.988411 4051 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 18:10:20.991120 master-0 kubenswrapper[4051]: W0312 18:10:20.988420 4051 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 18:10:20.991120 master-0 kubenswrapper[4051]: W0312 18:10:20.988429 4051 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 18:10:20.991120 master-0 kubenswrapper[4051]: W0312 18:10:20.988438 4051 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 18:10:20.991120 master-0 kubenswrapper[4051]: W0312 18:10:20.988446 4051 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 18:10:20.991120 master-0 kubenswrapper[4051]: W0312 18:10:20.988455 4051 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 18:10:20.991120 master-0 kubenswrapper[4051]: W0312 18:10:20.988465 4051 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 18:10:20.991120 master-0 kubenswrapper[4051]: W0312 18:10:20.988474 4051 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 18:10:20.991120 master-0 kubenswrapper[4051]: W0312 18:10:20.988483 4051 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 18:10:20.991667 master-0 kubenswrapper[4051]: I0312 18:10:20.988495 4051 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 18:10:20.991667 master-0 kubenswrapper[4051]: I0312 18:10:20.989609 4051 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 12 18:10:20.993689 master-0 kubenswrapper[4051]: I0312 18:10:20.993647 4051 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 12 18:10:20.994980 master-0 kubenswrapper[4051]: I0312 18:10:20.994936 4051 server.go:997] "Starting client certificate rotation"
Mar 12 18:10:20.995063 master-0 kubenswrapper[4051]: I0312 18:10:20.994985 4051 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 12 18:10:20.995350 master-0 kubenswrapper[4051]: I0312 18:10:20.995303 4051 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 12 18:10:21.023201 master-0 kubenswrapper[4051]: I0312 18:10:21.023117 4051 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 12 18:10:21.029153 master-0 kubenswrapper[4051]: I0312 18:10:21.029098 4051 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 12 18:10:21.031928 master-0 kubenswrapper[4051]: E0312 18:10:21.031833 4051 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 18:10:21.048652 master-0 kubenswrapper[4051]: I0312 18:10:21.048575 4051 log.go:25] "Validated CRI v1 runtime API"
Mar 12 18:10:21.054265 master-0
kubenswrapper[4051]: I0312 18:10:21.054228 4051 log.go:25] "Validated CRI v1 image API" Mar 12 18:10:21.056362 master-0 kubenswrapper[4051]: I0312 18:10:21.056322 4051 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 12 18:10:21.068206 master-0 kubenswrapper[4051]: I0312 18:10:21.068152 4051 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 f6c40199-182a-4be5-87d7-87de18d890be:/dev/vda3] Mar 12 18:10:21.068272 master-0 kubenswrapper[4051]: I0312 18:10:21.068193 4051 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Mar 12 18:10:21.089274 master-0 kubenswrapper[4051]: I0312 18:10:21.089014 4051 manager.go:217] Machine: {Timestamp:2026-03-12 18:10:21.086012063 +0000 UTC m=+0.565138304 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654116352 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:14bcec6218994562885f2bb31137a053 SystemUUID:14bcec62-1899-4562-885f-2bb31137a053 BootID:8ea9dfaa-21ba-4398-883d-eae43b35536d Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827056128 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} 
{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:b1:d2:12 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:2e:3d:5d Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:6a:b6:1d:75:96:b7 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654116352 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 12 18:10:21.089274 master-0 kubenswrapper[4051]: I0312 18:10:21.089232 4051 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. 
Perf event counters are not available. Mar 12 18:10:21.089420 master-0 kubenswrapper[4051]: I0312 18:10:21.089376 4051 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 12 18:10:21.089719 master-0 kubenswrapper[4051]: I0312 18:10:21.089692 4051 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 12 18:10:21.089940 master-0 kubenswrapper[4051]: I0312 18:10:21.089867 4051 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 18:10:21.090164 master-0 kubenswrapper[4051]: I0312 18:10:21.089929 4051 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessTh
an","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 18:10:21.090223 master-0 kubenswrapper[4051]: I0312 18:10:21.090177 4051 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 18:10:21.090223 master-0 kubenswrapper[4051]: I0312 18:10:21.090191 4051 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 18:10:21.090223 master-0 kubenswrapper[4051]: I0312 18:10:21.090202 4051 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 18:10:21.091147 master-0 kubenswrapper[4051]: I0312 18:10:21.091113 4051 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 18:10:21.091858 master-0 kubenswrapper[4051]: I0312 18:10:21.091827 4051 state_mem.go:36] "Initialized new in-memory state store" Mar 12 18:10:21.091957 master-0 kubenswrapper[4051]: I0312 18:10:21.091930 4051 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 12 18:10:21.096537 master-0 kubenswrapper[4051]: I0312 18:10:21.096470 4051 kubelet.go:418] "Attempting to sync node with API server" Mar 12 18:10:21.096537 master-0 kubenswrapper[4051]: I0312 18:10:21.096497 4051 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 18:10:21.096537 master-0 kubenswrapper[4051]: I0312 18:10:21.096526 4051 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 12 18:10:21.096537 master-0 kubenswrapper[4051]: I0312 18:10:21.096540 4051 kubelet.go:324] "Adding apiserver pod 
source" Mar 12 18:10:21.096863 master-0 kubenswrapper[4051]: I0312 18:10:21.096577 4051 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 18:10:21.103435 master-0 kubenswrapper[4051]: I0312 18:10:21.103360 4051 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 12 18:10:21.103744 master-0 kubenswrapper[4051]: W0312 18:10:21.103657 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 18:10:21.103744 master-0 kubenswrapper[4051]: W0312 18:10:21.103709 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 18:10:21.103918 master-0 kubenswrapper[4051]: E0312 18:10:21.103786 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 18:10:21.103918 master-0 kubenswrapper[4051]: E0312 18:10:21.103792 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 18:10:21.107147 master-0 
kubenswrapper[4051]: I0312 18:10:21.107104 4051 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 12 18:10:21.107502 master-0 kubenswrapper[4051]: I0312 18:10:21.107458 4051 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 12 18:10:21.107502 master-0 kubenswrapper[4051]: I0312 18:10:21.107494 4051 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 12 18:10:21.107681 master-0 kubenswrapper[4051]: I0312 18:10:21.107509 4051 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 12 18:10:21.107681 master-0 kubenswrapper[4051]: I0312 18:10:21.107554 4051 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 12 18:10:21.107681 master-0 kubenswrapper[4051]: I0312 18:10:21.107568 4051 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 12 18:10:21.107681 master-0 kubenswrapper[4051]: I0312 18:10:21.107581 4051 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 12 18:10:21.107681 master-0 kubenswrapper[4051]: I0312 18:10:21.107594 4051 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 12 18:10:21.107681 master-0 kubenswrapper[4051]: I0312 18:10:21.107606 4051 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 12 18:10:21.107681 master-0 kubenswrapper[4051]: I0312 18:10:21.107621 4051 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 12 18:10:21.107681 master-0 kubenswrapper[4051]: I0312 18:10:21.107634 4051 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 12 18:10:21.107681 master-0 kubenswrapper[4051]: I0312 18:10:21.107652 4051 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 12 18:10:21.109079 master-0 kubenswrapper[4051]: I0312 18:10:21.109040 4051 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/local-volume" Mar 12 18:10:21.109967 master-0 kubenswrapper[4051]: I0312 18:10:21.109925 4051 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 12 18:10:21.110574 master-0 kubenswrapper[4051]: I0312 18:10:21.110548 4051 server.go:1280] "Started kubelet" Mar 12 18:10:21.112161 master-0 systemd[1]: Started Kubernetes Kubelet. Mar 12 18:10:21.115959 master-0 kubenswrapper[4051]: I0312 18:10:21.115719 4051 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 18:10:21.115959 master-0 kubenswrapper[4051]: I0312 18:10:21.115683 4051 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 18:10:21.115959 master-0 kubenswrapper[4051]: I0312 18:10:21.115889 4051 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 12 18:10:21.117142 master-0 kubenswrapper[4051]: I0312 18:10:21.117031 4051 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 18:10:21.117142 master-0 kubenswrapper[4051]: I0312 18:10:21.117041 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 18:10:21.119228 master-0 kubenswrapper[4051]: I0312 18:10:21.119060 4051 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 12 18:10:21.119228 master-0 kubenswrapper[4051]: I0312 18:10:21.119138 4051 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 18:10:21.119962 master-0 kubenswrapper[4051]: E0312 18:10:21.119863 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:10:21.119962 master-0 kubenswrapper[4051]: I0312 18:10:21.119946 4051 volume_manager.go:287] "The desired_state_of_world 
populator starts" Mar 12 18:10:21.119962 master-0 kubenswrapper[4051]: I0312 18:10:21.119960 4051 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 12 18:10:21.120248 master-0 kubenswrapper[4051]: I0312 18:10:21.120121 4051 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 12 18:10:21.120248 master-0 kubenswrapper[4051]: I0312 18:10:21.120242 4051 reconstruct.go:97] "Volume reconstruction finished" Mar 12 18:10:21.120394 master-0 kubenswrapper[4051]: I0312 18:10:21.120258 4051 reconciler.go:26] "Reconciler: start to sync state" Mar 12 18:10:21.120836 master-0 kubenswrapper[4051]: E0312 18:10:21.119566 4051 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189c2a71b3a11dfe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.110484478 +0000 UTC m=+0.589610739,LastTimestamp:2026-03-12 18:10:21.110484478 +0000 UTC m=+0.589610739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:21.121262 master-0 kubenswrapper[4051]: I0312 18:10:21.121220 4051 factory.go:55] Registering systemd factory Mar 12 18:10:21.121262 master-0 kubenswrapper[4051]: I0312 18:10:21.121255 4051 factory.go:221] Registration of the systemd container factory successfully Mar 12 18:10:21.123015 master-0 kubenswrapper[4051]: E0312 18:10:21.122913 4051 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 12 18:10:21.123015 master-0 kubenswrapper[4051]: W0312 18:10:21.122956 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 18:10:21.123301 master-0 kubenswrapper[4051]: E0312 18:10:21.123102 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 18:10:21.123301 master-0 kubenswrapper[4051]: I0312 18:10:21.123174 4051 server.go:449] "Adding debug handlers to kubelet server" Mar 12 18:10:21.123833 master-0 kubenswrapper[4051]: I0312 18:10:21.123783 4051 factory.go:153] Registering CRI-O factory Mar 12 18:10:21.123833 master-0 kubenswrapper[4051]: I0312 18:10:21.123812 4051 factory.go:221] Registration of the crio container factory successfully Mar 12 18:10:21.123972 master-0 kubenswrapper[4051]: I0312 18:10:21.123905 4051 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 12 18:10:21.123972 master-0 kubenswrapper[4051]: I0312 18:10:21.123940 4051 factory.go:103] Registering Raw factory Mar 12 18:10:21.123972 master-0 kubenswrapper[4051]: I0312 18:10:21.123964 4051 manager.go:1196] Started watching for new ooms in manager Mar 12 18:10:21.124913 master-0 kubenswrapper[4051]: 
I0312 18:10:21.124874 4051 manager.go:319] Starting recovery of all containers Mar 12 18:10:21.131324 master-0 kubenswrapper[4051]: E0312 18:10:21.131257 4051 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 12 18:10:21.145733 master-0 kubenswrapper[4051]: I0312 18:10:21.145445 4051 manager.go:324] Recovery completed Mar 12 18:10:21.155266 master-0 kubenswrapper[4051]: I0312 18:10:21.155223 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 18:10:21.157549 master-0 kubenswrapper[4051]: I0312 18:10:21.157438 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 18:10:21.157667 master-0 kubenswrapper[4051]: I0312 18:10:21.157559 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 18:10:21.157667 master-0 kubenswrapper[4051]: I0312 18:10:21.157589 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 18:10:21.158702 master-0 kubenswrapper[4051]: I0312 18:10:21.158669 4051 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 12 18:10:21.158702 master-0 kubenswrapper[4051]: I0312 18:10:21.158694 4051 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 12 18:10:21.158861 master-0 kubenswrapper[4051]: I0312 18:10:21.158725 4051 state_mem.go:36] "Initialized new in-memory state store" Mar 12 18:10:21.162002 master-0 kubenswrapper[4051]: I0312 18:10:21.161956 4051 policy_none.go:49] "None policy: Start" Mar 12 18:10:21.162688 master-0 kubenswrapper[4051]: I0312 18:10:21.162660 4051 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 12 18:10:21.162838 master-0 kubenswrapper[4051]: I0312 18:10:21.162717 4051 state_mem.go:35] "Initializing new in-memory state 
store" Mar 12 18:10:21.220435 master-0 kubenswrapper[4051]: E0312 18:10:21.220368 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:10:21.247442 master-0 kubenswrapper[4051]: I0312 18:10:21.246894 4051 manager.go:334] "Starting Device Plugin manager" Mar 12 18:10:21.247442 master-0 kubenswrapper[4051]: I0312 18:10:21.246973 4051 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 12 18:10:21.247442 master-0 kubenswrapper[4051]: I0312 18:10:21.246990 4051 server.go:79] "Starting device plugin registration server" Mar 12 18:10:21.248718 master-0 kubenswrapper[4051]: I0312 18:10:21.247493 4051 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 18:10:21.248718 master-0 kubenswrapper[4051]: I0312 18:10:21.247573 4051 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 18:10:21.248718 master-0 kubenswrapper[4051]: I0312 18:10:21.247795 4051 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 12 18:10:21.248718 master-0 kubenswrapper[4051]: I0312 18:10:21.248070 4051 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 12 18:10:21.248718 master-0 kubenswrapper[4051]: I0312 18:10:21.248092 4051 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 18:10:21.250060 master-0 kubenswrapper[4051]: E0312 18:10:21.249995 4051 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 12 18:10:21.276856 master-0 kubenswrapper[4051]: I0312 18:10:21.276791 4051 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 12 18:10:21.293614 master-0 kubenswrapper[4051]: I0312 18:10:21.279987 4051 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 12 18:10:21.293614 master-0 kubenswrapper[4051]: I0312 18:10:21.280051 4051 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 12 18:10:21.293614 master-0 kubenswrapper[4051]: I0312 18:10:21.280082 4051 kubelet.go:2335] "Starting kubelet main sync loop" Mar 12 18:10:21.293614 master-0 kubenswrapper[4051]: E0312 18:10:21.280148 4051 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Mar 12 18:10:21.293614 master-0 kubenswrapper[4051]: W0312 18:10:21.281294 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 18:10:21.293614 master-0 kubenswrapper[4051]: E0312 18:10:21.281375 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 18:10:21.324489 master-0 kubenswrapper[4051]: E0312 18:10:21.324412 4051 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 12 18:10:21.348711 master-0 kubenswrapper[4051]: I0312 18:10:21.348659 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 18:10:21.349742 master-0 kubenswrapper[4051]: I0312 18:10:21.349717 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 18:10:21.349805 master-0 
kubenswrapper[4051]: I0312 18:10:21.349749 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:21.349805 master-0 kubenswrapper[4051]: I0312 18:10:21.349757 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:21.349805 master-0 kubenswrapper[4051]: I0312 18:10:21.349781 4051 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 18:10:21.350385 master-0 kubenswrapper[4051]: E0312 18:10:21.350361 4051 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 12 18:10:21.380531 master-0 kubenswrapper[4051]: I0312 18:10:21.380482 4051 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 12 18:10:21.380662 master-0 kubenswrapper[4051]: I0312 18:10:21.380578 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:21.381332 master-0 kubenswrapper[4051]: I0312 18:10:21.381285 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:21.381332 master-0 kubenswrapper[4051]: I0312 18:10:21.381317 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:21.381332 master-0 kubenswrapper[4051]: I0312 18:10:21.381328 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:21.381484 master-0 kubenswrapper[4051]: I0312 18:10:21.381434 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:21.381705 master-0 kubenswrapper[4051]: I0312 18:10:21.381678 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 18:10:21.381758 master-0 kubenswrapper[4051]: I0312 18:10:21.381717 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:21.382397 master-0 kubenswrapper[4051]: I0312 18:10:21.381965 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:21.382397 master-0 kubenswrapper[4051]: I0312 18:10:21.381981 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:21.382397 master-0 kubenswrapper[4051]: I0312 18:10:21.381989 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:21.382397 master-0 kubenswrapper[4051]: I0312 18:10:21.382049 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:21.382397 master-0 kubenswrapper[4051]: I0312 18:10:21.382167 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 18:10:21.382397 master-0 kubenswrapper[4051]: I0312 18:10:21.382201 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:21.382806 master-0 kubenswrapper[4051]: I0312 18:10:21.382469 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:21.382806 master-0 kubenswrapper[4051]: I0312 18:10:21.382486 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:21.382806 master-0 kubenswrapper[4051]: I0312 18:10:21.382498 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:21.382806 master-0 kubenswrapper[4051]: I0312 18:10:21.382583 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:21.382806 master-0 kubenswrapper[4051]: I0312 18:10:21.382770 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 18:10:21.382806 master-0 kubenswrapper[4051]: I0312 18:10:21.382809 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:21.382981 master-0 kubenswrapper[4051]: I0312 18:10:21.382964 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:21.383011 master-0 kubenswrapper[4051]: I0312 18:10:21.382983 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:21.383011 master-0 kubenswrapper[4051]: I0312 18:10:21.382996 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:21.383165 master-0 kubenswrapper[4051]: I0312 18:10:21.383138 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:21.383165 master-0 kubenswrapper[4051]: I0312 18:10:21.383158 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:21.383227 master-0 kubenswrapper[4051]: I0312 18:10:21.383167 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:21.383227 master-0 kubenswrapper[4051]: I0312 18:10:21.383145 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:21.383227 master-0 kubenswrapper[4051]: I0312 18:10:21.383210 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:21.383227 master-0 kubenswrapper[4051]: I0312 18:10:21.383222 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:21.383474 master-0 kubenswrapper[4051]: I0312 18:10:21.383349 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:21.383474 master-0 kubenswrapper[4051]: I0312 18:10:21.383456 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:21.383572 master-0 kubenswrapper[4051]: I0312 18:10:21.383481 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:21.383572 master-0 kubenswrapper[4051]: I0312 18:10:21.383530 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:21.383572 master-0 kubenswrapper[4051]: I0312 18:10:21.383556 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:21.383572 master-0 kubenswrapper[4051]: I0312 18:10:21.383565 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:21.383855 master-0 kubenswrapper[4051]: I0312 18:10:21.383826 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:21.383855 master-0 kubenswrapper[4051]: I0312 18:10:21.383847 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:21.383855 master-0 kubenswrapper[4051]: I0312 18:10:21.383855 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:21.383992 master-0 kubenswrapper[4051]: I0312 18:10:21.383911 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:21.383992 master-0 kubenswrapper[4051]: I0312 18:10:21.383954 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:21.383992 master-0 kubenswrapper[4051]: I0312 18:10:21.383965 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:21.383992 master-0 kubenswrapper[4051]: I0312 18:10:21.383975 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:21.383992 master-0 kubenswrapper[4051]: I0312 18:10:21.383991 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:21.384407 master-0 kubenswrapper[4051]: I0312 18:10:21.384378 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:21.384437 master-0 kubenswrapper[4051]: I0312 18:10:21.384407 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:21.384437 master-0 kubenswrapper[4051]: I0312 18:10:21.384420 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:21.422050 master-0 kubenswrapper[4051]: I0312 18:10:21.421948 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:21.422050 master-0 kubenswrapper[4051]: I0312 18:10:21.421997 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 18:10:21.422050 master-0 kubenswrapper[4051]: I0312 18:10:21.422038 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:21.422465 master-0 kubenswrapper[4051]: I0312 18:10:21.422140 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:21.422465 master-0 kubenswrapper[4051]: I0312 18:10:21.422183 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 18:10:21.422465 master-0 kubenswrapper[4051]: I0312 18:10:21.422210 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 18:10:21.422465 master-0 kubenswrapper[4051]: I0312 18:10:21.422233 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:21.422465 master-0 kubenswrapper[4051]: I0312 18:10:21.422252 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:21.422465 master-0 kubenswrapper[4051]: I0312 18:10:21.422288 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 18:10:21.422465 master-0 kubenswrapper[4051]: I0312 18:10:21.422330 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 18:10:21.422465 master-0 kubenswrapper[4051]: I0312 18:10:21.422362 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:21.422465 master-0 kubenswrapper[4051]: I0312 18:10:21.422390 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:21.422465 master-0 kubenswrapper[4051]: I0312 18:10:21.422414 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:21.422465 master-0 kubenswrapper[4051]: I0312 18:10:21.422430 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:21.422465 master-0 kubenswrapper[4051]: I0312 18:10:21.422446 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 18:10:21.422465 master-0 kubenswrapper[4051]: I0312 18:10:21.422463 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:21.422465 master-0 kubenswrapper[4051]: I0312 18:10:21.422481 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:21.523368 master-0 kubenswrapper[4051]: I0312 18:10:21.523284 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:21.523368 master-0 kubenswrapper[4051]: I0312 18:10:21.523358 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 18:10:21.523633 master-0 kubenswrapper[4051]: I0312 18:10:21.523393 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:21.523633 master-0 kubenswrapper[4051]: I0312 18:10:21.523414 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:21.523633 master-0 kubenswrapper[4051]: I0312 18:10:21.523453 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:21.523633 master-0 kubenswrapper[4051]: I0312 18:10:21.523503 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:21.523633 master-0 kubenswrapper[4051]: I0312 18:10:21.523547 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:21.523633 master-0 kubenswrapper[4051]: I0312 18:10:21.523498 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 18:10:21.523633 master-0 kubenswrapper[4051]: I0312 18:10:21.523570 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 18:10:21.523633 master-0 kubenswrapper[4051]: I0312 18:10:21.523604 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 18:10:21.523870 master-0 kubenswrapper[4051]: I0312 18:10:21.523636 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:21.523870 master-0 kubenswrapper[4051]: I0312 18:10:21.523652 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 18:10:21.523870 master-0 kubenswrapper[4051]: I0312 18:10:21.523666 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:21.523870 master-0 kubenswrapper[4051]: I0312 18:10:21.523685 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 18:10:21.523870 master-0 kubenswrapper[4051]: I0312 18:10:21.523700 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 18:10:21.523870 master-0 kubenswrapper[4051]: I0312 18:10:21.523731 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:21.523870 master-0 kubenswrapper[4051]: I0312 18:10:21.523748 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 18:10:21.523870 master-0 kubenswrapper[4051]: I0312 18:10:21.523749 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 18:10:21.523870 master-0 kubenswrapper[4051]: I0312 18:10:21.523817 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 18:10:21.523870 master-0 kubenswrapper[4051]: I0312 18:10:21.523837 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:21.523870 master-0 kubenswrapper[4051]: I0312 18:10:21.523862 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:21.524162 master-0 kubenswrapper[4051]: I0312 18:10:21.523869 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:21.524162 master-0 kubenswrapper[4051]: I0312 18:10:21.523881 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:21.524162 master-0 kubenswrapper[4051]: I0312 18:10:21.523917 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:21.524162 master-0 kubenswrapper[4051]: I0312 18:10:21.523926 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:21.524162 master-0 kubenswrapper[4051]: I0312 18:10:21.523959 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:21.524162 master-0 kubenswrapper[4051]: I0312 18:10:21.523959 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:21.524162 master-0 kubenswrapper[4051]: I0312 18:10:21.523986 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:21.524162 master-0 kubenswrapper[4051]: I0312 18:10:21.523999 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 18:10:21.524162 master-0 kubenswrapper[4051]: I0312 18:10:21.524035 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:21.524162 master-0 kubenswrapper[4051]: I0312 18:10:21.524050 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:21.524162 master-0 kubenswrapper[4051]: I0312 18:10:21.524068 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:21.524162 master-0 kubenswrapper[4051]: I0312 18:10:21.524083 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 18:10:21.524162 master-0 kubenswrapper[4051]: I0312 18:10:21.524103 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:21.551597 master-0 kubenswrapper[4051]: I0312 18:10:21.551430 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:21.552635 master-0 kubenswrapper[4051]: I0312 18:10:21.552584 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:21.552635 master-0 kubenswrapper[4051]: I0312 18:10:21.552635 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:21.552779 master-0 kubenswrapper[4051]: I0312 18:10:21.552654 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:21.552779 master-0 kubenswrapper[4051]: I0312 18:10:21.552716 4051 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 18:10:21.553579 master-0 kubenswrapper[4051]: E0312 18:10:21.553504 4051 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 12 18:10:21.726664 master-0 kubenswrapper[4051]: E0312 18:10:21.726420 4051 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Mar 12 18:10:21.728600 master-0 kubenswrapper[4051]: I0312 18:10:21.728508 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 18:10:21.745104 master-0 kubenswrapper[4051]: I0312 18:10:21.745028 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 18:10:21.768492 master-0 kubenswrapper[4051]: I0312 18:10:21.768405 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 18:10:21.777228 master-0 kubenswrapper[4051]: I0312 18:10:21.777155 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:21.781449 master-0 kubenswrapper[4051]: I0312 18:10:21.781376 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:21.954496 master-0 kubenswrapper[4051]: I0312 18:10:21.954380 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:21.955914 master-0 kubenswrapper[4051]: I0312 18:10:21.955852 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:21.955914 master-0 kubenswrapper[4051]: I0312 18:10:21.955913 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:21.956150 master-0 kubenswrapper[4051]: I0312 18:10:21.955937 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:21.956150 master-0 kubenswrapper[4051]: I0312 18:10:21.956017 4051 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 18:10:21.957126 master-0 kubenswrapper[4051]: E0312 18:10:21.957075 4051 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 12 18:10:22.118712 master-0 kubenswrapper[4051]: I0312 18:10:22.118661 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 18:10:22.226224 master-0 kubenswrapper[4051]: W0312 18:10:22.226085 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 18:10:22.226481 master-0 kubenswrapper[4051]: E0312 18:10:22.226231 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 18:10:22.474385 master-0 kubenswrapper[4051]: W0312 18:10:22.474230 4051 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod354f29997baa583b6238f7de9108ee10.slice/crio-64955db5addf9b64f24ce95166a3106b2564db66c223ef67752e78909dc304ef WatchSource:0}: Error finding container 64955db5addf9b64f24ce95166a3106b2564db66c223ef67752e78909dc304ef: Status 404 returned error can't find the container with id 64955db5addf9b64f24ce95166a3106b2564db66c223ef67752e78909dc304ef
Mar 12 18:10:22.483400 master-0 kubenswrapper[4051]: I0312 18:10:22.483346 4051 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 12 18:10:22.498442 master-0 kubenswrapper[4051]: W0312 18:10:22.498315 4051 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f77c8e18b751d90bc0dfe2d4e304050.slice/crio-c5d8b743e37e43da0e4ff17c103781e7e406b75fffac74b32a3f5490d58a4481 WatchSource:0}: Error finding container c5d8b743e37e43da0e4ff17c103781e7e406b75fffac74b32a3f5490d58a4481: Status 404 returned error can't find the container with id c5d8b743e37e43da0e4ff17c103781e7e406b75fffac74b32a3f5490d58a4481
Mar 12 18:10:22.503420 master-0 kubenswrapper[4051]: W0312 18:10:22.503371 4051 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9add8df47182fc2eaf8cd78016ebe72.slice/crio-427d9fb4da53770ccc67c12bc3aaae3e507cd75207365e40c1611eeaaf274b84 WatchSource:0}: Error finding container 427d9fb4da53770ccc67c12bc3aaae3e507cd75207365e40c1611eeaaf274b84: Status 404 returned error can't find the container with id 427d9fb4da53770ccc67c12bc3aaae3e507cd75207365e40c1611eeaaf274b84
Mar 12 18:10:22.522659 master-0 kubenswrapper[4051]: W0312 18:10:22.522481 4051 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1a56802af72ce1aac6b5077f1695ac0.slice/crio-91bbe32b85272f0f9f735ba2a67b1085ea37b3016231eb6d6938a08eed1a3b9d WatchSource:0}: Error finding container 91bbe32b85272f0f9f735ba2a67b1085ea37b3016231eb6d6938a08eed1a3b9d: Status 404 returned error can't find the container with id 91bbe32b85272f0f9f735ba2a67b1085ea37b3016231eb6d6938a08eed1a3b9d
Mar 12 18:10:22.528008 master-0 kubenswrapper[4051]: E0312 18:10:22.527948 4051 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Mar 12 18:10:22.539455 master-0 kubenswrapper[4051]: W0312 18:10:22.539337 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 18:10:22.539942 master-0 kubenswrapper[4051]: E0312 18:10:22.539891 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 18:10:22.692412 master-0 kubenswrapper[4051]: W0312 18:10:22.692213 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 18:10:22.692412 master-0 kubenswrapper[4051]: E0312 18:10:22.692340 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 18:10:22.733889 master-0 kubenswrapper[4051]: W0312 18:10:22.733683 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 18:10:22.733889 master-0 kubenswrapper[4051]: E0312 18:10:22.733797 4051 reflector.go:158] "Unhandled Error"
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 18:10:22.758311 master-0 kubenswrapper[4051]: I0312 18:10:22.758215 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 18:10:22.759772 master-0 kubenswrapper[4051]: I0312 18:10:22.759728 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 18:10:22.759887 master-0 kubenswrapper[4051]: I0312 18:10:22.759874 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 18:10:22.760005 master-0 kubenswrapper[4051]: I0312 18:10:22.759994 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 18:10:22.760139 master-0 kubenswrapper[4051]: I0312 18:10:22.760127 4051 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 12 18:10:22.761411 master-0 kubenswrapper[4051]: E0312 18:10:22.761349 4051 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 12 18:10:23.050056 master-0 kubenswrapper[4051]: W0312 18:10:23.049996 4051 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf78c05e1499b533b83f091333d61f045.slice/crio-e54cdbac1902d131e189d00f45d878f018504994bb1378cf2e83bf1f9b2a651b WatchSource:0}: Error finding container e54cdbac1902d131e189d00f45d878f018504994bb1378cf2e83bf1f9b2a651b: Status 404 returned error can't find the container with id 
e54cdbac1902d131e189d00f45d878f018504994bb1378cf2e83bf1f9b2a651b Mar 12 18:10:23.115550 master-0 kubenswrapper[4051]: I0312 18:10:23.115363 4051 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 12 18:10:23.116928 master-0 kubenswrapper[4051]: E0312 18:10:23.116846 4051 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 18:10:23.118889 master-0 kubenswrapper[4051]: I0312 18:10:23.118795 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 18:10:23.287802 master-0 kubenswrapper[4051]: I0312 18:10:23.287277 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"e54cdbac1902d131e189d00f45d878f018504994bb1378cf2e83bf1f9b2a651b"} Mar 12 18:10:23.288835 master-0 kubenswrapper[4051]: I0312 18:10:23.288612 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"91bbe32b85272f0f9f735ba2a67b1085ea37b3016231eb6d6938a08eed1a3b9d"} Mar 12 18:10:23.290719 master-0 kubenswrapper[4051]: I0312 18:10:23.290638 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" 
event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"427d9fb4da53770ccc67c12bc3aaae3e507cd75207365e40c1611eeaaf274b84"} Mar 12 18:10:23.291933 master-0 kubenswrapper[4051]: I0312 18:10:23.291761 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"c5d8b743e37e43da0e4ff17c103781e7e406b75fffac74b32a3f5490d58a4481"} Mar 12 18:10:23.293240 master-0 kubenswrapper[4051]: I0312 18:10:23.293193 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"64955db5addf9b64f24ce95166a3106b2564db66c223ef67752e78909dc304ef"} Mar 12 18:10:23.984959 master-0 kubenswrapper[4051]: W0312 18:10:23.984913 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 18:10:23.984959 master-0 kubenswrapper[4051]: E0312 18:10:23.984967 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 18:10:24.118532 master-0 kubenswrapper[4051]: I0312 18:10:24.118469 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 18:10:24.129222 master-0 kubenswrapper[4051]: E0312 
18:10:24.129172 4051 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 12 18:10:24.362317 master-0 kubenswrapper[4051]: I0312 18:10:24.361842 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 18:10:24.362827 master-0 kubenswrapper[4051]: I0312 18:10:24.362774 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 18:10:24.362872 master-0 kubenswrapper[4051]: I0312 18:10:24.362831 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 18:10:24.362872 master-0 kubenswrapper[4051]: I0312 18:10:24.362842 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 18:10:24.362926 master-0 kubenswrapper[4051]: I0312 18:10:24.362900 4051 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 12 18:10:24.363677 master-0 kubenswrapper[4051]: E0312 18:10:24.363638 4051 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 12 18:10:25.313354 master-0 kubenswrapper[4051]: W0312 18:10:25.313083 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 18:10:25.313949 master-0 kubenswrapper[4051]: W0312 18:10:25.313114 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 18:10:25.313949 master-0 kubenswrapper[4051]: E0312 18:10:25.313394 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 18:10:25.313949 master-0 kubenswrapper[4051]: E0312 18:10:25.313457 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 18:10:25.313949 master-0 kubenswrapper[4051]: I0312 18:10:25.313123 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 18:10:25.341460 master-0 kubenswrapper[4051]: W0312 18:10:25.341370 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 18:10:25.341460 master-0 kubenswrapper[4051]: E0312 18:10:25.341426 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 18:10:26.119353 master-0 kubenswrapper[4051]: I0312 18:10:26.119298 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 18:10:26.316912 master-0 kubenswrapper[4051]: I0312 18:10:26.316815 4051 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="cfb72c7fed7776f25cec78c2b1f068a79f1aca1d681c7f12e196e29d22a04486" exitCode=0 Mar 12 18:10:26.316912 master-0 kubenswrapper[4051]: I0312 18:10:26.316867 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"cfb72c7fed7776f25cec78c2b1f068a79f1aca1d681c7f12e196e29d22a04486"} Mar 12 18:10:26.316912 master-0 kubenswrapper[4051]: I0312 18:10:26.316901 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 18:10:26.317673 master-0 kubenswrapper[4051]: I0312 18:10:26.317497 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 18:10:26.317673 master-0 kubenswrapper[4051]: I0312 18:10:26.317545 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 18:10:26.317673 master-0 kubenswrapper[4051]: I0312 18:10:26.317558 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 18:10:27.119106 master-0 kubenswrapper[4051]: I0312 18:10:27.119035 4051 csi_plugin.go:884] Failed to 
contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 18:10:27.319902 master-0 kubenswrapper[4051]: I0312 18:10:27.319861 4051 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/0.log" Mar 12 18:10:27.320452 master-0 kubenswrapper[4051]: I0312 18:10:27.320307 4051 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="17491fe132cf9be66e8b3a4be24ac322bc848a7d5e1813fc0248631ec126f5be" exitCode=1 Mar 12 18:10:27.320452 master-0 kubenswrapper[4051]: I0312 18:10:27.320346 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"17491fe132cf9be66e8b3a4be24ac322bc848a7d5e1813fc0248631ec126f5be"} Mar 12 18:10:27.320452 master-0 kubenswrapper[4051]: I0312 18:10:27.320405 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 18:10:27.321304 master-0 kubenswrapper[4051]: I0312 18:10:27.321273 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 18:10:27.321342 master-0 kubenswrapper[4051]: I0312 18:10:27.321304 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 18:10:27.321342 master-0 kubenswrapper[4051]: I0312 18:10:27.321316 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 18:10:27.321620 master-0 kubenswrapper[4051]: I0312 18:10:27.321597 4051 scope.go:117] "RemoveContainer" 
containerID="17491fe132cf9be66e8b3a4be24ac322bc848a7d5e1813fc0248631ec126f5be" Mar 12 18:10:27.325488 master-0 kubenswrapper[4051]: I0312 18:10:27.325220 4051 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 12 18:10:27.326323 master-0 kubenswrapper[4051]: E0312 18:10:27.326288 4051 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 18:10:27.330104 master-0 kubenswrapper[4051]: E0312 18:10:27.330058 4051 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 12 18:10:27.563957 master-0 kubenswrapper[4051]: I0312 18:10:27.563877 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 18:10:27.564969 master-0 kubenswrapper[4051]: I0312 18:10:27.564883 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 18:10:27.564969 master-0 kubenswrapper[4051]: I0312 18:10:27.564916 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 18:10:27.564969 master-0 kubenswrapper[4051]: I0312 18:10:27.564928 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 18:10:27.564969 master-0 kubenswrapper[4051]: I0312 18:10:27.564971 4051 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 12 
18:10:27.565698 master-0 kubenswrapper[4051]: E0312 18:10:27.565646 4051 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 12 18:10:27.800733 master-0 kubenswrapper[4051]: E0312 18:10:27.800606 4051 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189c2a71b3a11dfe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.110484478 +0000 UTC m=+0.589610739,LastTimestamp:2026-03-12 18:10:21.110484478 +0000 UTC m=+0.589610739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:28.118653 master-0 kubenswrapper[4051]: I0312 18:10:28.118584 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 18:10:28.323838 master-0 kubenswrapper[4051]: I0312 18:10:28.323804 4051 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log" Mar 12 18:10:28.324779 master-0 kubenswrapper[4051]: I0312 18:10:28.324707 4051 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/0.log" Mar 12 18:10:28.325096 master-0 kubenswrapper[4051]: I0312 18:10:28.325072 4051 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="1f5721d1474673f48d2f2fd384518823e4b1a13194287e15de08ed375d1a67dd" exitCode=1 Mar 12 18:10:28.325312 master-0 kubenswrapper[4051]: I0312 18:10:28.325178 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"1f5721d1474673f48d2f2fd384518823e4b1a13194287e15de08ed375d1a67dd"} Mar 12 18:10:28.325312 master-0 kubenswrapper[4051]: I0312 18:10:28.325190 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 18:10:28.325312 master-0 kubenswrapper[4051]: I0312 18:10:28.325226 4051 scope.go:117] "RemoveContainer" containerID="17491fe132cf9be66e8b3a4be24ac322bc848a7d5e1813fc0248631ec126f5be" Mar 12 18:10:28.327863 master-0 kubenswrapper[4051]: I0312 18:10:28.327807 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 18:10:28.327863 master-0 kubenswrapper[4051]: I0312 18:10:28.327860 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 18:10:28.327937 master-0 kubenswrapper[4051]: I0312 18:10:28.327869 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 18:10:28.328277 master-0 kubenswrapper[4051]: I0312 18:10:28.328249 4051 scope.go:117] "RemoveContainer" containerID="1f5721d1474673f48d2f2fd384518823e4b1a13194287e15de08ed375d1a67dd" Mar 12 18:10:28.328474 master-0 kubenswrapper[4051]: E0312 18:10:28.328407 4051 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 12 18:10:28.329272 master-0 kubenswrapper[4051]: I0312 18:10:28.329230 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"5528dd5a4e01f059b975dba420b1e42f7d38d29b15865763037c06d90dade8e7"} Mar 12 18:10:28.329312 master-0 kubenswrapper[4051]: I0312 18:10:28.329272 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"b928346cccd3250ecd2b93ee414bcede71a1e719c241a242b6ebcdc2c000ed85"} Mar 12 18:10:28.329312 master-0 kubenswrapper[4051]: I0312 18:10:28.329298 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 18:10:28.329967 master-0 kubenswrapper[4051]: I0312 18:10:28.329931 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 18:10:28.329967 master-0 kubenswrapper[4051]: I0312 18:10:28.329959 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 18:10:28.329967 master-0 kubenswrapper[4051]: I0312 18:10:28.329968 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 18:10:28.471814 master-0 kubenswrapper[4051]: W0312 18:10:28.471607 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 18:10:28.471814 master-0 kubenswrapper[4051]: E0312 18:10:28.471690 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 18:10:28.688131 master-0 kubenswrapper[4051]: W0312 18:10:28.688037 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 18:10:28.688131 master-0 kubenswrapper[4051]: E0312 18:10:28.688111 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 18:10:29.118334 master-0 kubenswrapper[4051]: I0312 18:10:29.118276 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 18:10:29.324014 master-0 kubenswrapper[4051]: W0312 18:10:29.323862 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
192.168.32.10:6443: connect: connection refused Mar 12 18:10:29.324571 master-0 kubenswrapper[4051]: E0312 18:10:29.324038 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 18:10:29.332587 master-0 kubenswrapper[4051]: I0312 18:10:29.332493 4051 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log" Mar 12 18:10:29.333136 master-0 kubenswrapper[4051]: I0312 18:10:29.333091 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 18:10:29.333844 master-0 kubenswrapper[4051]: I0312 18:10:29.333804 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 18:10:29.334692 master-0 kubenswrapper[4051]: I0312 18:10:29.334646 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 18:10:29.334754 master-0 kubenswrapper[4051]: I0312 18:10:29.334694 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 18:10:29.334754 master-0 kubenswrapper[4051]: I0312 18:10:29.334713 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 18:10:29.335211 master-0 kubenswrapper[4051]: I0312 18:10:29.335167 4051 scope.go:117] "RemoveContainer" containerID="1f5721d1474673f48d2f2fd384518823e4b1a13194287e15de08ed375d1a67dd" Mar 12 18:10:29.335479 master-0 kubenswrapper[4051]: E0312 18:10:29.335430 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72"
Mar 12 18:10:29.335678 master-0 kubenswrapper[4051]: I0312 18:10:29.335645 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:29.335720 master-0 kubenswrapper[4051]: I0312 18:10:29.335681 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:29.335720 master-0 kubenswrapper[4051]: I0312 18:10:29.335702 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:30.045405 master-0 kubenswrapper[4051]: W0312 18:10:30.045328 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 18:10:30.045603 master-0 kubenswrapper[4051]: E0312 18:10:30.045422 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 18:10:30.118699 master-0 kubenswrapper[4051]: I0312 18:10:30.118646 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 18:10:31.120626 master-0 kubenswrapper[4051]: I0312 18:10:31.120528 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 18:10:31.250227 master-0 kubenswrapper[4051]: E0312 18:10:31.250178 4051 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 12 18:10:32.118061 master-0 kubenswrapper[4051]: I0312 18:10:32.117980 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 18:10:33.118882 master-0 kubenswrapper[4051]: I0312 18:10:33.118837 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 18:10:33.731238 master-0 kubenswrapper[4051]: E0312 18:10:33.731127 4051 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="7s"
Mar 12 18:10:33.966651 master-0 kubenswrapper[4051]: I0312 18:10:33.966584 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:33.967651 master-0 kubenswrapper[4051]: I0312 18:10:33.967618 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:33.967712 master-0 kubenswrapper[4051]: I0312 18:10:33.967663 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:33.967712 master-0 kubenswrapper[4051]: I0312 18:10:33.967674 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:33.967760 master-0 kubenswrapper[4051]: I0312 18:10:33.967719 4051 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 18:10:33.968753 master-0 kubenswrapper[4051]: E0312 18:10:33.968706 4051 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 12 18:10:34.118284 master-0 kubenswrapper[4051]: I0312 18:10:34.118234 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 18:10:35.119339 master-0 kubenswrapper[4051]: I0312 18:10:35.119217 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 18:10:36.062736 master-0 kubenswrapper[4051]: W0312 18:10:36.062612 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 18:10:36.062736 master-0 kubenswrapper[4051]: E0312 18:10:36.062687 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 18:10:36.078999 master-0 kubenswrapper[4051]: I0312 18:10:36.078881 4051 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 12 18:10:36.080557 master-0 kubenswrapper[4051]: E0312 18:10:36.080464 4051 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 18:10:36.119360 master-0 kubenswrapper[4051]: I0312 18:10:36.119236 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 18:10:37.119859 master-0 kubenswrapper[4051]: I0312 18:10:37.119421 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 18:10:37.351283 master-0 kubenswrapper[4051]: I0312 18:10:37.351189 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"d26315a3bde904b11ba5d9d409301a02b1633a540a8d5ec716c4b45d6b097f49"}
Mar 12 18:10:37.353323 master-0 kubenswrapper[4051]: I0312 18:10:37.353224 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"7905195025f5bcb8de89213d12f95430c8990ba81078908548bc95e6c97e2325"}
Mar 12 18:10:37.353323 master-0 kubenswrapper[4051]: I0312 18:10:37.353292 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:37.354407 master-0 kubenswrapper[4051]: I0312 18:10:37.354354 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:37.354407 master-0 kubenswrapper[4051]: I0312 18:10:37.354396 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:37.354407 master-0 kubenswrapper[4051]: I0312 18:10:37.354407 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:37.355678 master-0 kubenswrapper[4051]: I0312 18:10:37.355616 4051 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="424ff1cd728e6aca964e1aafeb2eb3f61c869370919f825ab26a7330b62524f0" exitCode=0
Mar 12 18:10:37.355839 master-0 kubenswrapper[4051]: I0312 18:10:37.355682 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerDied","Data":"424ff1cd728e6aca964e1aafeb2eb3f61c869370919f825ab26a7330b62524f0"}
Mar 12 18:10:37.355914 master-0 kubenswrapper[4051]: I0312 18:10:37.355862 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:37.357548 master-0 kubenswrapper[4051]: I0312 18:10:37.357456 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:37.357548 master-0 kubenswrapper[4051]: I0312 18:10:37.357503 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:37.357785 master-0 kubenswrapper[4051]: I0312 18:10:37.357561 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:37.362664 master-0 kubenswrapper[4051]: I0312 18:10:37.362615 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:37.363649 master-0 kubenswrapper[4051]: I0312 18:10:37.363594 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:37.363649 master-0 kubenswrapper[4051]: I0312 18:10:37.363640 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:37.363925 master-0 kubenswrapper[4051]: I0312 18:10:37.363664 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:38.361743 master-0 kubenswrapper[4051]: I0312 18:10:38.361682 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"c0a8d4431acf000c36d5a8e20b8fbea835bbdf1fd7c8e5eab3ca1097edb9bbb4"}
Mar 12 18:10:38.362950 master-0 kubenswrapper[4051]: I0312 18:10:38.362923 4051 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="d26315a3bde904b11ba5d9d409301a02b1633a540a8d5ec716c4b45d6b097f49" exitCode=1
Mar 12 18:10:38.363037 master-0 kubenswrapper[4051]: I0312 18:10:38.363020 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:38.363190 master-0 kubenswrapper[4051]: I0312 18:10:38.363142 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"d26315a3bde904b11ba5d9d409301a02b1633a540a8d5ec716c4b45d6b097f49"}
Mar 12 18:10:38.364135 master-0 kubenswrapper[4051]: I0312 18:10:38.364098 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:38.364221 master-0 kubenswrapper[4051]: I0312 18:10:38.364142 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:38.364221 master-0 kubenswrapper[4051]: I0312 18:10:38.364153 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:38.896827 master-0 kubenswrapper[4051]: I0312 18:10:38.896785 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 18:10:38.897044 master-0 kubenswrapper[4051]: E0312 18:10:38.896908 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71b3a11dfe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.110484478 +0000 UTC m=+0.589610739,LastTimestamp:2026-03-12 18:10:21.110484478 +0000 UTC m=+0.589610739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:38.906747 master-0 kubenswrapper[4051]: E0312 18:10:38.906622 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71b66ea4f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.157508337 +0000 UTC m=+0.636634618,LastTimestamp:2026-03-12 18:10:21.157508337 +0000 UTC m=+0.636634618,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:38.918440 master-0 kubenswrapper[4051]: E0312 18:10:38.917824 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" event="&Event{ObjectMeta:{master-0.189c2a71b66fb862 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.15757885 +0000 UTC m=+0.636705121,LastTimestamp:2026-03-12 18:10:21.15757885 +0000 UTC m=+0.636705121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:38.930553 master-0 kubenswrapper[4051]: E0312 18:10:38.930300 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]" event="&Event{ObjectMeta:{master-0.189c2a71b6700a06 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.15759975 +0000 UTC m=+0.636726021,LastTimestamp:2026-03-12 18:10:21.15759975 +0000 UTC m=+0.636726021,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:38.937309 master-0 kubenswrapper[4051]: E0312 18:10:38.937206 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71bcc18987 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.263604103 +0000 UTC m=+0.742730354,LastTimestamp:2026-03-12 18:10:21.263604103 +0000 UTC m=+0.742730354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:38.946338 master-0 kubenswrapper[4051]: E0312 18:10:38.946253 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c2a71b66ea4f1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71b66ea4f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.157508337 +0000 UTC m=+0.636634618,LastTimestamp:2026-03-12 18:10:21.349735231 +0000 UTC m=+0.828861462,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:38.950285 master-0 kubenswrapper[4051]: E0312 18:10:38.950179 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c2a71b66fb862\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71b66fb862 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.15757885 +0000 UTC m=+0.636705121,LastTimestamp:2026-03-12 18:10:21.349754062 +0000 UTC m=+0.828880293,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:38.954341 master-0 kubenswrapper[4051]: E0312 18:10:38.954229 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c2a71b6700a06\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71b6700a06 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.15759975 +0000 UTC m=+0.636726021,LastTimestamp:2026-03-12 18:10:21.349761502 +0000 UTC m=+0.828887733,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:38.958369 master-0 kubenswrapper[4051]: E0312 18:10:38.958287 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c2a71b66ea4f1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71b66ea4f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.157508337 +0000 UTC m=+0.636634618,LastTimestamp:2026-03-12 18:10:21.381303964 +0000 UTC m=+0.860430195,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:38.961444 master-0 kubenswrapper[4051]: E0312 18:10:38.961377 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c2a71b66fb862\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71b66fb862 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.15757885 +0000 UTC m=+0.636705121,LastTimestamp:2026-03-12 18:10:21.381324125 +0000 UTC m=+0.860450356,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:38.964938 master-0 kubenswrapper[4051]: E0312 18:10:38.964840 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c2a71b6700a06\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71b6700a06 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.15759975 +0000 UTC m=+0.636726021,LastTimestamp:2026-03-12 18:10:21.381334985 +0000 UTC m=+0.860461216,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:38.969006 master-0 kubenswrapper[4051]: E0312 18:10:38.968884 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c2a71b66ea4f1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71b66ea4f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.157508337 +0000 UTC m=+0.636634618,LastTimestamp:2026-03-12 18:10:21.381976137 +0000 UTC m=+0.861102358,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:38.975129 master-0 kubenswrapper[4051]: E0312 18:10:38.975044 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c2a71b66fb862\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71b66fb862 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.15757885 +0000 UTC m=+0.636705121,LastTimestamp:2026-03-12 18:10:21.381986068 +0000 UTC m=+0.861112299,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:38.979031 master-0 kubenswrapper[4051]: E0312 18:10:38.978956 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c2a71b6700a06\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71b6700a06 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.15759975 +0000 UTC m=+0.636726021,LastTimestamp:2026-03-12 18:10:21.381993988 +0000 UTC m=+0.861120219,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:38.982714 master-0 kubenswrapper[4051]: E0312 18:10:38.982613 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c2a71b66ea4f1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71b66ea4f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.157508337 +0000 UTC m=+0.636634618,LastTimestamp:2026-03-12 18:10:21.382481525 +0000 UTC m=+0.861607756,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:38.986961 master-0 kubenswrapper[4051]: E0312 18:10:38.986800 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c2a71b66fb862\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71b66fb862 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.15757885 +0000 UTC m=+0.636705121,LastTimestamp:2026-03-12 18:10:21.382492235 +0000 UTC m=+0.861618466,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:38.990374 master-0 kubenswrapper[4051]: E0312 18:10:38.990225 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c2a71b6700a06\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71b6700a06 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.15759975 +0000 UTC m=+0.636726021,LastTimestamp:2026-03-12 18:10:21.382503816 +0000 UTC m=+0.861630057,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:38.994698 master-0 kubenswrapper[4051]: E0312 18:10:38.994620 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c2a71b66ea4f1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71b66ea4f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.157508337 +0000 UTC m=+0.636634618,LastTimestamp:2026-03-12 18:10:21.382977642 +0000 UTC m=+0.862103873,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:38.998363 master-0 kubenswrapper[4051]: E0312 18:10:38.998296 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c2a71b66fb862\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71b66fb862 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.15757885 +0000 UTC m=+0.636705121,LastTimestamp:2026-03-12 18:10:21.382990843 +0000 UTC m=+0.862117074,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:39.002235 master-0 kubenswrapper[4051]: E0312 18:10:39.001789 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c2a71b6700a06\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71b6700a06 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.15759975 +0000 UTC m=+0.636726021,LastTimestamp:2026-03-12 18:10:21.383001903 +0000 UTC m=+0.862128134,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:39.005180 master-0 kubenswrapper[4051]: E0312 18:10:39.005118 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c2a71b66ea4f1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71b66ea4f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.157508337 +0000 UTC m=+0.636634618,LastTimestamp:2026-03-12 18:10:21.383154299 +0000 UTC m=+0.862280530,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:39.008680 master-0 kubenswrapper[4051]: E0312 18:10:39.008569 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c2a71b66fb862\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71b66fb862 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.15757885 +0000 UTC m=+0.636705121,LastTimestamp:2026-03-12 18:10:21.383163889 +0000 UTC m=+0.862290120,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:39.012315 master-0 kubenswrapper[4051]: E0312 18:10:39.012217 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c2a71b6700a06\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71b6700a06 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.15759975 +0000 UTC m=+0.636726021,LastTimestamp:2026-03-12 18:10:21.383173729 +0000 UTC m=+0.862299960,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:39.015592 master-0 kubenswrapper[4051]: E0312 18:10:39.015491 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c2a71b66ea4f1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71b66ea4f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.157508337 +0000 UTC m=+0.636634618,LastTimestamp:2026-03-12 18:10:21.38319163 +0000 UTC m=+0.862317861,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:39.019479 master-0 kubenswrapper[4051]: E0312 18:10:39.019387 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c2a71b66fb862\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c2a71b66fb862 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:21.15757885 +0000 UTC m=+0.636705121,LastTimestamp:2026-03-12 18:10:21.383217511 +0000 UTC m=+0.862343742,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:39.024988 master-0 kubenswrapper[4051]: E0312 18:10:39.024860 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c2a7205737ffe openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:22.483226622 +0000 UTC m=+1.962352893,LastTimestamp:2026-03-12 18:10:22.483226622 +0000 UTC m=+1.962352893,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:39.030237 master-0 kubenswrapper[4051]: E0312 18:10:39.030143 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c2a7206b4c14d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:22.504280397 +0000 UTC m=+1.983406638,LastTimestamp:2026-03-12 18:10:22.504280397 +0000 UTC m=+1.983406638,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:39.034415 master-0 kubenswrapper[4051]: E0312 18:10:39.034310 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c2a7206d30de7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:22.506266087 +0000 UTC m=+1.985392328,LastTimestamp:2026-03-12 18:10:22.506266087 +0000 UTC m=+1.985392328,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:39.038568 master-0 kubenswrapper[4051]: E0312 18:10:39.038492 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c2a72082dd7e2
kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:22.52899325 +0000 UTC m=+2.008119531,LastTimestamp:2026-03-12 18:10:22.52899325 +0000 UTC m=+2.008119531,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.042389 master-0 kubenswrapper[4051]: E0312 18:10:39.042314 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c2a722753fefc kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:23.051587324 +0000 UTC m=+2.530713555,LastTimestamp:2026-03-12 18:10:23.051587324 +0000 UTC m=+2.530713555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.046294 master-0 kubenswrapper[4051]: E0312 
18:10:39.046233 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c2a7285ac13e7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" in 2.128s (2.128s including waiting). Image size: 465086330 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:24.634418151 +0000 UTC m=+4.113544382,LastTimestamp:2026-03-12 18:10:24.634418151 +0000 UTC m=+4.113544382,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.049872 master-0 kubenswrapper[4051]: E0312 18:10:39.049743 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c2a72b1f5dbf4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: 
setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:25.377450996 +0000 UTC m=+4.856577227,LastTimestamp:2026-03-12 18:10:25.377450996 +0000 UTC m=+4.856577227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.053316 master-0 kubenswrapper[4051]: E0312 18:10:39.053249 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c2a72b2ba8179 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:25.390338425 +0000 UTC m=+4.869464656,LastTimestamp:2026-03-12 18:10:25.390338425 +0000 UTC m=+4.869464656,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.056871 master-0 kubenswrapper[4051]: E0312 18:10:39.056800 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c2a730a7c37e2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:26.862651362 +0000 UTC m=+6.341777593,LastTimestamp:2026-03-12 18:10:26.862651362 +0000 UTC m=+6.341777593,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.059782 master-0 kubenswrapper[4051]: E0312 18:10:39.059704 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c2a7311d7e8d3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\" in 4.502s (4.502s including waiting). 
Image size: 529324693 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:26.986100947 +0000 UTC m=+6.465227178,LastTimestamp:2026-03-12 18:10:26.986100947 +0000 UTC m=+6.465227178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.062717 master-0 kubenswrapper[4051]: E0312 18:10:39.062651 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c2a7313c4b518 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:27.018396952 +0000 UTC m=+6.497523183,LastTimestamp:2026-03-12 18:10:27.018396952 +0000 UTC m=+6.497523183,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.065689 master-0 kubenswrapper[4051]: E0312 18:10:39.065631 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c2a73144c9868 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:27.027302504 +0000 UTC m=+6.506428735,LastTimestamp:2026-03-12 18:10:27.027302504 +0000 UTC m=+6.506428735,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.068920 master-0 kubenswrapper[4051]: E0312 18:10:39.068847 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c2a730a7c37e2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c2a730a7c37e2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:26.862651362 +0000 UTC m=+6.341777593,LastTimestamp:2026-03-12 18:10:27.323439269 +0000 UTC m=+6.802565500,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.071964 master-0 kubenswrapper[4051]: E0312 18:10:39.071906 4051 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c2a732c300c45 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:27.428084805 +0000 UTC m=+6.907211036,LastTimestamp:2026-03-12 18:10:27.428084805 +0000 UTC m=+6.907211036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.075699 master-0 kubenswrapper[4051]: E0312 18:10:39.075639 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c2a7313c4b518\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c2a7313c4b518 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:27.018396952 +0000 UTC m=+6.497523183,LastTimestamp:2026-03-12 18:10:27.576285591 +0000 UTC m=+7.055411822,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.079342 master-0 kubenswrapper[4051]: E0312 18:10:39.079274 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c2a73350c2e73 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:27.576729203 +0000 UTC m=+7.055855424,LastTimestamp:2026-03-12 18:10:27.576729203 +0000 UTC m=+7.055855424,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.102538 master-0 kubenswrapper[4051]: E0312 18:10:39.085342 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c2a7335296eb3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:27.578646195 +0000 UTC 
m=+7.057772426,LastTimestamp:2026-03-12 18:10:27.578646195 +0000 UTC m=+7.057772426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.119535 master-0 kubenswrapper[4051]: E0312 18:10:39.118353 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c2a73144c9868\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c2a73144c9868 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:27.027302504 +0000 UTC m=+6.506428735,LastTimestamp:2026-03-12 18:10:27.676066965 +0000 UTC m=+7.155193196,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.134242 master-0 kubenswrapper[4051]: E0312 18:10:39.134118 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c2a734356270f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:27.816457999 +0000 UTC m=+7.295584230,LastTimestamp:2026-03-12 18:10:27.816457999 +0000 UTC m=+7.295584230,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.134618 master-0 kubenswrapper[4051]: I0312 18:10:39.134596 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 18:10:39.140220 master-0 kubenswrapper[4051]: E0312 18:10:39.140112 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c2a734400dd56 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:27.827645782 +0000 UTC m=+7.306772013,LastTimestamp:2026-03-12 18:10:27.827645782 +0000 UTC m=+7.306772013,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.146366 master-0 kubenswrapper[4051]: E0312 18:10:39.146255 4051 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c2a7361d98917 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:28.328384791 +0000 UTC m=+7.807511022,LastTimestamp:2026-03-12 18:10:28.328384791 +0000 UTC m=+7.807511022,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.151344 master-0 kubenswrapper[4051]: E0312 18:10:39.151168 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c2a7361d98917\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c2a7361d98917 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod 
kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:28.328384791 +0000 UTC m=+7.807511022,LastTimestamp:2026-03-12 18:10:29.3353962 +0000 UTC m=+8.814522461,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.155744 master-0 kubenswrapper[4051]: E0312 18:10:39.155631 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c2a753f7c79b8 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 13.812s (13.812s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:36.341795256 +0000 UTC m=+15.820921487,LastTimestamp:2026-03-12 18:10:36.341795256 +0000 UTC m=+15.820921487,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.159781 master-0 kubenswrapper[4051]: E0312 18:10:39.159662 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c2a754b35c343 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 13.486s (13.486s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:36.538487619 +0000 UTC m=+16.017613870,LastTimestamp:2026-03-12 18:10:36.538487619 +0000 UTC m=+16.017613870,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.163676 master-0 kubenswrapper[4051]: E0312 18:10:39.163578 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c2a754bf42a5c kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:36.550965852 +0000 UTC m=+16.030092093,LastTimestamp:2026-03-12 18:10:36.550965852 +0000 UTC m=+16.030092093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.167413 master-0 kubenswrapper[4051]: E0312 18:10:39.167309 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c2a754ce5f88c kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:36.566812812 +0000 UTC m=+16.045939093,LastTimestamp:2026-03-12 18:10:36.566812812 +0000 UTC m=+16.045939093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.171356 master-0 kubenswrapper[4051]: E0312 18:10:39.171268 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c2a75508e4f51 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 14.123s (14.123s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:36.628176721 +0000 UTC m=+16.107302972,LastTimestamp:2026-03-12 18:10:36.628176721 +0000 UTC m=+16.107302972,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.175745 master-0 kubenswrapper[4051]: E0312 18:10:39.175625 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c2a7556b9f5de kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:36.731700702 +0000 UTC m=+16.210826963,LastTimestamp:2026-03-12 18:10:36.731700702 +0000 UTC m=+16.210826963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.179808 master-0 kubenswrapper[4051]: E0312 18:10:39.179717 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c2a7557892e77 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:36.745281143 +0000 UTC m=+16.224407394,LastTimestamp:2026-03-12 18:10:36.745281143 +0000 UTC m=+16.224407394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.183732 master-0 kubenswrapper[4051]: E0312 18:10:39.183634 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c2a75579822e3 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:36.746261219 +0000 UTC m=+16.225387470,LastTimestamp:2026-03-12 18:10:36.746261219 +0000 UTC m=+16.225387470,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.187863 master-0 kubenswrapper[4051]: E0312 18:10:39.187776 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c2a755e039101 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:36.853965057 +0000 UTC m=+16.333091288,LastTimestamp:2026-03-12 18:10:36.853965057 +0000 UTC m=+16.333091288,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.191603 master-0 kubenswrapper[4051]: E0312 18:10:39.191540 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c2a755eb2f9cd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:36.865460685 +0000 UTC m=+16.344586916,LastTimestamp:2026-03-12 18:10:36.865460685 +0000 UTC m=+16.344586916,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.196008 master-0 kubenswrapper[4051]: E0312 18:10:39.195895 4051 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c2a757c52f324 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:37.362484004 +0000 UTC m=+16.841610275,LastTimestamp:2026-03-12 18:10:37.362484004 +0000 UTC m=+16.841610275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.199801 master-0 kubenswrapper[4051]: E0312 18:10:39.199733 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c2a758a6a6b5e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:37.598903134 +0000 UTC m=+17.078029375,LastTimestamp:2026-03-12 
18:10:37.598903134 +0000 UTC m=+17.078029375,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.204088 master-0 kubenswrapper[4051]: E0312 18:10:39.203973 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c2a758af980f3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:37.608280307 +0000 UTC m=+17.087406548,LastTimestamp:2026-03-12 18:10:37.608280307 +0000 UTC m=+17.087406548,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.223132 master-0 kubenswrapper[4051]: E0312 18:10:39.222999 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c2a758b08c0eb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:37.609279723 +0000 UTC m=+17.088405964,LastTimestamp:2026-03-12 18:10:37.609279723 +0000 UTC m=+17.088405964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:39.228222 master-0 kubenswrapper[4051]: W0312 18:10:39.228183 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 12 18:10:39.228378 master-0 kubenswrapper[4051]: E0312 18:10:39.228346 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 12 18:10:40.121589 master-0 kubenswrapper[4051]: I0312 18:10:40.121531 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 18:10:40.148744 master-0 kubenswrapper[4051]: E0312 18:10:40.148522 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c2a76222142aa kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\" in 3.397s (3.397s including waiting). Image size: 505242594 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:40.144245418 +0000 UTC m=+19.623371649,LastTimestamp:2026-03-12 18:10:40.144245418 +0000 UTC m=+19.623371649,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:40.160490 master-0 kubenswrapper[4051]: E0312 18:10:40.160364 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c2a7622c7fe63 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" in 2.545s (2.545s including waiting). 
Image size: 514980169 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:40.155172451 +0000 UTC m=+19.634298682,LastTimestamp:2026-03-12 18:10:40.155172451 +0000 UTC m=+19.634298682,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:40.299367 master-0 kubenswrapper[4051]: E0312 18:10:40.299249 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c2a762b170054 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:40.29456802 +0000 UTC m=+19.773694251,LastTimestamp:2026-03-12 18:10:40.29456802 +0000 UTC m=+19.773694251,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:40.303558 master-0 kubenswrapper[4051]: E0312 18:10:40.303403 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c2a762b224585 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:40.295306629 +0000 UTC m=+19.774432860,LastTimestamp:2026-03-12 18:10:40.295306629 +0000 UTC m=+19.774432860,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:40.309548 master-0 kubenswrapper[4051]: E0312 18:10:40.309404 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c2a762bb4aeb0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:40.304901808 +0000 UTC m=+19.784028049,LastTimestamp:2026-03-12 18:10:40.304901808 +0000 UTC m=+19.784028049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:40.313454 master-0 kubenswrapper[4051]: E0312 18:10:40.313393 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c2a762bc69c4c kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:40.306076748 +0000 UTC m=+19.785202979,LastTimestamp:2026-03-12 18:10:40.306076748 +0000 UTC m=+19.785202979,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:40.369806 master-0 kubenswrapper[4051]: I0312 18:10:40.369762 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"1b41120246139f832c6fce447150fed26bcd9a47dc2f49808aa8f04449aadbb6"} Mar 12 18:10:40.370001 master-0 kubenswrapper[4051]: I0312 18:10:40.369826 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 18:10:40.370482 master-0 kubenswrapper[4051]: I0312 18:10:40.370455 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 18:10:40.370482 master-0 kubenswrapper[4051]: I0312 18:10:40.370489 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 18:10:40.370590 master-0 kubenswrapper[4051]: I0312 18:10:40.370501 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 18:10:40.371796 master-0 kubenswrapper[4051]: I0312 18:10:40.371735 
4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"46264a9af82e695b6d4dff6d1f55cd807b8535bd3965ef4182b223f7b1e2e6f6"} Mar 12 18:10:40.371845 master-0 kubenswrapper[4051]: I0312 18:10:40.371805 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 18:10:40.372384 master-0 kubenswrapper[4051]: I0312 18:10:40.372362 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 18:10:40.372427 master-0 kubenswrapper[4051]: I0312 18:10:40.372384 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 18:10:40.372427 master-0 kubenswrapper[4051]: I0312 18:10:40.372396 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 18:10:40.372628 master-0 kubenswrapper[4051]: I0312 18:10:40.372609 4051 scope.go:117] "RemoveContainer" containerID="d26315a3bde904b11ba5d9d409301a02b1633a540a8d5ec716c4b45d6b097f49" Mar 12 18:10:40.381053 master-0 kubenswrapper[4051]: E0312 18:10:40.380925 4051 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c2a762fdd0114 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:40.374653204 +0000 UTC m=+19.853779435,LastTimestamp:2026-03-12 18:10:40.374653204 +0000 UTC m=+19.853779435,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:40.463912 master-0 kubenswrapper[4051]: I0312 18:10:40.463843 4051 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:10:40.463912 master-0 kubenswrapper[4051]: I0312 18:10:40.463916 4051 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:10:40.568361 master-0 kubenswrapper[4051]: E0312 18:10:40.568217 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189c2a7556b9f5de\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c2a7556b9f5de kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:36.731700702 +0000 UTC m=+16.210826963,LastTimestamp:2026-03-12 18:10:40.562924999 +0000 UTC m=+20.042051230,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:40.582423 master-0 kubenswrapper[4051]: E0312 18:10:40.582315 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189c2a7557892e77\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c2a7557892e77 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:36.745281143 +0000 UTC m=+16.224407394,LastTimestamp:2026-03-12 18:10:40.578004629 +0000 UTC m=+20.057130860,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:40.643258 master-0 kubenswrapper[4051]: I0312 18:10:40.643130 4051 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:10:40.746532 master-0 kubenswrapper[4051]: E0312 18:10:40.746462 4051 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 18:10:40.969385 master-0 kubenswrapper[4051]: I0312 18:10:40.969199 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 18:10:40.970384 master-0 
kubenswrapper[4051]: I0312 18:10:40.970355 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 18:10:40.970459 master-0 kubenswrapper[4051]: I0312 18:10:40.970394 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 18:10:40.970459 master-0 kubenswrapper[4051]: I0312 18:10:40.970405 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 18:10:40.970459 master-0 kubenswrapper[4051]: I0312 18:10:40.970460 4051 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 12 18:10:40.974994 master-0 kubenswrapper[4051]: E0312 18:10:40.974965 4051 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 12 18:10:41.124462 master-0 kubenswrapper[4051]: I0312 18:10:41.124413 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 18:10:41.185758 master-0 kubenswrapper[4051]: W0312 18:10:41.185711 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 12 18:10:41.186138 master-0 kubenswrapper[4051]: E0312 18:10:41.186081 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 12 18:10:41.250801 master-0 
kubenswrapper[4051]: E0312 18:10:41.250630 4051 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 12 18:10:41.280596 master-0 kubenswrapper[4051]: I0312 18:10:41.280510 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 18:10:41.281433 master-0 kubenswrapper[4051]: I0312 18:10:41.281405 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 18:10:41.281639 master-0 kubenswrapper[4051]: I0312 18:10:41.281617 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 18:10:41.281766 master-0 kubenswrapper[4051]: I0312 18:10:41.281747 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 18:10:41.282368 master-0 kubenswrapper[4051]: I0312 18:10:41.282345 4051 scope.go:117] "RemoveContainer" containerID="1f5721d1474673f48d2f2fd384518823e4b1a13194287e15de08ed375d1a67dd" Mar 12 18:10:41.292368 master-0 kubenswrapper[4051]: E0312 18:10:41.292254 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c2a730a7c37e2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c2a730a7c37e2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:26.862651362 +0000 UTC m=+6.341777593,LastTimestamp:2026-03-12 18:10:41.285173788 +0000 UTC m=+20.764300019,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:41.379581 master-0 kubenswrapper[4051]: I0312 18:10:41.379450 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 18:10:41.380180 master-0 kubenswrapper[4051]: I0312 18:10:41.380138 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 18:10:41.380661 master-0 kubenswrapper[4051]: I0312 18:10:41.380607 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"95d38f1431066968104bb51ab17f5c680fc28063e6ba5f01ad252c4fc619c1e1"} Mar 12 18:10:41.381283 master-0 kubenswrapper[4051]: I0312 18:10:41.381230 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 18:10:41.381283 master-0 kubenswrapper[4051]: I0312 18:10:41.381278 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 18:10:41.381477 master-0 kubenswrapper[4051]: I0312 18:10:41.381310 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 18:10:41.381477 master-0 kubenswrapper[4051]: I0312 18:10:41.381323 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 18:10:41.381477 master-0 kubenswrapper[4051]: 
I0312 18:10:41.381287 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 18:10:41.381477 master-0 kubenswrapper[4051]: I0312 18:10:41.381404 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 18:10:41.531995 master-0 kubenswrapper[4051]: E0312 18:10:41.531870 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c2a7313c4b518\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c2a7313c4b518 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:27.018396952 +0000 UTC m=+6.497523183,LastTimestamp:2026-03-12 18:10:41.526780594 +0000 UTC m=+21.005906825,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:10:41.539401 master-0 kubenswrapper[4051]: E0312 18:10:41.539310 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c2a73144c9868\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c2a73144c9868 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:27.027302504 +0000 UTC m=+6.506428735,LastTimestamp:2026-03-12 18:10:41.535100979 +0000 UTC m=+21.014227210,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:41.751876 master-0 kubenswrapper[4051]: I0312 18:10:41.751688 4051 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:41.772457 master-0 kubenswrapper[4051]: W0312 18:10:41.772392 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Mar 12 18:10:41.772666 master-0 kubenswrapper[4051]: E0312 18:10:41.772463 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Mar 12 18:10:42.123240 master-0 kubenswrapper[4051]: I0312 18:10:42.123095 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 18:10:42.383079 master-0 kubenswrapper[4051]: I0312 18:10:42.382927 4051 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 12 18:10:42.383936 master-0 kubenswrapper[4051]: I0312 18:10:42.383760 4051 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log"
Mar 12 18:10:42.384345 master-0 kubenswrapper[4051]: I0312 18:10:42.384279 4051 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="6dd8882d32973842f51263a7b3bcb6e8d811fa6ee097841835e846b316ee2f90" exitCode=1
Mar 12 18:10:42.384345 master-0 kubenswrapper[4051]: I0312 18:10:42.384334 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"6dd8882d32973842f51263a7b3bcb6e8d811fa6ee097841835e846b316ee2f90"}
Mar 12 18:10:42.384603 master-0 kubenswrapper[4051]: I0312 18:10:42.384389 4051 scope.go:117] "RemoveContainer" containerID="1f5721d1474673f48d2f2fd384518823e4b1a13194287e15de08ed375d1a67dd"
Mar 12 18:10:42.384603 master-0 kubenswrapper[4051]: I0312 18:10:42.384502 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:42.384771 master-0 kubenswrapper[4051]: I0312 18:10:42.384530 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:42.385066 master-0 kubenswrapper[4051]: I0312 18:10:42.384641 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:42.385443 master-0 kubenswrapper[4051]: I0312 18:10:42.385409 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:42.385443 master-0 kubenswrapper[4051]: I0312 18:10:42.385440 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:42.385687 master-0 kubenswrapper[4051]: I0312 18:10:42.385452 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:42.385687 master-0 kubenswrapper[4051]: I0312 18:10:42.385479 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:42.385687 master-0 kubenswrapper[4051]: I0312 18:10:42.385498 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:42.385687 master-0 kubenswrapper[4051]: I0312 18:10:42.385509 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:42.385982 master-0 kubenswrapper[4051]: I0312 18:10:42.385786 4051 scope.go:117] "RemoveContainer" containerID="6dd8882d32973842f51263a7b3bcb6e8d811fa6ee097841835e846b316ee2f90"
Mar 12 18:10:42.385982 master-0 kubenswrapper[4051]: E0312 18:10:42.385948 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72"
Mar 12 18:10:42.387001 master-0 kubenswrapper[4051]: I0312 18:10:42.386554 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:42.387001 master-0 kubenswrapper[4051]: I0312 18:10:42.386582 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:42.387001 master-0 kubenswrapper[4051]: I0312 18:10:42.386591 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:42.394738 master-0 kubenswrapper[4051]: E0312 18:10:42.394406 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c2a7361d98917\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c2a7361d98917 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:28.328384791 +0000 UTC m=+7.807511022,LastTimestamp:2026-03-12 18:10:42.385914598 +0000 UTC m=+21.865040829,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:43.122371 master-0 kubenswrapper[4051]: I0312 18:10:43.122313 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 18:10:43.389303 master-0 kubenswrapper[4051]: I0312 18:10:43.389254 4051 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 12 18:10:43.389930 master-0 kubenswrapper[4051]: I0312 18:10:43.389890 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:43.391085 master-0 kubenswrapper[4051]: I0312 18:10:43.391051 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:43.391145 master-0 kubenswrapper[4051]: I0312 18:10:43.391092 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:43.391145 master-0 kubenswrapper[4051]: I0312 18:10:43.391109 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:44.022020 master-0 kubenswrapper[4051]: I0312 18:10:44.021882 4051 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:44.022292 master-0 kubenswrapper[4051]: I0312 18:10:44.022100 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:44.023427 master-0 kubenswrapper[4051]: I0312 18:10:44.023370 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:44.023427 master-0 kubenswrapper[4051]: I0312 18:10:44.023412 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:44.023427 master-0 kubenswrapper[4051]: I0312 18:10:44.023422 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:44.047175 master-0 kubenswrapper[4051]: I0312 18:10:44.047072 4051 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:44.127363 master-0 kubenswrapper[4051]: I0312 18:10:44.127290 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 18:10:44.392340 master-0 kubenswrapper[4051]: I0312 18:10:44.392236 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:44.393771 master-0 kubenswrapper[4051]: I0312 18:10:44.393687 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:44.393933 master-0 kubenswrapper[4051]: I0312 18:10:44.393807 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:44.393933 master-0 kubenswrapper[4051]: I0312 18:10:44.393830 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:44.400632 master-0 kubenswrapper[4051]: I0312 18:10:44.400596 4051 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 18:10:45.125684 master-0 kubenswrapper[4051]: I0312 18:10:45.125569 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 18:10:45.395206 master-0 kubenswrapper[4051]: I0312 18:10:45.395075 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:45.395847 master-0 kubenswrapper[4051]: I0312 18:10:45.395797 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:45.395925 master-0 kubenswrapper[4051]: I0312 18:10:45.395851 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:45.395925 master-0 kubenswrapper[4051]: I0312 18:10:45.395869 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:46.125189 master-0 kubenswrapper[4051]: I0312 18:10:46.125101 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 18:10:47.125759 master-0 kubenswrapper[4051]: I0312 18:10:47.125653 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 18:10:47.755369 master-0 kubenswrapper[4051]: E0312 18:10:47.755255 4051 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 12 18:10:47.975934 master-0 kubenswrapper[4051]: I0312 18:10:47.975823 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:47.977318 master-0 kubenswrapper[4051]: I0312 18:10:47.977247 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:47.977318 master-0 kubenswrapper[4051]: I0312 18:10:47.977308 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:47.977504 master-0 kubenswrapper[4051]: I0312 18:10:47.977328 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:47.977504 master-0 kubenswrapper[4051]: I0312 18:10:47.977398 4051 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 18:10:47.986998 master-0 kubenswrapper[4051]: E0312 18:10:47.986928 4051 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 12 18:10:48.091739 master-0 kubenswrapper[4051]: I0312 18:10:48.091637 4051 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:48.091901 master-0 kubenswrapper[4051]: I0312 18:10:48.091845 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:48.093388 master-0 kubenswrapper[4051]: I0312 18:10:48.093292 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:48.093388 master-0 kubenswrapper[4051]: I0312 18:10:48.093384 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:48.093663 master-0 kubenswrapper[4051]: I0312 18:10:48.093405 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:48.099215 master-0 kubenswrapper[4051]: I0312 18:10:48.099155 4051 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:48.125760 master-0 kubenswrapper[4051]: I0312 18:10:48.125663 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 18:10:48.403902 master-0 kubenswrapper[4051]: I0312 18:10:48.403753 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:48.403902 master-0 kubenswrapper[4051]: I0312 18:10:48.403826 4051 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:48.405003 master-0 kubenswrapper[4051]: I0312 18:10:48.404948 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:48.405082 master-0 kubenswrapper[4051]: I0312 18:10:48.405011 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:48.405082 master-0 kubenswrapper[4051]: I0312 18:10:48.405037 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:49.127008 master-0 kubenswrapper[4051]: I0312 18:10:49.126940 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 18:10:49.406406 master-0 kubenswrapper[4051]: I0312 18:10:49.406281 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:49.407546 master-0 kubenswrapper[4051]: I0312 18:10:49.407461 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:49.407546 master-0 kubenswrapper[4051]: I0312 18:10:49.407509 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:49.407756 master-0 kubenswrapper[4051]: I0312 18:10:49.407557 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:50.125556 master-0 kubenswrapper[4051]: I0312 18:10:50.125453 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 18:10:50.464965 master-0 kubenswrapper[4051]: I0312 18:10:50.464813 4051 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:50.465474 master-0 kubenswrapper[4051]: I0312 18:10:50.465400 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:50.466888 master-0 kubenswrapper[4051]: I0312 18:10:50.466836 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:50.466958 master-0 kubenswrapper[4051]: I0312 18:10:50.466899 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:50.466958 master-0 kubenswrapper[4051]: I0312 18:10:50.466917 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:50.471992 master-0 kubenswrapper[4051]: I0312 18:10:50.471943 4051 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:50.473284 master-0 kubenswrapper[4051]: I0312 18:10:50.473227 4051 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:51.125203 master-0 kubenswrapper[4051]: I0312 18:10:51.125112 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 18:10:51.251393 master-0 kubenswrapper[4051]: E0312 18:10:51.251317 4051 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 12 18:10:51.410795 master-0 kubenswrapper[4051]: I0312 18:10:51.410668 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:51.411672 master-0 kubenswrapper[4051]: I0312 18:10:51.411637 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:51.411742 master-0 kubenswrapper[4051]: I0312 18:10:51.411676 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:51.411742 master-0 kubenswrapper[4051]: I0312 18:10:51.411691 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:51.415826 master-0 kubenswrapper[4051]: I0312 18:10:51.415785 4051 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:10:52.125262 master-0 kubenswrapper[4051]: I0312 18:10:52.125204 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 18:10:52.413193 master-0 kubenswrapper[4051]: I0312 18:10:52.413046 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:52.414295 master-0 kubenswrapper[4051]: I0312 18:10:52.414235 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:52.414295 master-0 kubenswrapper[4051]: I0312 18:10:52.414288 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:52.414551 master-0 kubenswrapper[4051]: I0312 18:10:52.414306 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:52.872249 master-0 kubenswrapper[4051]: W0312 18:10:52.872128 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Mar 12 18:10:52.872249 master-0 kubenswrapper[4051]: E0312 18:10:52.872194 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 12 18:10:53.126270 master-0 kubenswrapper[4051]: I0312 18:10:53.126122 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 18:10:53.280881 master-0 kubenswrapper[4051]: I0312 18:10:53.280784 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:53.282197 master-0 kubenswrapper[4051]: I0312 18:10:53.282154 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:53.282197 master-0 kubenswrapper[4051]: I0312 18:10:53.282207 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:53.282316 master-0 kubenswrapper[4051]: I0312 18:10:53.282219 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:53.282655 master-0 kubenswrapper[4051]: I0312 18:10:53.282626 4051 scope.go:117] "RemoveContainer" containerID="6dd8882d32973842f51263a7b3bcb6e8d811fa6ee097841835e846b316ee2f90"
Mar 12 18:10:53.282851 master-0 kubenswrapper[4051]: E0312 18:10:53.282817 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72"
Mar 12 18:10:53.291178 master-0 kubenswrapper[4051]: E0312 18:10:53.290987 4051 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c2a7361d98917\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c2a7361d98917 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:10:28.328384791 +0000 UTC m=+7.807511022,LastTimestamp:2026-03-12 18:10:53.282783595 +0000 UTC m=+32.761909836,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:10:53.325749 master-0 kubenswrapper[4051]: I0312 18:10:53.325680 4051 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 12 18:10:53.344371 master-0 kubenswrapper[4051]: I0312 18:10:53.344314 4051 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Mar 12 18:10:53.415298 master-0 kubenswrapper[4051]: I0312 18:10:53.415108 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:53.416204 master-0 kubenswrapper[4051]: I0312 18:10:53.416151 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:53.416337 master-0 kubenswrapper[4051]: I0312 18:10:53.416214 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:53.416337 master-0 kubenswrapper[4051]: I0312 18:10:53.416232 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:54.124946 master-0 kubenswrapper[4051]: I0312 18:10:54.124833 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 18:10:54.764631 master-0 kubenswrapper[4051]: E0312 18:10:54.764413 4051 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 12 18:10:54.987282 master-0 kubenswrapper[4051]: I0312 18:10:54.987185 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:10:54.988900 master-0 kubenswrapper[4051]: I0312 18:10:54.988844 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:10:54.989024 master-0 kubenswrapper[4051]: I0312 18:10:54.988912 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:10:54.989024 master-0 kubenswrapper[4051]: I0312 18:10:54.988936 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:10:54.989024 master-0 kubenswrapper[4051]: I0312 18:10:54.989013 4051 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 18:10:54.996454 master-0 kubenswrapper[4051]: E0312 18:10:54.996385 4051 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 12 18:10:55.125956 master-0 kubenswrapper[4051]: I0312 18:10:55.125866 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 18:10:56.123683 master-0 kubenswrapper[4051]: I0312 18:10:56.123347 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 18:10:57.121447 master-0 kubenswrapper[4051]: I0312 18:10:57.121399 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 18:10:58.123318 master-0 kubenswrapper[4051]: I0312 18:10:58.123253 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 18:10:59.121667 master-0 kubenswrapper[4051]: I0312 18:10:59.121610 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 18:11:00.123893 master-0 kubenswrapper[4051]: I0312 18:11:00.123772 4051 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 18:11:00.429691 master-0 kubenswrapper[4051]: W0312 18:11:00.429640 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Mar 12 18:11:00.429892 master-0 kubenswrapper[4051]: E0312 18:11:00.429684 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Mar 12 18:11:00.535939 master-0 kubenswrapper[4051]: W0312 18:11:00.535898 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Mar 12 18:11:00.536159 master-0 kubenswrapper[4051]: E0312 18:11:00.535948 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Mar 12 18:11:00.579867 master-0 kubenswrapper[4051]: W0312 18:11:00.579810 4051 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Mar 12 18:11:00.579867 master-0 kubenswrapper[4051]: E0312 18:11:00.579863 4051 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 12 18:11:00.610418 master-0 kubenswrapper[4051]: I0312 18:11:00.610373 4051 csr.go:261] certificate signing request csr-2j2v8 is approved, waiting to be issued
Mar 12 18:11:00.617626 master-0 kubenswrapper[4051]: I0312 18:11:00.617432 4051 csr.go:257] certificate signing request csr-2j2v8 is issued
Mar 12 18:11:00.995870 master-0 kubenswrapper[4051]: I0312 18:11:00.995774 4051 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Mar 12 18:11:01.126206 master-0 kubenswrapper[4051]: I0312 18:11:01.126169 4051 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 18:11:01.141578 master-0 kubenswrapper[4051]: I0312 18:11:01.141535 4051 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 18:11:01.198496 master-0 kubenswrapper[4051]: I0312 18:11:01.198449 4051 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 18:11:01.252232 master-0 kubenswrapper[4051]: E0312 18:11:01.252082 4051 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 12 18:11:01.460071 master-0 kubenswrapper[4051]: I0312 18:11:01.460029 4051 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 18:11:01.460071 master-0 kubenswrapper[4051]: E0312 18:11:01.460073 4051 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 12 18:11:01.482326 master-0 kubenswrapper[4051]: I0312 18:11:01.482278 4051 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 18:11:01.498605 master-0 kubenswrapper[4051]: I0312 18:11:01.498559 4051 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 18:11:01.555678 master-0 kubenswrapper[4051]: I0312 18:11:01.555623 4051 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 18:11:01.619465 master-0 kubenswrapper[4051]: I0312 18:11:01.619394 4051 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-13 18:02:20 +0000 UTC, rotation deadline is 2026-03-13 13:38:12.970917803 +0000 UTC
Mar 12 18:11:01.619465 master-0 kubenswrapper[4051]: I0312 18:11:01.619447 4051 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h27m11.351474866s for next certificate rotation
Mar 12 18:11:01.771302 master-0 kubenswrapper[4051]: E0312 18:11:01.771238 4051 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0"
Mar 12 18:11:01.820481 master-0 kubenswrapper[4051]: I0312 18:11:01.820357 4051 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 18:11:01.820481 master-0 kubenswrapper[4051]: E0312 18:11:01.820407 4051 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 12 18:11:01.920358 master-0 kubenswrapper[4051]: I0312 18:11:01.920315 4051 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 18:11:01.938323 master-0 kubenswrapper[4051]: I0312 18:11:01.938275 4051 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 18:11:01.996586 master-0 kubenswrapper[4051]: I0312 18:11:01.996502 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:11:01.998055 master-0 kubenswrapper[4051]: I0312 18:11:01.997991 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:11:01.998202 master-0 kubenswrapper[4051]: I0312 18:11:01.998111 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:11:01.998202 master-0 kubenswrapper[4051]: I0312 18:11:01.998144 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:11:01.998334 master-0 kubenswrapper[4051]: I0312 18:11:01.998232 4051 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 18:11:01.999747 master-0 kubenswrapper[4051]: I0312 18:11:01.999710 4051 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 18:11:02.010556 master-0 kubenswrapper[4051]: I0312 18:11:02.010456 4051 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 12 18:11:02.010556 master-0 kubenswrapper[4051]: E0312 18:11:02.010508 4051 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node
\"master-0\" not found" Mar 12 18:11:02.026223 master-0 kubenswrapper[4051]: E0312 18:11:02.026160 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:02.127200 master-0 kubenswrapper[4051]: E0312 18:11:02.127020 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:02.227769 master-0 kubenswrapper[4051]: E0312 18:11:02.227598 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:02.294167 master-0 kubenswrapper[4051]: I0312 18:11:02.294098 4051 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 12 18:11:02.305816 master-0 kubenswrapper[4051]: I0312 18:11:02.305762 4051 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 12 18:11:02.328674 master-0 kubenswrapper[4051]: E0312 18:11:02.328617 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:02.429635 master-0 kubenswrapper[4051]: E0312 18:11:02.429440 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:02.530743 master-0 kubenswrapper[4051]: E0312 18:11:02.530648 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:02.631733 master-0 kubenswrapper[4051]: E0312 18:11:02.631651 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:02.732558 master-0 kubenswrapper[4051]: E0312 18:11:02.732356 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:02.833239 master-0 kubenswrapper[4051]: E0312 18:11:02.833182 4051 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:02.933928 master-0 kubenswrapper[4051]: E0312 18:11:02.933877 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:03.034847 master-0 kubenswrapper[4051]: E0312 18:11:03.034724 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:03.135641 master-0 kubenswrapper[4051]: E0312 18:11:03.135602 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:03.236413 master-0 kubenswrapper[4051]: E0312 18:11:03.236362 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:03.336957 master-0 kubenswrapper[4051]: E0312 18:11:03.336891 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:03.438099 master-0 kubenswrapper[4051]: E0312 18:11:03.438009 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:03.539034 master-0 kubenswrapper[4051]: E0312 18:11:03.538978 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:03.640072 master-0 kubenswrapper[4051]: E0312 18:11:03.639952 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:03.740759 master-0 kubenswrapper[4051]: E0312 18:11:03.740664 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:03.841558 master-0 kubenswrapper[4051]: E0312 18:11:03.841480 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:03.942896 master-0 
kubenswrapper[4051]: E0312 18:11:03.942777 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:04.044331 master-0 kubenswrapper[4051]: E0312 18:11:04.044236 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:04.144768 master-0 kubenswrapper[4051]: E0312 18:11:04.144679 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:04.245744 master-0 kubenswrapper[4051]: E0312 18:11:04.245573 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:04.346107 master-0 kubenswrapper[4051]: E0312 18:11:04.345990 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:04.446305 master-0 kubenswrapper[4051]: E0312 18:11:04.446222 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:04.546958 master-0 kubenswrapper[4051]: E0312 18:11:04.546868 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:04.648014 master-0 kubenswrapper[4051]: E0312 18:11:04.647901 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:04.748804 master-0 kubenswrapper[4051]: E0312 18:11:04.748697 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:04.849686 master-0 kubenswrapper[4051]: E0312 18:11:04.849543 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:04.950548 master-0 kubenswrapper[4051]: E0312 18:11:04.950451 4051 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"master-0\" not found" Mar 12 18:11:05.050736 master-0 kubenswrapper[4051]: E0312 18:11:05.050663 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:05.152005 master-0 kubenswrapper[4051]: E0312 18:11:05.151837 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:05.252942 master-0 kubenswrapper[4051]: E0312 18:11:05.252856 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:05.354002 master-0 kubenswrapper[4051]: E0312 18:11:05.353940 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:05.454658 master-0 kubenswrapper[4051]: E0312 18:11:05.454549 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:05.554900 master-0 kubenswrapper[4051]: E0312 18:11:05.554847 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:05.655190 master-0 kubenswrapper[4051]: E0312 18:11:05.655134 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:05.755856 master-0 kubenswrapper[4051]: E0312 18:11:05.755704 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:05.856296 master-0 kubenswrapper[4051]: E0312 18:11:05.856215 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:05.957336 master-0 kubenswrapper[4051]: E0312 18:11:05.957211 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:06.057721 master-0 kubenswrapper[4051]: E0312 18:11:06.057607 4051 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:06.157971 master-0 kubenswrapper[4051]: E0312 18:11:06.157813 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:06.258106 master-0 kubenswrapper[4051]: E0312 18:11:06.258024 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:06.359334 master-0 kubenswrapper[4051]: E0312 18:11:06.359171 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:06.459616 master-0 kubenswrapper[4051]: E0312 18:11:06.459422 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:06.560118 master-0 kubenswrapper[4051]: E0312 18:11:06.560041 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:06.661252 master-0 kubenswrapper[4051]: E0312 18:11:06.661101 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:06.761988 master-0 kubenswrapper[4051]: E0312 18:11:06.761917 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:06.862669 master-0 kubenswrapper[4051]: E0312 18:11:06.862577 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:06.963580 master-0 kubenswrapper[4051]: E0312 18:11:06.963366 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:07.063950 master-0 kubenswrapper[4051]: E0312 18:11:07.063858 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:07.164420 
master-0 kubenswrapper[4051]: E0312 18:11:07.164325 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:07.264595 master-0 kubenswrapper[4051]: E0312 18:11:07.264442 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:07.280946 master-0 kubenswrapper[4051]: I0312 18:11:07.280876 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 18:11:07.282182 master-0 kubenswrapper[4051]: I0312 18:11:07.282131 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 18:11:07.282273 master-0 kubenswrapper[4051]: I0312 18:11:07.282194 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 18:11:07.282273 master-0 kubenswrapper[4051]: I0312 18:11:07.282221 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 18:11:07.283029 master-0 kubenswrapper[4051]: I0312 18:11:07.282974 4051 scope.go:117] "RemoveContainer" containerID="6dd8882d32973842f51263a7b3bcb6e8d811fa6ee097841835e846b316ee2f90" Mar 12 18:11:07.364814 master-0 kubenswrapper[4051]: E0312 18:11:07.364765 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:07.465096 master-0 kubenswrapper[4051]: E0312 18:11:07.465040 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:07.566037 master-0 kubenswrapper[4051]: E0312 18:11:07.565954 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:07.666947 master-0 kubenswrapper[4051]: E0312 18:11:07.666865 4051 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"master-0\" not found" Mar 12 18:11:07.767544 master-0 kubenswrapper[4051]: E0312 18:11:07.767474 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:07.868249 master-0 kubenswrapper[4051]: E0312 18:11:07.868165 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:07.968929 master-0 kubenswrapper[4051]: E0312 18:11:07.968828 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:08.069229 master-0 kubenswrapper[4051]: E0312 18:11:08.069153 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:08.169853 master-0 kubenswrapper[4051]: E0312 18:11:08.169690 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:08.270598 master-0 kubenswrapper[4051]: E0312 18:11:08.270548 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:08.371529 master-0 kubenswrapper[4051]: E0312 18:11:08.371469 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:08.467890 master-0 kubenswrapper[4051]: I0312 18:11:08.467783 4051 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/3.log" Mar 12 18:11:08.468865 master-0 kubenswrapper[4051]: I0312 18:11:08.468727 4051 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 12 18:11:08.469499 master-0 kubenswrapper[4051]: I0312 18:11:08.469439 4051 generic.go:334] "Generic (PLEG): container 
finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="f4576abbe7a90b1b20fca15403fa958348685c536308da122d58a74078454e59" exitCode=1 Mar 12 18:11:08.469499 master-0 kubenswrapper[4051]: I0312 18:11:08.469494 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"f4576abbe7a90b1b20fca15403fa958348685c536308da122d58a74078454e59"} Mar 12 18:11:08.469718 master-0 kubenswrapper[4051]: I0312 18:11:08.469561 4051 scope.go:117] "RemoveContainer" containerID="6dd8882d32973842f51263a7b3bcb6e8d811fa6ee097841835e846b316ee2f90" Mar 12 18:11:08.469779 master-0 kubenswrapper[4051]: I0312 18:11:08.469726 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 18:11:08.470802 master-0 kubenswrapper[4051]: I0312 18:11:08.470771 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 18:11:08.470911 master-0 kubenswrapper[4051]: I0312 18:11:08.470815 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 18:11:08.470911 master-0 kubenswrapper[4051]: I0312 18:11:08.470828 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 18:11:08.471234 master-0 kubenswrapper[4051]: I0312 18:11:08.471206 4051 scope.go:117] "RemoveContainer" containerID="f4576abbe7a90b1b20fca15403fa958348685c536308da122d58a74078454e59" Mar 12 18:11:08.471566 master-0 kubenswrapper[4051]: E0312 18:11:08.471493 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 12 18:11:08.471770 master-0 kubenswrapper[4051]: E0312 18:11:08.471733 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:08.572773 master-0 kubenswrapper[4051]: E0312 18:11:08.572687 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:08.673245 master-0 kubenswrapper[4051]: E0312 18:11:08.673172 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:08.774389 master-0 kubenswrapper[4051]: E0312 18:11:08.774222 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:08.875424 master-0 kubenswrapper[4051]: E0312 18:11:08.875366 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:08.976307 master-0 kubenswrapper[4051]: E0312 18:11:08.976249 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:09.076894 master-0 kubenswrapper[4051]: E0312 18:11:09.076851 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:09.177483 master-0 kubenswrapper[4051]: E0312 18:11:09.177392 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:09.278041 master-0 kubenswrapper[4051]: E0312 18:11:09.277972 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:09.378930 master-0 kubenswrapper[4051]: E0312 18:11:09.378795 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 
18:11:09.472540 master-0 kubenswrapper[4051]: I0312 18:11:09.472489 4051 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/3.log" Mar 12 18:11:09.479373 master-0 kubenswrapper[4051]: E0312 18:11:09.479302 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:09.579547 master-0 kubenswrapper[4051]: E0312 18:11:09.579445 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:09.680845 master-0 kubenswrapper[4051]: E0312 18:11:09.680654 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:09.782024 master-0 kubenswrapper[4051]: E0312 18:11:09.781884 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:09.882798 master-0 kubenswrapper[4051]: E0312 18:11:09.882678 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:09.983806 master-0 kubenswrapper[4051]: E0312 18:11:09.983596 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:10.084902 master-0 kubenswrapper[4051]: E0312 18:11:10.084831 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:10.185701 master-0 kubenswrapper[4051]: E0312 18:11:10.185649 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:10.286574 master-0 kubenswrapper[4051]: E0312 18:11:10.286392 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:10.387268 master-0 kubenswrapper[4051]: E0312 
18:11:10.387191 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:10.487963 master-0 kubenswrapper[4051]: E0312 18:11:10.487916 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:10.588761 master-0 kubenswrapper[4051]: E0312 18:11:10.588735 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:10.689155 master-0 kubenswrapper[4051]: E0312 18:11:10.689096 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:10.790067 master-0 kubenswrapper[4051]: E0312 18:11:10.790019 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:10.890856 master-0 kubenswrapper[4051]: E0312 18:11:10.890749 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:10.991880 master-0 kubenswrapper[4051]: E0312 18:11:10.991838 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:11.063713 master-0 kubenswrapper[4051]: I0312 18:11:11.063672 4051 csr.go:261] certificate signing request csr-nggpw is approved, waiting to be issued Mar 12 18:11:11.072857 master-0 kubenswrapper[4051]: I0312 18:11:11.072816 4051 csr.go:257] certificate signing request csr-nggpw is issued Mar 12 18:11:11.092318 master-0 kubenswrapper[4051]: E0312 18:11:11.092248 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:11.192773 master-0 kubenswrapper[4051]: E0312 18:11:11.192639 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:11.252439 master-0 kubenswrapper[4051]: E0312 
18:11:11.252399 4051 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 12 18:11:11.293563 master-0 kubenswrapper[4051]: E0312 18:11:11.293507 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:11.394552 master-0 kubenswrapper[4051]: E0312 18:11:11.394488 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:11.495626 master-0 kubenswrapper[4051]: E0312 18:11:11.495432 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:11.596259 master-0 kubenswrapper[4051]: E0312 18:11:11.596219 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:11.696714 master-0 kubenswrapper[4051]: E0312 18:11:11.696627 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:11.797817 master-0 kubenswrapper[4051]: E0312 18:11:11.797733 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:11.898883 master-0 kubenswrapper[4051]: E0312 18:11:11.898783 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:11.999875 master-0 kubenswrapper[4051]: E0312 18:11:11.999816 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:12.074999 master-0 kubenswrapper[4051]: I0312 18:11:12.074820 4051 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-13 18:02:20 +0000 UTC, rotation deadline is 2026-03-13 10:56:57.86614092 +0000 UTC Mar 12 18:11:12.074999 master-0 kubenswrapper[4051]: I0312 18:11:12.074894 4051 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 16h45m45.791251179s for next certificate rotation Mar 12 18:11:12.100300 master-0 kubenswrapper[4051]: E0312 18:11:12.100162 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:12.200421 master-0 kubenswrapper[4051]: E0312 18:11:12.200335 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:12.262360 master-0 kubenswrapper[4051]: E0312 18:11:12.262256 4051 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found" Mar 12 18:11:12.301035 master-0 kubenswrapper[4051]: E0312 18:11:12.300876 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:12.401381 master-0 kubenswrapper[4051]: E0312 18:11:12.401235 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:12.501830 master-0 kubenswrapper[4051]: E0312 18:11:12.501729 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:12.602634 master-0 kubenswrapper[4051]: E0312 18:11:12.602466 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:12.703409 master-0 kubenswrapper[4051]: E0312 18:11:12.703182 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:12.803968 master-0 kubenswrapper[4051]: E0312 18:11:12.803836 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:12.904889 master-0 kubenswrapper[4051]: E0312 18:11:12.904772 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"master-0\" not found" Mar 12 18:11:13.005968 master-0 kubenswrapper[4051]: E0312 18:11:13.005761 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:13.075873 master-0 kubenswrapper[4051]: I0312 18:11:13.075731 4051 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-13 18:02:20 +0000 UTC, rotation deadline is 2026-03-13 10:56:02.188391523 +0000 UTC Mar 12 18:11:13.075873 master-0 kubenswrapper[4051]: I0312 18:11:13.075767 4051 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 16h44m49.112626572s for next certificate rotation Mar 12 18:11:13.106190 master-0 kubenswrapper[4051]: E0312 18:11:13.106102 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:13.207186 master-0 kubenswrapper[4051]: E0312 18:11:13.207090 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:13.307585 master-0 kubenswrapper[4051]: E0312 18:11:13.307399 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:13.408672 master-0 kubenswrapper[4051]: E0312 18:11:13.408487 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:13.509328 master-0 kubenswrapper[4051]: E0312 18:11:13.509252 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:13.609622 master-0 kubenswrapper[4051]: E0312 18:11:13.609422 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:13.710438 master-0 kubenswrapper[4051]: E0312 18:11:13.710336 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:13.811693 
master-0 kubenswrapper[4051]: E0312 18:11:13.811611 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
[... same kubelet_node_status.go:503 entry repeated at ~100 ms intervals, Mar 12 18:11:13.912899 through Mar 12 18:11:18.968425 ...]
Mar 12 18:11:18.982139 master-0 kubenswrapper[4051]: I0312 18:11:18.982045 4051 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 12 18:11:19.069226 master-0 kubenswrapper[4051]: E0312 18:11:19.069128 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
[... same entry repeated at ~100 ms intervals through Mar 12 18:11:20.278247 ...]
Mar 12 18:11:20.280484 master-0 kubenswrapper[4051]: I0312 18:11:20.280414 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:11:20.281830 master-0 kubenswrapper[4051]: I0312 18:11:20.281758 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:11:20.281830 master-0 kubenswrapper[4051]: I0312 18:11:20.281805 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:11:20.281830 master-0 kubenswrapper[4051]: I0312 18:11:20.281814 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:11:20.282319 master-0 kubenswrapper[4051]: I0312 18:11:20.282144 4051 scope.go:117] "RemoveContainer" containerID="f4576abbe7a90b1b20fca15403fa958348685c536308da122d58a74078454e59"
Mar 12 18:11:20.282415 master-0 kubenswrapper[4051]: E0312 18:11:20.282320 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72"
Mar 12 18:11:20.379098 master-0 kubenswrapper[4051]: E0312 18:11:20.378983 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
[... same entry repeated at ~100 ms intervals through Mar 12 18:11:21.185579 ...]
Mar 12 18:11:21.253530 master-0 kubenswrapper[4051]: E0312 18:11:21.253446 4051 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 12 18:11:21.285964 master-0 kubenswrapper[4051]: E0312 18:11:21.285902 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
[... same entry repeated at ~100 ms intervals through Mar 12 18:11:22.293486 ...]
Mar 12 18:11:22.340816 master-0 kubenswrapper[4051]: E0312 18:11:22.340696 4051 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Mar 12 18:11:22.394132 master-0 kubenswrapper[4051]: E0312 18:11:22.393954 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
[... same entry repeated at ~100 ms intervals through Mar 12 18:11:30.550388 ...]
Mar 12 18:11:30.562208 master-0 kubenswrapper[4051]: I0312 18:11:30.562134 4051 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 12 18:11:30.650504 master-0 kubenswrapper[4051]: E0312 18:11:30.650430 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
[... same entry repeated at ~100 ms intervals through Mar 12 18:11:31.254219 ...]
Mar 12 18:11:31.254219 master-0 kubenswrapper[4051]: E0312 18:11:31.254166 4051 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 12 18:11:31.354986 master-0 kubenswrapper[4051]: E0312 18:11:31.354901 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
[... same entry repeated at ~100 ms intervals through Mar 12 18:11:32.263169 ...]
Mar 12 18:11:32.280695 master-0 kubenswrapper[4051]: I0312 18:11:32.280645 4051 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:11:32.281800 master-0 kubenswrapper[4051]: I0312 18:11:32.281753 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:11:32.281935 master-0 kubenswrapper[4051]: I0312 18:11:32.281813 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:11:32.281935 master-0 kubenswrapper[4051]: I0312 18:11:32.281832 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:11:32.282309 master-0 kubenswrapper[4051]: I0312 18:11:32.282258 4051 scope.go:117] "RemoveContainer" containerID="f4576abbe7a90b1b20fca15403fa958348685c536308da122d58a74078454e59"
Mar 12 18:11:32.282599 master-0 kubenswrapper[4051]: E0312 18:11:32.282562 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72"
Mar 12 18:11:32.364100 master-0 kubenswrapper[4051]: E0312 18:11:32.363952 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 18:11:32.464814 master-0 kubenswrapper[4051]: E0312 18:11:32.464724 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 18:11:32.499182 master-0 kubenswrapper[4051]: E0312 18:11:32.499098 4051 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Mar 12 18:11:32.565897 master-0 kubenswrapper[4051]: E0312 18:11:32.565804 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
[... same entry repeated at ~100 ms intervals through Mar 12 18:11:34.080038 ...]
Mar 12 18:11:34.180618
master-0 kubenswrapper[4051]: E0312 18:11:34.180359 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:34.280535 master-0 kubenswrapper[4051]: E0312 18:11:34.280471 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:34.381582 master-0 kubenswrapper[4051]: E0312 18:11:34.381528 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:34.482405 master-0 kubenswrapper[4051]: E0312 18:11:34.482258 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:34.582644 master-0 kubenswrapper[4051]: E0312 18:11:34.582575 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:34.683773 master-0 kubenswrapper[4051]: E0312 18:11:34.683713 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:34.784553 master-0 kubenswrapper[4051]: E0312 18:11:34.784419 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:34.885098 master-0 kubenswrapper[4051]: E0312 18:11:34.885019 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:34.985963 master-0 kubenswrapper[4051]: E0312 18:11:34.985873 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:35.086990 master-0 kubenswrapper[4051]: E0312 18:11:35.086896 4051 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:11:35.162894 master-0 kubenswrapper[4051]: I0312 18:11:35.162803 4051 reflector.go:368] Caches populated for *v1.Node from 
k8s.io/client-go/informers/factory.go:160 Mar 12 18:11:35.210245 master-0 kubenswrapper[4051]: I0312 18:11:35.210152 4051 apiserver.go:52] "Watching apiserver" Mar 12 18:11:35.214400 master-0 kubenswrapper[4051]: I0312 18:11:35.214351 4051 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 12 18:11:35.214671 master-0 kubenswrapper[4051]: I0312 18:11:35.214628 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-g257x","openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx","openshift-network-operator/network-operator-7c649bf6d4-vksss"] Mar 12 18:11:35.215222 master-0 kubenswrapper[4051]: I0312 18:11:35.215187 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" Mar 12 18:11:35.215412 master-0 kubenswrapper[4051]: I0312 18:11:35.215182 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:11:35.216876 master-0 kubenswrapper[4051]: I0312 18:11:35.215182 4051 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-g257x" Mar 12 18:11:35.218303 master-0 kubenswrapper[4051]: I0312 18:11:35.218204 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt" Mar 12 18:11:35.220179 master-0 kubenswrapper[4051]: I0312 18:11:35.218889 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 12 18:11:35.220179 master-0 kubenswrapper[4051]: I0312 18:11:35.219153 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 12 18:11:35.220179 master-0 kubenswrapper[4051]: I0312 18:11:35.219322 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config" Mar 12 18:11:35.220179 master-0 kubenswrapper[4051]: I0312 18:11:35.219627 4051 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret" Mar 12 18:11:35.220179 master-0 kubenswrapper[4051]: I0312 18:11:35.219770 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt" Mar 12 18:11:35.220179 master-0 kubenswrapper[4051]: I0312 18:11:35.219838 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 12 18:11:35.220179 master-0 kubenswrapper[4051]: I0312 18:11:35.219894 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 12 18:11:35.220179 master-0 kubenswrapper[4051]: I0312 18:11:35.220128 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 12 18:11:35.220730 master-0 kubenswrapper[4051]: I0312 18:11:35.220464 4051 desired_state_of_world_populator.go:155] "Finished 
populating initial desired state of world" Mar 12 18:11:35.221231 master-0 kubenswrapper[4051]: I0312 18:11:35.221007 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 12 18:11:35.290035 master-0 kubenswrapper[4051]: I0312 18:11:35.289947 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-sno-bootstrap-files\") pod \"assisted-installer-controller-g257x\" (UID: \"2a4a981c-9454-4e1f-951e-1a62737659cc\") " pod="assisted-installer/assisted-installer-controller-g257x" Mar 12 18:11:35.290537 master-0 kubenswrapper[4051]: I0312 18:11:35.290479 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b6d288e3-8e73-44d2-874d-64c6c98dd991-metrics-tls\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" Mar 12 18:11:35.290698 master-0 kubenswrapper[4051]: I0312 18:11:35.290672 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00755a4e-124c-4a51-b1c5-7c505b3637a8-service-ca\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:11:35.290850 master-0 kubenswrapper[4051]: I0312 18:11:35.290830 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-host-resolv-conf\") pod \"assisted-installer-controller-g257x\" (UID: \"2a4a981c-9454-4e1f-951e-1a62737659cc\") " 
pod="assisted-installer/assisted-installer-controller-g257x" Mar 12 18:11:35.290963 master-0 kubenswrapper[4051]: I0312 18:11:35.290945 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:11:35.291082 master-0 kubenswrapper[4051]: I0312 18:11:35.291060 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-host-ca-bundle\") pod \"assisted-installer-controller-g257x\" (UID: \"2a4a981c-9454-4e1f-951e-1a62737659cc\") " pod="assisted-installer/assisted-installer-controller-g257x" Mar 12 18:11:35.291296 master-0 kubenswrapper[4051]: I0312 18:11:35.291258 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/00755a4e-124c-4a51-b1c5-7c505b3637a8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:11:35.291607 master-0 kubenswrapper[4051]: I0312 18:11:35.291581 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdb9w\" (UniqueName: \"kubernetes.io/projected/b6d288e3-8e73-44d2-874d-64c6c98dd991-kube-api-access-vdb9w\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" Mar 12 18:11:35.291746 master-0 kubenswrapper[4051]: I0312 18:11:35.291728 4051 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00755a4e-124c-4a51-b1c5-7c505b3637a8-kube-api-access\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:11:35.291861 master-0 kubenswrapper[4051]: I0312 18:11:35.291843 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-host-var-run-resolv-conf\") pod \"assisted-installer-controller-g257x\" (UID: \"2a4a981c-9454-4e1f-951e-1a62737659cc\") " pod="assisted-installer/assisted-installer-controller-g257x" Mar 12 18:11:35.291972 master-0 kubenswrapper[4051]: I0312 18:11:35.291954 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz28r\" (UniqueName: \"kubernetes.io/projected/2a4a981c-9454-4e1f-951e-1a62737659cc-kube-api-access-cz28r\") pod \"assisted-installer-controller-g257x\" (UID: \"2a4a981c-9454-4e1f-951e-1a62737659cc\") " pod="assisted-installer/assisted-installer-controller-g257x" Mar 12 18:11:35.292106 master-0 kubenswrapper[4051]: I0312 18:11:35.292085 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/00755a4e-124c-4a51-b1c5-7c505b3637a8-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:11:35.292245 master-0 kubenswrapper[4051]: I0312 18:11:35.292226 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/b6d288e3-8e73-44d2-874d-64c6c98dd991-host-etc-kube\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" Mar 12 18:11:35.392768 master-0 kubenswrapper[4051]: I0312 18:11:35.392568 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/00755a4e-124c-4a51-b1c5-7c505b3637a8-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:11:35.392768 master-0 kubenswrapper[4051]: I0312 18:11:35.392639 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/b6d288e3-8e73-44d2-874d-64c6c98dd991-host-etc-kube\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" Mar 12 18:11:35.392768 master-0 kubenswrapper[4051]: I0312 18:11:35.392657 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-sno-bootstrap-files\") pod \"assisted-installer-controller-g257x\" (UID: \"2a4a981c-9454-4e1f-951e-1a62737659cc\") " pod="assisted-installer/assisted-installer-controller-g257x" Mar 12 18:11:35.392768 master-0 kubenswrapper[4051]: I0312 18:11:35.392678 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b6d288e3-8e73-44d2-874d-64c6c98dd991-metrics-tls\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" Mar 12 18:11:35.392768 master-0 
kubenswrapper[4051]: I0312 18:11:35.392696 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00755a4e-124c-4a51-b1c5-7c505b3637a8-service-ca\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:11:35.392768 master-0 kubenswrapper[4051]: I0312 18:11:35.392714 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-host-resolv-conf\") pod \"assisted-installer-controller-g257x\" (UID: \"2a4a981c-9454-4e1f-951e-1a62737659cc\") " pod="assisted-installer/assisted-installer-controller-g257x" Mar 12 18:11:35.393691 master-0 kubenswrapper[4051]: I0312 18:11:35.392885 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/b6d288e3-8e73-44d2-874d-64c6c98dd991-host-etc-kube\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" Mar 12 18:11:35.393691 master-0 kubenswrapper[4051]: I0312 18:11:35.392952 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:11:35.393691 master-0 kubenswrapper[4051]: I0312 18:11:35.393015 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-host-ca-bundle\") pod \"assisted-installer-controller-g257x\" 
(UID: \"2a4a981c-9454-4e1f-951e-1a62737659cc\") " pod="assisted-installer/assisted-installer-controller-g257x" Mar 12 18:11:35.393691 master-0 kubenswrapper[4051]: I0312 18:11:35.393055 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/00755a4e-124c-4a51-b1c5-7c505b3637a8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:11:35.393691 master-0 kubenswrapper[4051]: E0312 18:11:35.393079 4051 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 18:11:35.393691 master-0 kubenswrapper[4051]: I0312 18:11:35.393124 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-host-ca-bundle\") pod \"assisted-installer-controller-g257x\" (UID: \"2a4a981c-9454-4e1f-951e-1a62737659cc\") " pod="assisted-installer/assisted-installer-controller-g257x" Mar 12 18:11:35.393691 master-0 kubenswrapper[4051]: I0312 18:11:35.393236 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-host-resolv-conf\") pod \"assisted-installer-controller-g257x\" (UID: \"2a4a981c-9454-4e1f-951e-1a62737659cc\") " pod="assisted-installer/assisted-installer-controller-g257x" Mar 12 18:11:35.393691 master-0 kubenswrapper[4051]: E0312 18:11:35.393271 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert podName:00755a4e-124c-4a51-b1c5-7c505b3637a8 nodeName:}" failed. 
No retries permitted until 2026-03-12 18:11:35.893131802 +0000 UTC m=+75.372258033 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert") pod "cluster-version-operator-745944c6b7-cxwmx" (UID: "00755a4e-124c-4a51-b1c5-7c505b3637a8") : secret "cluster-version-operator-serving-cert" not found Mar 12 18:11:35.393691 master-0 kubenswrapper[4051]: I0312 18:11:35.393294 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-sno-bootstrap-files\") pod \"assisted-installer-controller-g257x\" (UID: \"2a4a981c-9454-4e1f-951e-1a62737659cc\") " pod="assisted-installer/assisted-installer-controller-g257x" Mar 12 18:11:35.393691 master-0 kubenswrapper[4051]: I0312 18:11:35.393297 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdb9w\" (UniqueName: \"kubernetes.io/projected/b6d288e3-8e73-44d2-874d-64c6c98dd991-kube-api-access-vdb9w\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" Mar 12 18:11:35.393691 master-0 kubenswrapper[4051]: I0312 18:11:35.393321 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/00755a4e-124c-4a51-b1c5-7c505b3637a8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:11:35.393691 master-0 kubenswrapper[4051]: I0312 18:11:35.393323 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00755a4e-124c-4a51-b1c5-7c505b3637a8-kube-api-access\") 
pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:11:35.393691 master-0 kubenswrapper[4051]: I0312 18:11:35.393365 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-host-var-run-resolv-conf\") pod \"assisted-installer-controller-g257x\" (UID: \"2a4a981c-9454-4e1f-951e-1a62737659cc\") " pod="assisted-installer/assisted-installer-controller-g257x" Mar 12 18:11:35.393691 master-0 kubenswrapper[4051]: I0312 18:11:35.393389 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz28r\" (UniqueName: \"kubernetes.io/projected/2a4a981c-9454-4e1f-951e-1a62737659cc-kube-api-access-cz28r\") pod \"assisted-installer-controller-g257x\" (UID: \"2a4a981c-9454-4e1f-951e-1a62737659cc\") " pod="assisted-installer/assisted-installer-controller-g257x" Mar 12 18:11:35.393691 master-0 kubenswrapper[4051]: I0312 18:11:35.393391 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-host-var-run-resolv-conf\") pod \"assisted-installer-controller-g257x\" (UID: \"2a4a981c-9454-4e1f-951e-1a62737659cc\") " pod="assisted-installer/assisted-installer-controller-g257x" Mar 12 18:11:35.394200 master-0 kubenswrapper[4051]: I0312 18:11:35.393275 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/00755a4e-124c-4a51-b1c5-7c505b3637a8-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:11:35.394593 master-0 kubenswrapper[4051]: I0312 18:11:35.394547 
4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00755a4e-124c-4a51-b1c5-7c505b3637a8-service-ca\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:11:35.395440 master-0 kubenswrapper[4051]: I0312 18:11:35.395381 4051 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 12 18:11:35.401556 master-0 kubenswrapper[4051]: I0312 18:11:35.401313 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b6d288e3-8e73-44d2-874d-64c6c98dd991-metrics-tls\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" Mar 12 18:11:35.411240 master-0 kubenswrapper[4051]: I0312 18:11:35.411177 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdb9w\" (UniqueName: \"kubernetes.io/projected/b6d288e3-8e73-44d2-874d-64c6c98dd991-kube-api-access-vdb9w\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" Mar 12 18:11:35.411464 master-0 kubenswrapper[4051]: I0312 18:11:35.411307 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz28r\" (UniqueName: \"kubernetes.io/projected/2a4a981c-9454-4e1f-951e-1a62737659cc-kube-api-access-cz28r\") pod \"assisted-installer-controller-g257x\" (UID: \"2a4a981c-9454-4e1f-951e-1a62737659cc\") " pod="assisted-installer/assisted-installer-controller-g257x" Mar 12 18:11:35.414556 master-0 kubenswrapper[4051]: I0312 18:11:35.413987 4051 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00755a4e-124c-4a51-b1c5-7c505b3637a8-kube-api-access\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:11:35.542254 master-0 kubenswrapper[4051]: I0312 18:11:35.542164 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" Mar 12 18:11:35.556347 master-0 kubenswrapper[4051]: W0312 18:11:35.555667 4051 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6d288e3_8e73_44d2_874d_64c6c98dd991.slice/crio-6dd6381115d9cbf9ba7c1a108737553c31041609750cc0e631e36ed92f66311d WatchSource:0}: Error finding container 6dd6381115d9cbf9ba7c1a108737553c31041609750cc0e631e36ed92f66311d: Status 404 returned error can't find the container with id 6dd6381115d9cbf9ba7c1a108737553c31041609750cc0e631e36ed92f66311d Mar 12 18:11:35.575007 master-0 kubenswrapper[4051]: I0312 18:11:35.574952 4051 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-g257x" Mar 12 18:11:35.585937 master-0 kubenswrapper[4051]: W0312 18:11:35.585884 4051 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a4a981c_9454_4e1f_951e_1a62737659cc.slice/crio-1ac2e44511feb89f0eb8641549dc02db57c6e394fd3b40bd40da8f07b13abdb2 WatchSource:0}: Error finding container 1ac2e44511feb89f0eb8641549dc02db57c6e394fd3b40bd40da8f07b13abdb2: Status 404 returned error can't find the container with id 1ac2e44511feb89f0eb8641549dc02db57c6e394fd3b40bd40da8f07b13abdb2 Mar 12 18:11:35.896726 master-0 kubenswrapper[4051]: I0312 18:11:35.896670 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:11:35.897078 master-0 kubenswrapper[4051]: E0312 18:11:35.897058 4051 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 18:11:35.897225 master-0 kubenswrapper[4051]: E0312 18:11:35.897209 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert podName:00755a4e-124c-4a51-b1c5-7c505b3637a8 nodeName:}" failed. No retries permitted until 2026-03-12 18:11:36.897190698 +0000 UTC m=+76.376316939 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert") pod "cluster-version-operator-745944c6b7-cxwmx" (UID: "00755a4e-124c-4a51-b1c5-7c505b3637a8") : secret "cluster-version-operator-serving-cert" not found Mar 12 18:11:36.538911 master-0 kubenswrapper[4051]: I0312 18:11:36.538833 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" event={"ID":"b6d288e3-8e73-44d2-874d-64c6c98dd991","Type":"ContainerStarted","Data":"6dd6381115d9cbf9ba7c1a108737553c31041609750cc0e631e36ed92f66311d"} Mar 12 18:11:36.539895 master-0 kubenswrapper[4051]: I0312 18:11:36.539851 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-g257x" event={"ID":"2a4a981c-9454-4e1f-951e-1a62737659cc","Type":"ContainerStarted","Data":"1ac2e44511feb89f0eb8641549dc02db57c6e394fd3b40bd40da8f07b13abdb2"} Mar 12 18:11:36.904089 master-0 kubenswrapper[4051]: I0312 18:11:36.904024 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:11:36.904292 master-0 kubenswrapper[4051]: E0312 18:11:36.904174 4051 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 18:11:36.904292 master-0 kubenswrapper[4051]: E0312 18:11:36.904243 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert podName:00755a4e-124c-4a51-b1c5-7c505b3637a8 nodeName:}" failed. 
No retries permitted until 2026-03-12 18:11:38.904224433 +0000 UTC m=+78.383350674 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert") pod "cluster-version-operator-745944c6b7-cxwmx" (UID: "00755a4e-124c-4a51-b1c5-7c505b3637a8") : secret "cluster-version-operator-serving-cert" not found Mar 12 18:11:38.916985 master-0 kubenswrapper[4051]: I0312 18:11:38.916875 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:11:38.918066 master-0 kubenswrapper[4051]: E0312 18:11:38.917090 4051 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 18:11:38.918066 master-0 kubenswrapper[4051]: E0312 18:11:38.917178 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert podName:00755a4e-124c-4a51-b1c5-7c505b3637a8 nodeName:}" failed. No retries permitted until 2026-03-12 18:11:42.917159132 +0000 UTC m=+82.396285363 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert") pod "cluster-version-operator-745944c6b7-cxwmx" (UID: "00755a4e-124c-4a51-b1c5-7c505b3637a8") : secret "cluster-version-operator-serving-cert" not found Mar 12 18:11:39.546676 master-0 kubenswrapper[4051]: I0312 18:11:39.546630 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" event={"ID":"b6d288e3-8e73-44d2-874d-64c6c98dd991","Type":"ContainerStarted","Data":"419df3ddca2a5c92855e29992407a4f8d75d516e7e813a5cad7b23a3a032ee64"} Mar 12 18:11:39.562614 master-0 kubenswrapper[4051]: I0312 18:11:39.562535 4051 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" podStartSLOduration=34.406216821 podStartE2EDuration="37.562501483s" podCreationTimestamp="2026-03-12 18:11:02 +0000 UTC" firstStartedPulling="2026-03-12 18:11:35.558651494 +0000 UTC m=+75.037777725" lastFinishedPulling="2026-03-12 18:11:38.714936156 +0000 UTC m=+78.194062387" observedRunningTime="2026-03-12 18:11:39.561343283 +0000 UTC m=+79.040469534" watchObservedRunningTime="2026-03-12 18:11:39.562501483 +0000 UTC m=+79.041627724" Mar 12 18:11:41.294860 master-0 kubenswrapper[4051]: I0312 18:11:41.294404 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 12 18:11:41.294860 master-0 kubenswrapper[4051]: W0312 18:11:41.294558 4051 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set 
securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 12 18:11:41.458606 master-0 kubenswrapper[4051]: I0312 18:11:41.458451 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-62lqd"] Mar 12 18:11:41.458799 master-0 kubenswrapper[4051]: I0312 18:11:41.458757 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-62lqd" Mar 12 18:11:41.478193 master-0 kubenswrapper[4051]: I0312 18:11:41.478130 4051 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=0.478109761 podStartE2EDuration="478.109761ms" podCreationTimestamp="2026-03-12 18:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:11:41.468823218 +0000 UTC m=+80.947949469" watchObservedRunningTime="2026-03-12 18:11:41.478109761 +0000 UTC m=+80.957235982" Mar 12 18:11:41.541364 master-0 kubenswrapper[4051]: I0312 18:11:41.541260 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n8mv\" (UniqueName: \"kubernetes.io/projected/43fdaf13-ffc1-4787-8dd2-90d0685b3124-kube-api-access-4n8mv\") pod \"mtu-prober-62lqd\" (UID: \"43fdaf13-ffc1-4787-8dd2-90d0685b3124\") " pod="openshift-network-operator/mtu-prober-62lqd" Mar 12 18:11:41.553360 master-0 kubenswrapper[4051]: I0312 18:11:41.553312 4051 generic.go:334] "Generic (PLEG): container finished" podID="2a4a981c-9454-4e1f-951e-1a62737659cc" 
containerID="95a463de33fcdba00f135dbdd2f42b2c5b30584ee4c54c59c7552f930a4442bf" exitCode=0 Mar 12 18:11:41.553501 master-0 kubenswrapper[4051]: I0312 18:11:41.553396 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-g257x" event={"ID":"2a4a981c-9454-4e1f-951e-1a62737659cc","Type":"ContainerDied","Data":"95a463de33fcdba00f135dbdd2f42b2c5b30584ee4c54c59c7552f930a4442bf"} Mar 12 18:11:41.642615 master-0 kubenswrapper[4051]: I0312 18:11:41.641783 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n8mv\" (UniqueName: \"kubernetes.io/projected/43fdaf13-ffc1-4787-8dd2-90d0685b3124-kube-api-access-4n8mv\") pod \"mtu-prober-62lqd\" (UID: \"43fdaf13-ffc1-4787-8dd2-90d0685b3124\") " pod="openshift-network-operator/mtu-prober-62lqd" Mar 12 18:11:41.660209 master-0 kubenswrapper[4051]: I0312 18:11:41.660145 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n8mv\" (UniqueName: \"kubernetes.io/projected/43fdaf13-ffc1-4787-8dd2-90d0685b3124-kube-api-access-4n8mv\") pod \"mtu-prober-62lqd\" (UID: \"43fdaf13-ffc1-4787-8dd2-90d0685b3124\") " pod="openshift-network-operator/mtu-prober-62lqd" Mar 12 18:11:41.770068 master-0 kubenswrapper[4051]: I0312 18:11:41.769886 4051 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-62lqd" Mar 12 18:11:41.784157 master-0 kubenswrapper[4051]: W0312 18:11:41.784104 4051 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43fdaf13_ffc1_4787_8dd2_90d0685b3124.slice/crio-0eba8d5c0b0f8386f8e7b0fcfb1805cff55a3c43911a4889778f4fec45d35584 WatchSource:0}: Error finding container 0eba8d5c0b0f8386f8e7b0fcfb1805cff55a3c43911a4889778f4fec45d35584: Status 404 returned error can't find the container with id 0eba8d5c0b0f8386f8e7b0fcfb1805cff55a3c43911a4889778f4fec45d35584 Mar 12 18:11:42.557705 master-0 kubenswrapper[4051]: I0312 18:11:42.557619 4051 generic.go:334] "Generic (PLEG): container finished" podID="43fdaf13-ffc1-4787-8dd2-90d0685b3124" containerID="87d66e1cc29893f39e111a5a2a21953d603c0527dd13bddf2486860762147978" exitCode=0 Mar 12 18:11:42.558634 master-0 kubenswrapper[4051]: I0312 18:11:42.557782 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-62lqd" event={"ID":"43fdaf13-ffc1-4787-8dd2-90d0685b3124","Type":"ContainerDied","Data":"87d66e1cc29893f39e111a5a2a21953d603c0527dd13bddf2486860762147978"} Mar 12 18:11:42.558634 master-0 kubenswrapper[4051]: I0312 18:11:42.557894 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-62lqd" event={"ID":"43fdaf13-ffc1-4787-8dd2-90d0685b3124","Type":"ContainerStarted","Data":"0eba8d5c0b0f8386f8e7b0fcfb1805cff55a3c43911a4889778f4fec45d35584"} Mar 12 18:11:42.571687 master-0 kubenswrapper[4051]: I0312 18:11:42.571603 4051 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-g257x" Mar 12 18:11:42.750601 master-0 kubenswrapper[4051]: I0312 18:11:42.750461 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-sno-bootstrap-files\") pod \"2a4a981c-9454-4e1f-951e-1a62737659cc\" (UID: \"2a4a981c-9454-4e1f-951e-1a62737659cc\") " Mar 12 18:11:42.750805 master-0 kubenswrapper[4051]: I0312 18:11:42.750608 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "2a4a981c-9454-4e1f-951e-1a62737659cc" (UID: "2a4a981c-9454-4e1f-951e-1a62737659cc"). InnerVolumeSpecName "sno-bootstrap-files". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:11:42.750805 master-0 kubenswrapper[4051]: I0312 18:11:42.750620 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-host-resolv-conf\") pod \"2a4a981c-9454-4e1f-951e-1a62737659cc\" (UID: \"2a4a981c-9454-4e1f-951e-1a62737659cc\") " Mar 12 18:11:42.750805 master-0 kubenswrapper[4051]: I0312 18:11:42.750700 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-host-var-run-resolv-conf\") pod \"2a4a981c-9454-4e1f-951e-1a62737659cc\" (UID: \"2a4a981c-9454-4e1f-951e-1a62737659cc\") " Mar 12 18:11:42.750805 master-0 kubenswrapper[4051]: I0312 18:11:42.750725 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod 
"2a4a981c-9454-4e1f-951e-1a62737659cc" (UID: "2a4a981c-9454-4e1f-951e-1a62737659cc"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:11:42.750805 master-0 kubenswrapper[4051]: I0312 18:11:42.750754 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-host-ca-bundle\") pod \"2a4a981c-9454-4e1f-951e-1a62737659cc\" (UID: \"2a4a981c-9454-4e1f-951e-1a62737659cc\") " Mar 12 18:11:42.750805 master-0 kubenswrapper[4051]: I0312 18:11:42.750759 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "2a4a981c-9454-4e1f-951e-1a62737659cc" (UID: "2a4a981c-9454-4e1f-951e-1a62737659cc"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:11:42.750805 master-0 kubenswrapper[4051]: I0312 18:11:42.750779 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "2a4a981c-9454-4e1f-951e-1a62737659cc" (UID: "2a4a981c-9454-4e1f-951e-1a62737659cc"). InnerVolumeSpecName "host-ca-bundle". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:11:42.751053 master-0 kubenswrapper[4051]: I0312 18:11:42.750814 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz28r\" (UniqueName: \"kubernetes.io/projected/2a4a981c-9454-4e1f-951e-1a62737659cc-kube-api-access-cz28r\") pod \"2a4a981c-9454-4e1f-951e-1a62737659cc\" (UID: \"2a4a981c-9454-4e1f-951e-1a62737659cc\") " Mar 12 18:11:42.751053 master-0 kubenswrapper[4051]: I0312 18:11:42.750935 4051 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 12 18:11:42.751053 master-0 kubenswrapper[4051]: I0312 18:11:42.750970 4051 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-host-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:11:42.751053 master-0 kubenswrapper[4051]: I0312 18:11:42.750995 4051 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\"" Mar 12 18:11:42.751053 master-0 kubenswrapper[4051]: I0312 18:11:42.751018 4051 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2a4a981c-9454-4e1f-951e-1a62737659cc-host-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 12 18:11:42.755557 master-0 kubenswrapper[4051]: I0312 18:11:42.753661 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a4a981c-9454-4e1f-951e-1a62737659cc-kube-api-access-cz28r" (OuterVolumeSpecName: "kube-api-access-cz28r") pod "2a4a981c-9454-4e1f-951e-1a62737659cc" (UID: "2a4a981c-9454-4e1f-951e-1a62737659cc"). 
InnerVolumeSpecName "kube-api-access-cz28r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:11:42.851838 master-0 kubenswrapper[4051]: I0312 18:11:42.851742 4051 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz28r\" (UniqueName: \"kubernetes.io/projected/2a4a981c-9454-4e1f-951e-1a62737659cc-kube-api-access-cz28r\") on node \"master-0\" DevicePath \"\"" Mar 12 18:11:42.952865 master-0 kubenswrapper[4051]: I0312 18:11:42.952764 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:11:42.953058 master-0 kubenswrapper[4051]: E0312 18:11:42.952945 4051 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 18:11:42.953058 master-0 kubenswrapper[4051]: E0312 18:11:42.953028 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert podName:00755a4e-124c-4a51-b1c5-7c505b3637a8 nodeName:}" failed. No retries permitted until 2026-03-12 18:11:50.953005177 +0000 UTC m=+90.432131438 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert") pod "cluster-version-operator-745944c6b7-cxwmx" (UID: "00755a4e-124c-4a51-b1c5-7c505b3637a8") : secret "cluster-version-operator-serving-cert" not found Mar 12 18:11:43.562896 master-0 kubenswrapper[4051]: I0312 18:11:43.562772 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-g257x" event={"ID":"2a4a981c-9454-4e1f-951e-1a62737659cc","Type":"ContainerDied","Data":"1ac2e44511feb89f0eb8641549dc02db57c6e394fd3b40bd40da8f07b13abdb2"} Mar 12 18:11:43.562896 master-0 kubenswrapper[4051]: I0312 18:11:43.562796 4051 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-g257x" Mar 12 18:11:43.562896 master-0 kubenswrapper[4051]: I0312 18:11:43.562806 4051 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ac2e44511feb89f0eb8641549dc02db57c6e394fd3b40bd40da8f07b13abdb2" Mar 12 18:11:43.575430 master-0 kubenswrapper[4051]: I0312 18:11:43.575387 4051 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-62lqd" Mar 12 18:11:43.756335 master-0 kubenswrapper[4051]: I0312 18:11:43.756284 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n8mv\" (UniqueName: \"kubernetes.io/projected/43fdaf13-ffc1-4787-8dd2-90d0685b3124-kube-api-access-4n8mv\") pod \"43fdaf13-ffc1-4787-8dd2-90d0685b3124\" (UID: \"43fdaf13-ffc1-4787-8dd2-90d0685b3124\") " Mar 12 18:11:43.759636 master-0 kubenswrapper[4051]: I0312 18:11:43.759505 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43fdaf13-ffc1-4787-8dd2-90d0685b3124-kube-api-access-4n8mv" (OuterVolumeSpecName: "kube-api-access-4n8mv") pod "43fdaf13-ffc1-4787-8dd2-90d0685b3124" (UID: "43fdaf13-ffc1-4787-8dd2-90d0685b3124"). InnerVolumeSpecName "kube-api-access-4n8mv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:11:43.857257 master-0 kubenswrapper[4051]: I0312 18:11:43.857108 4051 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n8mv\" (UniqueName: \"kubernetes.io/projected/43fdaf13-ffc1-4787-8dd2-90d0685b3124-kube-api-access-4n8mv\") on node \"master-0\" DevicePath \"\"" Mar 12 18:11:44.297957 master-0 kubenswrapper[4051]: I0312 18:11:44.297874 4051 scope.go:117] "RemoveContainer" containerID="f4576abbe7a90b1b20fca15403fa958348685c536308da122d58a74078454e59" Mar 12 18:11:44.298420 master-0 kubenswrapper[4051]: E0312 18:11:44.298276 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 12 18:11:44.298420 master-0 kubenswrapper[4051]: I0312 18:11:44.298313 4051 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Mar 12 18:11:44.568634 master-0 kubenswrapper[4051]: I0312 18:11:44.568493 4051 scope.go:117] "RemoveContainer" containerID="f4576abbe7a90b1b20fca15403fa958348685c536308da122d58a74078454e59" Mar 12 18:11:44.569470 master-0 kubenswrapper[4051]: E0312 18:11:44.569440 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 12 18:11:44.569628 master-0 kubenswrapper[4051]: I0312 18:11:44.568799 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-62lqd" event={"ID":"43fdaf13-ffc1-4787-8dd2-90d0685b3124","Type":"ContainerDied","Data":"0eba8d5c0b0f8386f8e7b0fcfb1805cff55a3c43911a4889778f4fec45d35584"} Mar 12 18:11:44.569754 master-0 kubenswrapper[4051]: I0312 18:11:44.569734 4051 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0eba8d5c0b0f8386f8e7b0fcfb1805cff55a3c43911a4889778f4fec45d35584" Mar 12 18:11:44.569872 master-0 kubenswrapper[4051]: I0312 18:11:44.568772 4051 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-62lqd" Mar 12 18:11:49.166575 master-0 kubenswrapper[4051]: I0312 18:11:49.166473 4051 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-62lqd"] Mar 12 18:11:50.256846 master-0 kubenswrapper[4051]: I0312 18:11:50.256759 4051 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 12 18:11:50.864671 master-0 kubenswrapper[4051]: I0312 18:11:50.864376 4051 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-62lqd"] Mar 12 18:11:51.008048 master-0 kubenswrapper[4051]: I0312 18:11:51.007991 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:11:51.008211 master-0 kubenswrapper[4051]: E0312 18:11:51.008174 4051 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 18:11:51.008305 master-0 kubenswrapper[4051]: E0312 18:11:51.008273 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert podName:00755a4e-124c-4a51-b1c5-7c505b3637a8 nodeName:}" failed. No retries permitted until 2026-03-12 18:12:07.008245287 +0000 UTC m=+106.487371528 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert") pod "cluster-version-operator-745944c6b7-cxwmx" (UID: "00755a4e-124c-4a51-b1c5-7c505b3637a8") : secret "cluster-version-operator-serving-cert" not found Mar 12 18:11:51.283999 master-0 kubenswrapper[4051]: I0312 18:11:51.283907 4051 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43fdaf13-ffc1-4787-8dd2-90d0685b3124" path="/var/lib/kubelet/pods/43fdaf13-ffc1-4787-8dd2-90d0685b3124/volumes" Mar 12 18:11:55.281693 master-0 kubenswrapper[4051]: I0312 18:11:55.281643 4051 scope.go:117] "RemoveContainer" containerID="f4576abbe7a90b1b20fca15403fa958348685c536308da122d58a74078454e59" Mar 12 18:11:55.592773 master-0 kubenswrapper[4051]: I0312 18:11:55.592680 4051 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/3.log" Mar 12 18:11:55.593208 master-0 kubenswrapper[4051]: I0312 18:11:55.592998 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"93aef700d51857dffd379b4fa5c63e9358523baf23deed5a0f436de9a4c7c7b1"} Mar 12 18:11:56.634844 master-0 kubenswrapper[4051]: I0312 18:11:56.634758 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-656l8"] Mar 12 18:11:56.635919 master-0 kubenswrapper[4051]: E0312 18:11:56.634952 4051 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43fdaf13-ffc1-4787-8dd2-90d0685b3124" containerName="prober" Mar 12 18:11:56.635919 master-0 kubenswrapper[4051]: I0312 18:11:56.634978 4051 state_mem.go:107] "Deleted CPUSet assignment" podUID="43fdaf13-ffc1-4787-8dd2-90d0685b3124" containerName="prober" Mar 12 18:11:56.635919 master-0 kubenswrapper[4051]: E0312 
18:11:56.634995 4051 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4a981c-9454-4e1f-951e-1a62737659cc" containerName="assisted-installer-controller" Mar 12 18:11:56.635919 master-0 kubenswrapper[4051]: I0312 18:11:56.635049 4051 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4a981c-9454-4e1f-951e-1a62737659cc" containerName="assisted-installer-controller" Mar 12 18:11:56.635919 master-0 kubenswrapper[4051]: I0312 18:11:56.635144 4051 memory_manager.go:354] "RemoveStaleState removing state" podUID="43fdaf13-ffc1-4787-8dd2-90d0685b3124" containerName="prober" Mar 12 18:11:56.635919 master-0 kubenswrapper[4051]: I0312 18:11:56.635160 4051 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a4a981c-9454-4e1f-951e-1a62737659cc" containerName="assisted-installer-controller" Mar 12 18:11:56.635919 master-0 kubenswrapper[4051]: I0312 18:11:56.635571 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-656l8" Mar 12 18:11:56.641006 master-0 kubenswrapper[4051]: I0312 18:11:56.640955 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 12 18:11:56.641006 master-0 kubenswrapper[4051]: I0312 18:11:56.640986 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 12 18:11:56.641627 master-0 kubenswrapper[4051]: I0312 18:11:56.641580 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 12 18:11:56.642164 master-0 kubenswrapper[4051]: I0312 18:11:56.642117 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 12 18:11:56.752645 master-0 kubenswrapper[4051]: I0312 18:11:56.752491 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-cni-bin\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.752645 master-0 kubenswrapper[4051]: I0312 18:11:56.752648 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-etc-kubernetes\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.753011 master-0 kubenswrapper[4051]: I0312 18:11:56.752679 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-k8s-cni-cncf-io\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.753011 master-0 kubenswrapper[4051]: I0312 18:11:56.752707 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-system-cni-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.753011 master-0 kubenswrapper[4051]: I0312 18:11:56.752733 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-cni-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.753011 master-0 kubenswrapper[4051]: I0312 18:11:56.752756 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-cnibin\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.753011 master-0 kubenswrapper[4051]: I0312 18:11:56.752779 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-cni-multus\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.753011 master-0 kubenswrapper[4051]: I0312 18:11:56.752852 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-os-release\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.753011 master-0 kubenswrapper[4051]: I0312 18:11:56.752901 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-netns\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.753011 master-0 kubenswrapper[4051]: I0312 18:11:56.752938 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-kubelet\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.753011 master-0 kubenswrapper[4051]: I0312 18:11:56.752966 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-conf-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.753011 master-0 kubenswrapper[4051]: I0312 18:11:56.752985 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-daemon-config\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.753011 master-0 kubenswrapper[4051]: I0312 18:11:56.753033 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-socket-dir-parent\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.753682 master-0 kubenswrapper[4051]: I0312 18:11:56.753061 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pn9h\" (UniqueName: \"kubernetes.io/projected/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-kube-api-access-2pn9h\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.753682 master-0 kubenswrapper[4051]: I0312 18:11:56.753083 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-multus-certs\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.753682 master-0 kubenswrapper[4051]: I0312 18:11:56.753114 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-hostroot\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.753682 master-0 kubenswrapper[4051]: I0312 18:11:56.753136 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-cni-binary-copy\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.854636 master-0 kubenswrapper[4051]: I0312 18:11:56.854503 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-system-cni-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.854636 master-0 kubenswrapper[4051]: I0312 18:11:56.854632 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-cni-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.855017 master-0 kubenswrapper[4051]: I0312 18:11:56.854668 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-cnibin\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.855017 master-0 kubenswrapper[4051]: I0312 18:11:56.854710 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-cni-multus\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.855017 master-0 kubenswrapper[4051]: I0312 18:11:56.854880 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-cni-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.855017 master-0 kubenswrapper[4051]: I0312 18:11:56.854875 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-os-release\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.855017 master-0 kubenswrapper[4051]: I0312 18:11:56.854915 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-cnibin\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.855017 master-0 kubenswrapper[4051]: I0312 18:11:56.854988 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-socket-dir-parent\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.855017 master-0 kubenswrapper[4051]: I0312 18:11:56.854931 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-socket-dir-parent\") pod \"multus-656l8\" (UID: 
\"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.855412 master-0 kubenswrapper[4051]: I0312 18:11:56.854984 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-os-release\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.855412 master-0 kubenswrapper[4051]: I0312 18:11:56.855060 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-netns\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.855412 master-0 kubenswrapper[4051]: I0312 18:11:56.855096 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-cni-multus\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.855412 master-0 kubenswrapper[4051]: I0312 18:11:56.855115 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-kubelet\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.855412 master-0 kubenswrapper[4051]: I0312 18:11:56.855164 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-system-cni-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.855412 master-0 
kubenswrapper[4051]: I0312 18:11:56.855207 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-conf-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.855412 master-0 kubenswrapper[4051]: I0312 18:11:56.855225 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-netns\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.855412 master-0 kubenswrapper[4051]: I0312 18:11:56.855303 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-conf-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.855412 master-0 kubenswrapper[4051]: I0312 18:11:56.855241 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-daemon-config\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.855412 master-0 kubenswrapper[4051]: I0312 18:11:56.855341 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-kubelet\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.855412 master-0 kubenswrapper[4051]: I0312 18:11:56.855382 4051 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-2pn9h\" (UniqueName: \"kubernetes.io/projected/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-kube-api-access-2pn9h\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.855412 master-0 kubenswrapper[4051]: I0312 18:11:56.855421 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-hostroot\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.856054 master-0 kubenswrapper[4051]: I0312 18:11:56.855454 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-multus-certs\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.856054 master-0 kubenswrapper[4051]: I0312 18:11:56.855489 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-cni-binary-copy\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.856054 master-0 kubenswrapper[4051]: I0312 18:11:56.855570 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-cni-bin\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.856054 master-0 kubenswrapper[4051]: I0312 18:11:56.855579 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-hostroot\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.856054 master-0 kubenswrapper[4051]: I0312 18:11:56.855640 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-cni-bin\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.856054 master-0 kubenswrapper[4051]: I0312 18:11:56.855896 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-k8s-cni-cncf-io\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.856245 master-0 kubenswrapper[4051]: I0312 18:11:56.856058 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-k8s-cni-cncf-io\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.856245 master-0 kubenswrapper[4051]: I0312 18:11:56.856132 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-multus-certs\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.856245 master-0 kubenswrapper[4051]: I0312 18:11:56.856191 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-etc-kubernetes\") 
pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.856328 master-0 kubenswrapper[4051]: I0312 18:11:56.856270 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-etc-kubernetes\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.856431 master-0 kubenswrapper[4051]: I0312 18:11:56.856389 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-daemon-config\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:56.856893 master-0 kubenswrapper[4051]: I0312 18:11:56.856834 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-cni-binary-copy\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:57.071479 master-0 kubenswrapper[4051]: I0312 18:11:57.071381 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pn9h\" (UniqueName: \"kubernetes.io/projected/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-kube-api-access-2pn9h\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:11:57.259485 master-0 kubenswrapper[4051]: I0312 18:11:57.259413 4051 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-656l8" Mar 12 18:11:57.330667 master-0 kubenswrapper[4051]: W0312 18:11:57.330478 4051 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38a4bf73_479e_4bbf_9aa3_639fc288c8bc.slice/crio-c14a32fc9f0111ac97c8d0756c820cfe5f40ed691d6de42ac60400f58318b138 WatchSource:0}: Error finding container c14a32fc9f0111ac97c8d0756c820cfe5f40ed691d6de42ac60400f58318b138: Status 404 returned error can't find the container with id c14a32fc9f0111ac97c8d0756c820cfe5f40ed691d6de42ac60400f58318b138 Mar 12 18:11:57.354142 master-0 kubenswrapper[4051]: I0312 18:11:57.354045 4051 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=13.354022057 podStartE2EDuration="13.354022057s" podCreationTimestamp="2026-03-12 18:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:11:57.35372678 +0000 UTC m=+96.832853041" watchObservedRunningTime="2026-03-12 18:11:57.354022057 +0000 UTC m=+96.833148298" Mar 12 18:11:57.359269 master-0 kubenswrapper[4051]: I0312 18:11:57.359221 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-lv8hk"] Mar 12 18:11:57.364873 master-0 kubenswrapper[4051]: I0312 18:11:57.360096 4051 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.365324 master-0 kubenswrapper[4051]: I0312 18:11:57.365283 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 12 18:11:57.366630 master-0 kubenswrapper[4051]: I0312 18:11:57.366091 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 12 18:11:57.462270 master-0 kubenswrapper[4051]: I0312 18:11:57.462160 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.462502 master-0 kubenswrapper[4051]: I0312 18:11:57.462278 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.462502 master-0 kubenswrapper[4051]: I0312 18:11:57.462370 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-cni-binary-copy\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.462502 master-0 kubenswrapper[4051]: I0312 18:11:57.462429 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-whereabouts-configmap\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.462502 master-0 kubenswrapper[4051]: I0312 18:11:57.462486 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-cnibin\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.462800 master-0 kubenswrapper[4051]: I0312 18:11:57.462575 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxsgv\" (UniqueName: \"kubernetes.io/projected/455f0aad-add2-49d0-995c-f92467bce2d6-kube-api-access-pxsgv\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.462800 master-0 kubenswrapper[4051]: I0312 18:11:57.462640 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-system-cni-dir\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.462800 master-0 kubenswrapper[4051]: I0312 18:11:57.462693 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-os-release\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " 
pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.566879 master-0 kubenswrapper[4051]: I0312 18:11:57.564941 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-cni-binary-copy\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.566879 master-0 kubenswrapper[4051]: I0312 18:11:57.566237 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-cni-binary-copy\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.566879 master-0 kubenswrapper[4051]: I0312 18:11:57.566331 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-whereabouts-configmap\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.566879 master-0 kubenswrapper[4051]: I0312 18:11:57.566401 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-cnibin\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.566879 master-0 kubenswrapper[4051]: I0312 18:11:57.566423 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxsgv\" (UniqueName: 
\"kubernetes.io/projected/455f0aad-add2-49d0-995c-f92467bce2d6-kube-api-access-pxsgv\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.566879 master-0 kubenswrapper[4051]: I0312 18:11:57.566477 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-system-cni-dir\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.566879 master-0 kubenswrapper[4051]: I0312 18:11:57.566497 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-os-release\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.566879 master-0 kubenswrapper[4051]: I0312 18:11:57.566572 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.566879 master-0 kubenswrapper[4051]: I0312 18:11:57.566599 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.568751 master-0 kubenswrapper[4051]: I0312 
18:11:57.566986 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-os-release\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.568751 master-0 kubenswrapper[4051]: I0312 18:11:57.567290 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-system-cni-dir\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.568751 master-0 kubenswrapper[4051]: I0312 18:11:57.567357 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-cnibin\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.568751 master-0 kubenswrapper[4051]: I0312 18:11:57.567546 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.568751 master-0 kubenswrapper[4051]: I0312 18:11:57.567842 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " 
pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.568751 master-0 kubenswrapper[4051]: I0312 18:11:57.568630 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-whereabouts-configmap\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.595617 master-0 kubenswrapper[4051]: I0312 18:11:57.594423 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxsgv\" (UniqueName: \"kubernetes.io/projected/455f0aad-add2-49d0-995c-f92467bce2d6-kube-api-access-pxsgv\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.598763 master-0 kubenswrapper[4051]: I0312 18:11:57.598721 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-656l8" event={"ID":"38a4bf73-479e-4bbf-9aa3-639fc288c8bc","Type":"ContainerStarted","Data":"c14a32fc9f0111ac97c8d0756c820cfe5f40ed691d6de42ac60400f58318b138"} Mar 12 18:11:57.604141 master-0 kubenswrapper[4051]: I0312 18:11:57.604090 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-z4sc9"] Mar 12 18:11:57.604434 master-0 kubenswrapper[4051]: I0312 18:11:57.604407 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:11:57.604492 master-0 kubenswrapper[4051]: E0312 18:11:57.604470 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f" Mar 12 18:11:57.676436 master-0 kubenswrapper[4051]: I0312 18:11:57.676356 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:11:57.685987 master-0 kubenswrapper[4051]: W0312 18:11:57.685911 4051 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod455f0aad_add2_49d0_995c_f92467bce2d6.slice/crio-e7b7f3534352e488adef6510c4ad914236d57c0ac52f6e0d4e107e52563cb840 WatchSource:0}: Error finding container e7b7f3534352e488adef6510c4ad914236d57c0ac52f6e0d4e107e52563cb840: Status 404 returned error can't find the container with id e7b7f3534352e488adef6510c4ad914236d57c0ac52f6e0d4e107e52563cb840 Mar 12 18:11:57.767885 master-0 kubenswrapper[4051]: I0312 18:11:57.767802 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:11:57.767885 master-0 kubenswrapper[4051]: I0312 18:11:57.767858 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jgbv\" (UniqueName: \"kubernetes.io/projected/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-kube-api-access-9jgbv\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:11:57.868833 master-0 kubenswrapper[4051]: I0312 18:11:57.868697 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs\") pod 
\"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:11:57.868833 master-0 kubenswrapper[4051]: I0312 18:11:57.868750 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jgbv\" (UniqueName: \"kubernetes.io/projected/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-kube-api-access-9jgbv\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:11:57.869045 master-0 kubenswrapper[4051]: E0312 18:11:57.868908 4051 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 18:11:57.869045 master-0 kubenswrapper[4051]: E0312 18:11:57.868975 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs podName:5b48f8fd-2efe-44e3-a6d7-c71358b83a2f nodeName:}" failed. No retries permitted until 2026-03-12 18:11:58.368956819 +0000 UTC m=+97.848083060 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs") pod "network-metrics-daemon-z4sc9" (UID: "5b48f8fd-2efe-44e3-a6d7-c71358b83a2f") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 18:11:57.887207 master-0 kubenswrapper[4051]: I0312 18:11:57.887142 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jgbv\" (UniqueName: \"kubernetes.io/projected/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-kube-api-access-9jgbv\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:11:58.372199 master-0 kubenswrapper[4051]: I0312 18:11:58.372154 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:11:58.372381 master-0 kubenswrapper[4051]: E0312 18:11:58.372310 4051 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 18:11:58.372465 master-0 kubenswrapper[4051]: E0312 18:11:58.372444 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs podName:5b48f8fd-2efe-44e3-a6d7-c71358b83a2f nodeName:}" failed. No retries permitted until 2026-03-12 18:11:59.372424191 +0000 UTC m=+98.851550432 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs") pod "network-metrics-daemon-z4sc9" (UID: "5b48f8fd-2efe-44e3-a6d7-c71358b83a2f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 18:11:58.602009 master-0 kubenswrapper[4051]: I0312 18:11:58.601954 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lv8hk" event={"ID":"455f0aad-add2-49d0-995c-f92467bce2d6","Type":"ContainerStarted","Data":"e7b7f3534352e488adef6510c4ad914236d57c0ac52f6e0d4e107e52563cb840"}
Mar 12 18:11:59.280965 master-0 kubenswrapper[4051]: I0312 18:11:59.280917 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:11:59.281414 master-0 kubenswrapper[4051]: E0312 18:11:59.281015 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f"
Mar 12 18:11:59.388256 master-0 kubenswrapper[4051]: I0312 18:11:59.388191 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:11:59.388417 master-0 kubenswrapper[4051]: E0312 18:11:59.388326 4051 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 18:11:59.388492 master-0 kubenswrapper[4051]: E0312 18:11:59.388458 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs podName:5b48f8fd-2efe-44e3-a6d7-c71358b83a2f nodeName:}" failed. No retries permitted until 2026-03-12 18:12:01.388442912 +0000 UTC m=+100.867569143 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs") pod "network-metrics-daemon-z4sc9" (UID: "5b48f8fd-2efe-44e3-a6d7-c71358b83a2f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 18:12:00.609377 master-0 kubenswrapper[4051]: I0312 18:12:00.609319 4051 generic.go:334] "Generic (PLEG): container finished" podID="455f0aad-add2-49d0-995c-f92467bce2d6" containerID="f1a76c40be6adf4508f866e6663729add45233d4cc201334c0c921cf2c117caa" exitCode=0
Mar 12 18:12:00.609377 master-0 kubenswrapper[4051]: I0312 18:12:00.609356 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lv8hk" event={"ID":"455f0aad-add2-49d0-995c-f92467bce2d6","Type":"ContainerDied","Data":"f1a76c40be6adf4508f866e6663729add45233d4cc201334c0c921cf2c117caa"}
Mar 12 18:12:01.280606 master-0 kubenswrapper[4051]: I0312 18:12:01.280566 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:12:01.280915 master-0 kubenswrapper[4051]: E0312 18:12:01.280879 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f"
Mar 12 18:12:01.290697 master-0 kubenswrapper[4051]: I0312 18:12:01.290668 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 12 18:12:01.401780 master-0 kubenswrapper[4051]: I0312 18:12:01.401700 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:12:01.402059 master-0 kubenswrapper[4051]: E0312 18:12:01.401856 4051 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 18:12:01.402059 master-0 kubenswrapper[4051]: E0312 18:12:01.401926 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs podName:5b48f8fd-2efe-44e3-a6d7-c71358b83a2f nodeName:}" failed. No retries permitted until 2026-03-12 18:12:05.401909147 +0000 UTC m=+104.881035378 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs") pod "network-metrics-daemon-z4sc9" (UID: "5b48f8fd-2efe-44e3-a6d7-c71358b83a2f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 18:12:03.281281 master-0 kubenswrapper[4051]: I0312 18:12:03.281158 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:12:03.281853 master-0 kubenswrapper[4051]: E0312 18:12:03.281813 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f"
Mar 12 18:12:05.281130 master-0 kubenswrapper[4051]: I0312 18:12:05.280795 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:12:05.283133 master-0 kubenswrapper[4051]: E0312 18:12:05.281179 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f"
Mar 12 18:12:05.476953 master-0 kubenswrapper[4051]: I0312 18:12:05.476856 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:12:05.477207 master-0 kubenswrapper[4051]: E0312 18:12:05.477050 4051 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 18:12:05.477207 master-0 kubenswrapper[4051]: E0312 18:12:05.477128 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs podName:5b48f8fd-2efe-44e3-a6d7-c71358b83a2f nodeName:}" failed. No retries permitted until 2026-03-12 18:12:13.47710922 +0000 UTC m=+112.956235441 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs") pod "network-metrics-daemon-z4sc9" (UID: "5b48f8fd-2efe-44e3-a6d7-c71358b83a2f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 18:12:07.087823 master-0 kubenswrapper[4051]: I0312 18:12:07.087762 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx"
Mar 12 18:12:07.088456 master-0 kubenswrapper[4051]: E0312 18:12:07.087891 4051 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 12 18:12:07.088456 master-0 kubenswrapper[4051]: E0312 18:12:07.087995 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert podName:00755a4e-124c-4a51-b1c5-7c505b3637a8 nodeName:}" failed. No retries permitted until 2026-03-12 18:12:39.08795266 +0000 UTC m=+138.567078891 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert") pod "cluster-version-operator-745944c6b7-cxwmx" (UID: "00755a4e-124c-4a51-b1c5-7c505b3637a8") : secret "cluster-version-operator-serving-cert" not found
Mar 12 18:12:07.281337 master-0 kubenswrapper[4051]: I0312 18:12:07.281288 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:12:07.281633 master-0 kubenswrapper[4051]: E0312 18:12:07.281425 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f"
Mar 12 18:12:08.039659 master-0 kubenswrapper[4051]: I0312 18:12:08.039124 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9"]
Mar 12 18:12:08.039659 master-0 kubenswrapper[4051]: I0312 18:12:08.039468 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9"
Mar 12 18:12:08.041497 master-0 kubenswrapper[4051]: I0312 18:12:08.041114 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 12 18:12:08.041497 master-0 kubenswrapper[4051]: I0312 18:12:08.041481 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 12 18:12:08.041583 master-0 kubenswrapper[4051]: I0312 18:12:08.041498 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 12 18:12:08.041613 master-0 kubenswrapper[4051]: I0312 18:12:08.041600 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 12 18:12:08.042236 master-0 kubenswrapper[4051]: I0312 18:12:08.041827 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 12 18:12:08.195045 master-0 kubenswrapper[4051]: I0312 18:12:08.194956 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9"
Mar 12 18:12:08.195045 master-0 kubenswrapper[4051]: I0312 18:12:08.195020 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9"
Mar 12 18:12:08.195045 master-0 kubenswrapper[4051]: I0312 18:12:08.195047 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6ggc\" (UniqueName: \"kubernetes.io/projected/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-kube-api-access-b6ggc\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9"
Mar 12 18:12:08.195045 master-0 kubenswrapper[4051]: I0312 18:12:08.195071 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9"
Mar 12 18:12:08.258718 master-0 kubenswrapper[4051]: I0312 18:12:08.258658 4051 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=7.258639635 podStartE2EDuration="7.258639635s" podCreationTimestamp="2026-03-12 18:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:12:08.070004443 +0000 UTC m=+107.549130674" watchObservedRunningTime="2026-03-12 18:12:08.258639635 +0000 UTC m=+107.737765866"
Mar 12 18:12:08.258922 master-0 kubenswrapper[4051]: I0312 18:12:08.258776 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xwvdd"]
Mar 12 18:12:08.259449 master-0 kubenswrapper[4051]: I0312 18:12:08.259428 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.261534 master-0 kubenswrapper[4051]: I0312 18:12:08.261489 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 12 18:12:08.262379 master-0 kubenswrapper[4051]: I0312 18:12:08.262357 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 12 18:12:08.296220 master-0 kubenswrapper[4051]: I0312 18:12:08.296167 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9"
Mar 12 18:12:08.296485 master-0 kubenswrapper[4051]: I0312 18:12:08.296465 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9"
Mar 12 18:12:08.297202 master-0 kubenswrapper[4051]: I0312 18:12:08.296500 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6ggc\" (UniqueName: \"kubernetes.io/projected/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-kube-api-access-b6ggc\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9"
Mar 12 18:12:08.297202 master-0 kubenswrapper[4051]: I0312 18:12:08.296560 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9"
Mar 12 18:12:08.297202 master-0 kubenswrapper[4051]: I0312 18:12:08.296883 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9"
Mar 12 18:12:08.297202 master-0 kubenswrapper[4051]: I0312 18:12:08.297134 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9"
Mar 12 18:12:08.300756 master-0 kubenswrapper[4051]: I0312 18:12:08.300714 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9"
Mar 12 18:12:08.314887 master-0 kubenswrapper[4051]: I0312 18:12:08.314858 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6ggc\" (UniqueName: \"kubernetes.io/projected/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-kube-api-access-b6ggc\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9"
Mar 12 18:12:08.353923 master-0 kubenswrapper[4051]: I0312 18:12:08.353618 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9"
Mar 12 18:12:08.397851 master-0 kubenswrapper[4051]: I0312 18:12:08.397822 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-run-ovn-kubernetes\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.397936 master-0 kubenswrapper[4051]: I0312 18:12:08.397860 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-run-openvswitch\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.397936 master-0 kubenswrapper[4051]: I0312 18:12:08.397878 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-node-log\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.397936 master-0 kubenswrapper[4051]: I0312 18:12:08.397895 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.397936 master-0 kubenswrapper[4051]: I0312 18:12:08.397915 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ee43418f-8381-4526-9092-bcc61cfcd2e9-ovn-node-metrics-cert\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.397936 master-0 kubenswrapper[4051]: I0312 18:12:08.397930 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-cni-netd\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.398078 master-0 kubenswrapper[4051]: I0312 18:12:08.397946 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4d4h\" (UniqueName: \"kubernetes.io/projected/ee43418f-8381-4526-9092-bcc61cfcd2e9-kube-api-access-d4d4h\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.398078 master-0 kubenswrapper[4051]: I0312 18:12:08.397975 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-var-lib-openvswitch\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.398078 master-0 kubenswrapper[4051]: I0312 18:12:08.397991 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ee43418f-8381-4526-9092-bcc61cfcd2e9-env-overrides\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.398078 master-0 kubenswrapper[4051]: I0312 18:12:08.398017 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-run-ovn\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.398078 master-0 kubenswrapper[4051]: I0312 18:12:08.398036 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-kubelet\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.398078 master-0 kubenswrapper[4051]: I0312 18:12:08.398050 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-slash\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.398078 master-0 kubenswrapper[4051]: I0312 18:12:08.398067 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-run-systemd\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.398253 master-0 kubenswrapper[4051]: I0312 18:12:08.398082 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-log-socket\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.398253 master-0 kubenswrapper[4051]: I0312 18:12:08.398098 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-etc-openvswitch\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.398253 master-0 kubenswrapper[4051]: I0312 18:12:08.398115 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-cni-bin\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.398253 master-0 kubenswrapper[4051]: I0312 18:12:08.398130 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-systemd-units\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.398253 master-0 kubenswrapper[4051]: I0312 18:12:08.398146 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ee43418f-8381-4526-9092-bcc61cfcd2e9-ovnkube-script-lib\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.398253 master-0 kubenswrapper[4051]: I0312 18:12:08.398161 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-run-netns\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.398253 master-0 kubenswrapper[4051]: I0312 18:12:08.398176 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ee43418f-8381-4526-9092-bcc61cfcd2e9-ovnkube-config\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.498763 master-0 kubenswrapper[4051]: I0312 18:12:08.498686 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-run-ovn\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.498763 master-0 kubenswrapper[4051]: I0312 18:12:08.498723 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-kubelet\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.498971 master-0 kubenswrapper[4051]: I0312 18:12:08.498781 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-run-ovn\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.498971 master-0 kubenswrapper[4051]: I0312 18:12:08.498838 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-slash\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.498971 master-0 kubenswrapper[4051]: I0312 18:12:08.498860 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-run-systemd\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.498971 master-0 kubenswrapper[4051]: I0312 18:12:08.498879 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-log-socket\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.498971 master-0 kubenswrapper[4051]: I0312 18:12:08.498894 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-etc-openvswitch\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.498971 master-0 kubenswrapper[4051]: I0312 18:12:08.498910 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-cni-bin\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.498971 master-0 kubenswrapper[4051]: I0312 18:12:08.498925 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-systemd-units\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.498971 master-0 kubenswrapper[4051]: I0312 18:12:08.498941 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ee43418f-8381-4526-9092-bcc61cfcd2e9-ovnkube-script-lib\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.498971 master-0 kubenswrapper[4051]: I0312 18:12:08.498956 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-run-netns\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.498971 master-0 kubenswrapper[4051]: I0312 18:12:08.498970 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ee43418f-8381-4526-9092-bcc61cfcd2e9-ovnkube-config\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.499222 master-0 kubenswrapper[4051]: I0312 18:12:08.498987 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-run-ovn-kubernetes\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.499222 master-0 kubenswrapper[4051]: I0312 18:12:08.499005 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-run-openvswitch\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.499222 master-0 kubenswrapper[4051]: I0312 18:12:08.499019 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-node-log\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.499222 master-0 kubenswrapper[4051]: I0312 18:12:08.499035 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.499222 master-0 kubenswrapper[4051]: I0312 18:12:08.499053 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ee43418f-8381-4526-9092-bcc61cfcd2e9-ovn-node-metrics-cert\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.499222 master-0 kubenswrapper[4051]: I0312 18:12:08.499069 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-cni-netd\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.499222 master-0 kubenswrapper[4051]: I0312 18:12:08.499083 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4d4h\" (UniqueName: \"kubernetes.io/projected/ee43418f-8381-4526-9092-bcc61cfcd2e9-kube-api-access-d4d4h\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.499222 master-0 kubenswrapper[4051]: I0312 18:12:08.499109 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-var-lib-openvswitch\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.499222 master-0 kubenswrapper[4051]: I0312 18:12:08.499127 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ee43418f-8381-4526-9092-bcc61cfcd2e9-env-overrides\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.500033 master-0 kubenswrapper[4051]: I0312 18:12:08.499490 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-run-netns\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.500033 master-0 kubenswrapper[4051]: I0312 18:12:08.499553 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-kubelet\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.500033 master-0 kubenswrapper[4051]: I0312 18:12:08.499576 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-slash\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.500033 master-0 kubenswrapper[4051]: I0312 18:12:08.499599 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-cni-bin\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.500033 master-0 kubenswrapper[4051]: I0312 18:12:08.499629 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-run-systemd\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.500033 master-0 kubenswrapper[4051]: I0312 18:12:08.499631 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ee43418f-8381-4526-9092-bcc61cfcd2e9-env-overrides\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.500033 master-0 kubenswrapper[4051]: I0312 18:12:08.499653 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-systemd-units\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.500033 master-0 kubenswrapper[4051]: I0312 18:12:08.499663 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-run-ovn-kubernetes\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.500033 master-0 kubenswrapper[4051]: I0312 18:12:08.499657 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-etc-openvswitch\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.500033 master-0 kubenswrapper[4051]: I0312 18:12:08.499689 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-log-socket\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:08.500033 master-0 kubenswrapper[4051]: I0312 18:12:08.499709 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName:
\"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-node-log\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd" Mar 12 18:12:08.500033 master-0 kubenswrapper[4051]: I0312 18:12:08.499716 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-cni-netd\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd" Mar 12 18:12:08.500033 master-0 kubenswrapper[4051]: I0312 18:12:08.499742 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-run-openvswitch\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd" Mar 12 18:12:08.500033 master-0 kubenswrapper[4051]: I0312 18:12:08.499766 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd" Mar 12 18:12:08.500033 master-0 kubenswrapper[4051]: I0312 18:12:08.499926 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-var-lib-openvswitch\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd" Mar 12 18:12:08.500454 master-0 kubenswrapper[4051]: I0312 18:12:08.500152 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/ee43418f-8381-4526-9092-bcc61cfcd2e9-ovnkube-config\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd" Mar 12 18:12:08.500734 master-0 kubenswrapper[4051]: I0312 18:12:08.500714 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ee43418f-8381-4526-9092-bcc61cfcd2e9-ovnkube-script-lib\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd" Mar 12 18:12:08.502753 master-0 kubenswrapper[4051]: I0312 18:12:08.502734 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ee43418f-8381-4526-9092-bcc61cfcd2e9-ovn-node-metrics-cert\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd" Mar 12 18:12:08.514790 master-0 kubenswrapper[4051]: I0312 18:12:08.514763 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4d4h\" (UniqueName: \"kubernetes.io/projected/ee43418f-8381-4526-9092-bcc61cfcd2e9-kube-api-access-d4d4h\") pod \"ovnkube-node-xwvdd\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd" Mar 12 18:12:08.572229 master-0 kubenswrapper[4051]: I0312 18:12:08.572122 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd" Mar 12 18:12:09.280677 master-0 kubenswrapper[4051]: I0312 18:12:09.280587 4051 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:12:09.281583 master-0 kubenswrapper[4051]: E0312 18:12:09.280694 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f" Mar 12 18:12:11.237027 master-0 kubenswrapper[4051]: I0312 18:12:11.236965 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-cpthp"] Mar 12 18:12:11.237532 master-0 kubenswrapper[4051]: I0312 18:12:11.237340 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:12:11.237532 master-0 kubenswrapper[4051]: E0312 18:12:11.237409 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3" Mar 12 18:12:11.275494 master-0 kubenswrapper[4051]: W0312 18:12:11.275456 4051 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee43418f_8381_4526_9092_bcc61cfcd2e9.slice/crio-4b6e13f8214d025f7992cbcd205cec35aba4e97b520d7e8f9e3e7a6bca8ac41d WatchSource:0}: Error finding container 4b6e13f8214d025f7992cbcd205cec35aba4e97b520d7e8f9e3e7a6bca8ac41d: Status 404 returned error can't find the container with id 4b6e13f8214d025f7992cbcd205cec35aba4e97b520d7e8f9e3e7a6bca8ac41d Mar 12 18:12:11.276913 master-0 kubenswrapper[4051]: W0312 18:12:11.276879 4051 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74eb1407_de29_42e5_9e6c_ce1bec3a9d80.slice/crio-16d696d609e4d9275dce2cfecd0a4d1078c8e60ea7f137e92635d2bfd874a46b WatchSource:0}: Error finding container 16d696d609e4d9275dce2cfecd0a4d1078c8e60ea7f137e92635d2bfd874a46b: Status 404 returned error can't find the container with id 16d696d609e4d9275dce2cfecd0a4d1078c8e60ea7f137e92635d2bfd874a46b Mar 12 18:12:11.280243 master-0 kubenswrapper[4051]: I0312 18:12:11.280222 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:12:11.280838 master-0 kubenswrapper[4051]: E0312 18:12:11.280802 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f" Mar 12 18:12:11.320191 master-0 kubenswrapper[4051]: I0312 18:12:11.320051 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptrtx\" (UniqueName: \"kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx\") pod \"network-check-target-cpthp\" (UID: \"33feec78-4592-4343-965b-aa1b7044fcf3\") " pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:12:11.421209 master-0 kubenswrapper[4051]: I0312 18:12:11.421165 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptrtx\" (UniqueName: \"kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx\") pod \"network-check-target-cpthp\" (UID: \"33feec78-4592-4343-965b-aa1b7044fcf3\") " pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:12:11.431569 master-0 kubenswrapper[4051]: E0312 18:12:11.431534 4051 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 18:12:11.431569 master-0 kubenswrapper[4051]: E0312 18:12:11.431565 4051 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 18:12:11.431670 master-0 kubenswrapper[4051]: E0312 18:12:11.431576 4051 projected.go:194] Error preparing data for projected volume kube-api-access-ptrtx for pod openshift-network-diagnostics/network-check-target-cpthp: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 18:12:11.431670 master-0 kubenswrapper[4051]: E0312 18:12:11.431628 4051 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx podName:33feec78-4592-4343-965b-aa1b7044fcf3 nodeName:}" failed. No retries permitted until 2026-03-12 18:12:11.931613482 +0000 UTC m=+111.410739713 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ptrtx" (UniqueName: "kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx") pod "network-check-target-cpthp" (UID: "33feec78-4592-4343-965b-aa1b7044fcf3") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 18:12:11.776474 master-0 kubenswrapper[4051]: I0312 18:12:11.776387 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" event={"ID":"74eb1407-de29-42e5-9e6c-ce1bec3a9d80","Type":"ContainerStarted","Data":"a91031625826df94e4579f56ee54c10b609c9627675328f5ef42e86c4fa33b67"} Mar 12 18:12:11.776474 master-0 kubenswrapper[4051]: I0312 18:12:11.776459 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" event={"ID":"74eb1407-de29-42e5-9e6c-ce1bec3a9d80","Type":"ContainerStarted","Data":"16d696d609e4d9275dce2cfecd0a4d1078c8e60ea7f137e92635d2bfd874a46b"} Mar 12 18:12:11.777226 master-0 kubenswrapper[4051]: I0312 18:12:11.777181 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd" event={"ID":"ee43418f-8381-4526-9092-bcc61cfcd2e9","Type":"ContainerStarted","Data":"4b6e13f8214d025f7992cbcd205cec35aba4e97b520d7e8f9e3e7a6bca8ac41d"} Mar 12 18:12:11.778331 master-0 kubenswrapper[4051]: I0312 18:12:11.778298 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-656l8" 
event={"ID":"38a4bf73-479e-4bbf-9aa3-639fc288c8bc","Type":"ContainerStarted","Data":"74cfb002cde3b0dc105ed7540240e5dbfa0cbf86eb23edcec41d1f70ed2176e2"} Mar 12 18:12:11.781658 master-0 kubenswrapper[4051]: I0312 18:12:11.781627 4051 generic.go:334] "Generic (PLEG): container finished" podID="455f0aad-add2-49d0-995c-f92467bce2d6" containerID="a06bfc83f83320e9affd2425dbf28da14fdf99e08ecffd8df981975c0ab701b1" exitCode=0 Mar 12 18:12:11.781721 master-0 kubenswrapper[4051]: I0312 18:12:11.781666 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lv8hk" event={"ID":"455f0aad-add2-49d0-995c-f92467bce2d6","Type":"ContainerDied","Data":"a06bfc83f83320e9affd2425dbf28da14fdf99e08ecffd8df981975c0ab701b1"} Mar 12 18:12:11.791439 master-0 kubenswrapper[4051]: I0312 18:12:11.791363 4051 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-656l8" podStartSLOduration=1.7592140600000001 podStartE2EDuration="15.791345496s" podCreationTimestamp="2026-03-12 18:11:56 +0000 UTC" firstStartedPulling="2026-03-12 18:11:57.332466684 +0000 UTC m=+96.811592945" lastFinishedPulling="2026-03-12 18:12:11.36459816 +0000 UTC m=+110.843724381" observedRunningTime="2026-03-12 18:12:11.790833103 +0000 UTC m=+111.269959394" watchObservedRunningTime="2026-03-12 18:12:11.791345496 +0000 UTC m=+111.270471727" Mar 12 18:12:12.025890 master-0 kubenswrapper[4051]: I0312 18:12:12.025845 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptrtx\" (UniqueName: \"kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx\") pod \"network-check-target-cpthp\" (UID: \"33feec78-4592-4343-965b-aa1b7044fcf3\") " pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:12:12.026063 master-0 kubenswrapper[4051]: E0312 18:12:12.026000 4051 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 18:12:12.026063 master-0 kubenswrapper[4051]: E0312 18:12:12.026014 4051 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 18:12:12.026063 master-0 kubenswrapper[4051]: E0312 18:12:12.026024 4051 projected.go:194] Error preparing data for projected volume kube-api-access-ptrtx for pod openshift-network-diagnostics/network-check-target-cpthp: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 18:12:12.026187 master-0 kubenswrapper[4051]: E0312 18:12:12.026069 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx podName:33feec78-4592-4343-965b-aa1b7044fcf3 nodeName:}" failed. No retries permitted until 2026-03-12 18:12:13.026056802 +0000 UTC m=+112.505183023 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptrtx" (UniqueName: "kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx") pod "network-check-target-cpthp" (UID: "33feec78-4592-4343-965b-aa1b7044fcf3") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 18:12:13.033751 master-0 kubenswrapper[4051]: I0312 18:12:13.033036 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptrtx\" (UniqueName: \"kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx\") pod \"network-check-target-cpthp\" (UID: \"33feec78-4592-4343-965b-aa1b7044fcf3\") " pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:12:13.033751 master-0 kubenswrapper[4051]: E0312 18:12:13.033217 4051 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 18:12:13.033751 master-0 kubenswrapper[4051]: E0312 18:12:13.033254 4051 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 18:12:13.033751 master-0 kubenswrapper[4051]: E0312 18:12:13.033264 4051 projected.go:194] Error preparing data for projected volume kube-api-access-ptrtx for pod openshift-network-diagnostics/network-check-target-cpthp: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 18:12:13.033751 master-0 kubenswrapper[4051]: E0312 18:12:13.033313 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx podName:33feec78-4592-4343-965b-aa1b7044fcf3 nodeName:}" 
failed. No retries permitted until 2026-03-12 18:12:15.033299474 +0000 UTC m=+114.512425705 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptrtx" (UniqueName: "kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx") pod "network-check-target-cpthp" (UID: "33feec78-4592-4343-965b-aa1b7044fcf3") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 18:12:13.281147 master-0 kubenswrapper[4051]: I0312 18:12:13.280971 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:12:13.281147 master-0 kubenswrapper[4051]: I0312 18:12:13.281080 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:12:13.281400 master-0 kubenswrapper[4051]: E0312 18:12:13.281200 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f" Mar 12 18:12:13.282744 master-0 kubenswrapper[4051]: E0312 18:12:13.281405 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3" Mar 12 18:12:13.538608 master-0 kubenswrapper[4051]: I0312 18:12:13.538470 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:12:13.538856 master-0 kubenswrapper[4051]: E0312 18:12:13.538667 4051 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 18:12:13.538856 master-0 kubenswrapper[4051]: E0312 18:12:13.538755 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs podName:5b48f8fd-2efe-44e3-a6d7-c71358b83a2f nodeName:}" failed. No retries permitted until 2026-03-12 18:12:29.538733317 +0000 UTC m=+129.017859548 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs") pod "network-metrics-daemon-z4sc9" (UID: "5b48f8fd-2efe-44e3-a6d7-c71358b83a2f") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 18:12:13.836075 master-0 kubenswrapper[4051]: I0312 18:12:13.835688 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-hqrqt"] Mar 12 18:12:13.836075 master-0 kubenswrapper[4051]: I0312 18:12:13.836038 4051 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 18:12:13.838355 master-0 kubenswrapper[4051]: I0312 18:12:13.838151 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 12 18:12:13.838503 master-0 kubenswrapper[4051]: I0312 18:12:13.838462 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 12 18:12:13.838797 master-0 kubenswrapper[4051]: I0312 18:12:13.838768 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 12 18:12:13.839046 master-0 kubenswrapper[4051]: I0312 18:12:13.839018 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 12 18:12:13.839203 master-0 kubenswrapper[4051]: I0312 18:12:13.839187 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 12 18:12:13.841188 master-0 kubenswrapper[4051]: I0312 18:12:13.841115 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-env-overrides\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 18:12:13.841188 master-0 kubenswrapper[4051]: I0312 18:12:13.841149 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdlcw\" (UniqueName: \"kubernetes.io/projected/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-kube-api-access-tdlcw\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 
18:12:13.841322 master-0 kubenswrapper[4051]: I0312 18:12:13.841210 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-webhook-cert\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 18:12:13.841322 master-0 kubenswrapper[4051]: I0312 18:12:13.841227 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-ovnkube-identity-cm\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 18:12:13.942336 master-0 kubenswrapper[4051]: I0312 18:12:13.942254 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-webhook-cert\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 18:12:13.942336 master-0 kubenswrapper[4051]: I0312 18:12:13.942295 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-ovnkube-identity-cm\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 18:12:13.942336 master-0 kubenswrapper[4051]: I0312 18:12:13.942332 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-env-overrides\") pod 
\"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 18:12:13.942336 master-0 kubenswrapper[4051]: I0312 18:12:13.942349 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdlcw\" (UniqueName: \"kubernetes.io/projected/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-kube-api-access-tdlcw\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 18:12:13.943787 master-0 kubenswrapper[4051]: E0312 18:12:13.942661 4051 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: secret "network-node-identity-cert" not found Mar 12 18:12:13.943787 master-0 kubenswrapper[4051]: E0312 18:12:13.942731 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-webhook-cert podName:8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe nodeName:}" failed. No retries permitted until 2026-03-12 18:12:14.442695377 +0000 UTC m=+113.921821608 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-webhook-cert") pod "network-node-identity-hqrqt" (UID: "8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe") : secret "network-node-identity-cert" not found
Mar 12 18:12:13.944716 master-0 kubenswrapper[4051]: I0312 18:12:13.944391 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-ovnkube-identity-cm\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt"
Mar 12 18:12:13.944853 master-0 kubenswrapper[4051]: I0312 18:12:13.944825 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-env-overrides\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt"
Mar 12 18:12:13.964225 master-0 kubenswrapper[4051]: I0312 18:12:13.964188 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdlcw\" (UniqueName: \"kubernetes.io/projected/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-kube-api-access-tdlcw\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt"
Mar 12 18:12:14.447394 master-0 kubenswrapper[4051]: I0312 18:12:14.446792 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-webhook-cert\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt"
Mar 12 18:12:14.450671 master-0 kubenswrapper[4051]: I0312 18:12:14.450623 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-webhook-cert\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt"
Mar 12 18:12:14.460307 master-0 kubenswrapper[4051]: I0312 18:12:14.460272 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-hqrqt"
Mar 12 18:12:14.469425 master-0 kubenswrapper[4051]: W0312 18:12:14.469380 4051 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e2a83a2_7063_4e17_bf6c_ca6fc6f8cfbe.slice/crio-74bbf11cd33cced50ba626f06b188adf24ce7f72b1161eb2c06db1ce6ae46dd5 WatchSource:0}: Error finding container 74bbf11cd33cced50ba626f06b188adf24ce7f72b1161eb2c06db1ce6ae46dd5: Status 404 returned error can't find the container with id 74bbf11cd33cced50ba626f06b188adf24ce7f72b1161eb2c06db1ce6ae46dd5
Mar 12 18:12:14.791642 master-0 kubenswrapper[4051]: I0312 18:12:14.791600 4051 generic.go:334] "Generic (PLEG): container finished" podID="455f0aad-add2-49d0-995c-f92467bce2d6" containerID="6f8ff60199929b0a4e5f12c0833311ee92d8752831cac14e7f6e3610c7c482cd" exitCode=0
Mar 12 18:12:14.791831 master-0 kubenswrapper[4051]: I0312 18:12:14.791667 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lv8hk" event={"ID":"455f0aad-add2-49d0-995c-f92467bce2d6","Type":"ContainerDied","Data":"6f8ff60199929b0a4e5f12c0833311ee92d8752831cac14e7f6e3610c7c482cd"}
Mar 12 18:12:14.792621 master-0 kubenswrapper[4051]: I0312 18:12:14.792580 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-hqrqt" event={"ID":"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe","Type":"ContainerStarted","Data":"74bbf11cd33cced50ba626f06b188adf24ce7f72b1161eb2c06db1ce6ae46dd5"}
Mar 12 18:12:15.050661 master-0 kubenswrapper[4051]: I0312 18:12:15.050532 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptrtx\" (UniqueName: \"kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx\") pod \"network-check-target-cpthp\" (UID: \"33feec78-4592-4343-965b-aa1b7044fcf3\") " pod="openshift-network-diagnostics/network-check-target-cpthp"
Mar 12 18:12:15.050846 master-0 kubenswrapper[4051]: E0312 18:12:15.050769 4051 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 12 18:12:15.050846 master-0 kubenswrapper[4051]: E0312 18:12:15.050800 4051 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 12 18:12:15.050846 master-0 kubenswrapper[4051]: E0312 18:12:15.050814 4051 projected.go:194] Error preparing data for projected volume kube-api-access-ptrtx for pod openshift-network-diagnostics/network-check-target-cpthp: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 18:12:15.050964 master-0 kubenswrapper[4051]: E0312 18:12:15.050888 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx podName:33feec78-4592-4343-965b-aa1b7044fcf3 nodeName:}" failed. No retries permitted until 2026-03-12 18:12:19.050868436 +0000 UTC m=+118.529994727 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptrtx" (UniqueName: "kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx") pod "network-check-target-cpthp" (UID: "33feec78-4592-4343-965b-aa1b7044fcf3") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 18:12:15.281161 master-0 kubenswrapper[4051]: I0312 18:12:15.281119 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:12:15.281366 master-0 kubenswrapper[4051]: I0312 18:12:15.281119 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp"
Mar 12 18:12:15.282079 master-0 kubenswrapper[4051]: E0312 18:12:15.281995 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3"
Mar 12 18:12:15.282079 master-0 kubenswrapper[4051]: E0312 18:12:15.281539 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f"
Mar 12 18:12:15.319442 master-0 kubenswrapper[4051]: I0312 18:12:15.319340 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 12 18:12:17.281113 master-0 kubenswrapper[4051]: I0312 18:12:17.281007 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp"
Mar 12 18:12:17.281113 master-0 kubenswrapper[4051]: I0312 18:12:17.281023 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:12:17.281652 master-0 kubenswrapper[4051]: E0312 18:12:17.281116 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3"
Mar 12 18:12:17.281652 master-0 kubenswrapper[4051]: E0312 18:12:17.281230 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f"
Mar 12 18:12:17.801539 master-0 kubenswrapper[4051]: I0312 18:12:17.801479 4051 generic.go:334] "Generic (PLEG): container finished" podID="455f0aad-add2-49d0-995c-f92467bce2d6" containerID="47d87208497022a24111ccaca14cfa76489b3e3c8d2e4baeec44eed1ec3639c0" exitCode=0
Mar 12 18:12:17.801539 master-0 kubenswrapper[4051]: I0312 18:12:17.801548 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lv8hk" event={"ID":"455f0aad-add2-49d0-995c-f92467bce2d6","Type":"ContainerDied","Data":"47d87208497022a24111ccaca14cfa76489b3e3c8d2e4baeec44eed1ec3639c0"}
Mar 12 18:12:19.119923 master-0 kubenswrapper[4051]: I0312 18:12:19.119874 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptrtx\" (UniqueName: \"kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx\") pod \"network-check-target-cpthp\" (UID: \"33feec78-4592-4343-965b-aa1b7044fcf3\") " pod="openshift-network-diagnostics/network-check-target-cpthp"
Mar 12 18:12:19.120813 master-0 kubenswrapper[4051]: E0312 18:12:19.120038 4051 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 12 18:12:19.120813 master-0 kubenswrapper[4051]: E0312 18:12:19.120062 4051 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 12 18:12:19.120813 master-0 kubenswrapper[4051]: E0312 18:12:19.120077 4051 projected.go:194] Error preparing data for projected volume kube-api-access-ptrtx for pod openshift-network-diagnostics/network-check-target-cpthp: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 18:12:19.120813 master-0 kubenswrapper[4051]: E0312 18:12:19.120121 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx podName:33feec78-4592-4343-965b-aa1b7044fcf3 nodeName:}" failed. No retries permitted until 2026-03-12 18:12:27.120109195 +0000 UTC m=+126.599235426 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptrtx" (UniqueName: "kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx") pod "network-check-target-cpthp" (UID: "33feec78-4592-4343-965b-aa1b7044fcf3") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 18:12:19.281665 master-0 kubenswrapper[4051]: I0312 18:12:19.281610 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:12:19.281829 master-0 kubenswrapper[4051]: I0312 18:12:19.281684 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp"
Mar 12 18:12:19.281829 master-0 kubenswrapper[4051]: E0312 18:12:19.281713 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f"
Mar 12 18:12:19.281829 master-0 kubenswrapper[4051]: E0312 18:12:19.281792 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3"
Mar 12 18:12:20.288835 master-0 kubenswrapper[4051]: I0312 18:12:20.288699 4051 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=5.288684464 podStartE2EDuration="5.288684464s" podCreationTimestamp="2026-03-12 18:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:12:17.878031904 +0000 UTC m=+117.357158135" watchObservedRunningTime="2026-03-12 18:12:20.288684464 +0000 UTC m=+119.767810695"
Mar 12 18:12:20.289817 master-0 kubenswrapper[4051]: I0312 18:12:20.289786 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 12 18:12:21.188016 master-0 kubenswrapper[4051]: E0312 18:12:21.187944 4051 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Mar 12 18:12:21.270210 master-0 kubenswrapper[4051]: E0312 18:12:21.270158 4051 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 12 18:12:21.282172 master-0 kubenswrapper[4051]: I0312 18:12:21.281026 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:12:21.282172 master-0 kubenswrapper[4051]: I0312 18:12:21.281185 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp"
Mar 12 18:12:21.282172 master-0 kubenswrapper[4051]: E0312 18:12:21.282115 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3"
Mar 12 18:12:21.283011 master-0 kubenswrapper[4051]: E0312 18:12:21.282016 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f"
Mar 12 18:12:23.281009 master-0 kubenswrapper[4051]: I0312 18:12:23.280473 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp"
Mar 12 18:12:23.281636 master-0 kubenswrapper[4051]: I0312 18:12:23.280656 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:12:23.281636 master-0 kubenswrapper[4051]: E0312 18:12:23.281284 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f"
Mar 12 18:12:23.281636 master-0 kubenswrapper[4051]: E0312 18:12:23.281130 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3"
Mar 12 18:12:25.280996 master-0 kubenswrapper[4051]: I0312 18:12:25.280944 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:12:25.282670 master-0 kubenswrapper[4051]: I0312 18:12:25.280943 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp"
Mar 12 18:12:25.282670 master-0 kubenswrapper[4051]: E0312 18:12:25.281084 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f"
Mar 12 18:12:25.282670 master-0 kubenswrapper[4051]: E0312 18:12:25.281127 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3"
Mar 12 18:12:26.271149 master-0 kubenswrapper[4051]: E0312 18:12:26.271087 4051 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 12 18:12:27.158271 master-0 kubenswrapper[4051]: I0312 18:12:27.158188 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptrtx\" (UniqueName: \"kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx\") pod \"network-check-target-cpthp\" (UID: \"33feec78-4592-4343-965b-aa1b7044fcf3\") " pod="openshift-network-diagnostics/network-check-target-cpthp"
Mar 12 18:12:27.159007 master-0 kubenswrapper[4051]: E0312 18:12:27.158354 4051 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 12 18:12:27.159007 master-0 kubenswrapper[4051]: E0312 18:12:27.158374 4051 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 12 18:12:27.159007 master-0 kubenswrapper[4051]: E0312 18:12:27.158387 4051 projected.go:194] Error preparing data for projected volume kube-api-access-ptrtx for pod openshift-network-diagnostics/network-check-target-cpthp: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 18:12:27.159007 master-0 kubenswrapper[4051]: E0312 18:12:27.158446 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx podName:33feec78-4592-4343-965b-aa1b7044fcf3 nodeName:}" failed. No retries permitted until 2026-03-12 18:12:43.158428511 +0000 UTC m=+142.637554742 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptrtx" (UniqueName: "kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx") pod "network-check-target-cpthp" (UID: "33feec78-4592-4343-965b-aa1b7044fcf3") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 18:12:27.281202 master-0 kubenswrapper[4051]: I0312 18:12:27.281149 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp"
Mar 12 18:12:27.281404 master-0 kubenswrapper[4051]: I0312 18:12:27.281148 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:12:27.281404 master-0 kubenswrapper[4051]: E0312 18:12:27.281297 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3"
Mar 12 18:12:27.281404 master-0 kubenswrapper[4051]: E0312 18:12:27.281338 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f"
Mar 12 18:12:29.280297 master-0 kubenswrapper[4051]: I0312 18:12:29.280246 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp"
Mar 12 18:12:29.280858 master-0 kubenswrapper[4051]: I0312 18:12:29.280246 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:12:29.280858 master-0 kubenswrapper[4051]: E0312 18:12:29.280400 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3"
Mar 12 18:12:29.280858 master-0 kubenswrapper[4051]: E0312 18:12:29.280421 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f"
Mar 12 18:12:29.574979 master-0 kubenswrapper[4051]: I0312 18:12:29.574926 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:12:29.575167 master-0 kubenswrapper[4051]: E0312 18:12:29.575071 4051 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 18:12:29.575167 master-0 kubenswrapper[4051]: E0312 18:12:29.575127 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs podName:5b48f8fd-2efe-44e3-a6d7-c71358b83a2f nodeName:}" failed. No retries permitted until 2026-03-12 18:13:01.575110078 +0000 UTC m=+161.054236319 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs") pod "network-metrics-daemon-z4sc9" (UID: "5b48f8fd-2efe-44e3-a6d7-c71358b83a2f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 18:12:31.272290 master-0 kubenswrapper[4051]: E0312 18:12:31.272218 4051 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 12 18:12:31.281226 master-0 kubenswrapper[4051]: I0312 18:12:31.281202 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp"
Mar 12 18:12:31.281313 master-0 kubenswrapper[4051]: I0312 18:12:31.281259 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:12:31.281313 master-0 kubenswrapper[4051]: E0312 18:12:31.281293 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3"
Mar 12 18:12:31.281471 master-0 kubenswrapper[4051]: E0312 18:12:31.281438 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f"
Mar 12 18:12:33.281005 master-0 kubenswrapper[4051]: I0312 18:12:33.280933 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:12:33.281901 master-0 kubenswrapper[4051]: E0312 18:12:33.281069 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f"
Mar 12 18:12:33.281901 master-0 kubenswrapper[4051]: I0312 18:12:33.280933 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp"
Mar 12 18:12:33.281901 master-0 kubenswrapper[4051]: E0312 18:12:33.281155 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3"
Mar 12 18:12:35.280662 master-0 kubenswrapper[4051]: I0312 18:12:35.280608 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:12:35.281274 master-0 kubenswrapper[4051]: I0312 18:12:35.280608 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp"
Mar 12 18:12:35.281274 master-0 kubenswrapper[4051]: E0312 18:12:35.280742 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f"
Mar 12 18:12:35.281274 master-0 kubenswrapper[4051]: E0312 18:12:35.280779 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3"
Mar 12 18:12:36.274006 master-0 kubenswrapper[4051]: E0312 18:12:36.273911 4051 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 12 18:12:37.281319 master-0 kubenswrapper[4051]: I0312 18:12:37.281237 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp"
Mar 12 18:12:37.282053 master-0 kubenswrapper[4051]: I0312 18:12:37.281247 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:12:37.282053 master-0 kubenswrapper[4051]: E0312 18:12:37.281572 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f"
Mar 12 18:12:37.282053 master-0 kubenswrapper[4051]: E0312 18:12:37.281392 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3"
Mar 12 18:12:37.416373 master-0 kubenswrapper[4051]: I0312 18:12:37.416282 4051 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=17.41626323 podStartE2EDuration="17.41626323s" podCreationTimestamp="2026-03-12 18:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:12:21.296443239 +0000 UTC m=+120.775569470" watchObservedRunningTime="2026-03-12 18:12:37.41626323 +0000 UTC m=+136.895389471"
Mar 12 18:12:37.417238 master-0 kubenswrapper[4051]: I0312 18:12:37.417194 4051 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xwvdd"]
Mar 12 18:12:39.150493 master-0 kubenswrapper[4051]: I0312 18:12:39.150419 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx"
Mar 12 18:12:39.151325 master-0 kubenswrapper[4051]: E0312 18:12:39.150645 4051 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 12 18:12:39.151325 master-0 kubenswrapper[4051]: E0312 18:12:39.150765 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert podName:00755a4e-124c-4a51-b1c5-7c505b3637a8 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:43.150728952 +0000 UTC m=+202.629855223 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert") pod "cluster-version-operator-745944c6b7-cxwmx" (UID: "00755a4e-124c-4a51-b1c5-7c505b3637a8") : secret "cluster-version-operator-serving-cert" not found
Mar 12 18:12:39.280981 master-0 kubenswrapper[4051]: I0312 18:12:39.280929 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp"
Mar 12 18:12:39.281553 master-0 kubenswrapper[4051]: I0312 18:12:39.281034 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:12:39.282031 master-0 kubenswrapper[4051]: E0312 18:12:39.281477 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3"
Mar 12 18:12:39.282240 master-0 kubenswrapper[4051]: E0312 18:12:39.282050 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f"
Mar 12 18:12:40.483252 master-0 kubenswrapper[4051]: I0312 18:12:40.483182 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-hqrqt" event={"ID":"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe","Type":"ContainerStarted","Data":"0e62c9f4417a5a9e30eb23f06a18c4ab2b7d089c3e060926866187529335e3de"}
Mar 12 18:12:40.483252 master-0 kubenswrapper[4051]: I0312 18:12:40.483257 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-hqrqt" event={"ID":"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe","Type":"ContainerStarted","Data":"9f542199e120aca0f259c8f37b187c51f9d83a60acac8d5c2aa71e651dac3686"}
Mar 12 18:12:40.488191 master-0 kubenswrapper[4051]: I0312 18:12:40.488129 4051 generic.go:334] "Generic (PLEG): container finished" podID="455f0aad-add2-49d0-995c-f92467bce2d6" containerID="0c7342da7ff90812cfb607510698f6f5025811001aa1d822318142b6a574472a" exitCode=0
Mar 12 18:12:40.488317 master-0 kubenswrapper[4051]: I0312 18:12:40.488235 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lv8hk" event={"ID":"455f0aad-add2-49d0-995c-f92467bce2d6","Type":"ContainerDied","Data":"0c7342da7ff90812cfb607510698f6f5025811001aa1d822318142b6a574472a"}
Mar 12 18:12:40.497729 master-0 kubenswrapper[4051]: I0312 18:12:40.493053 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" event={"ID":"74eb1407-de29-42e5-9e6c-ce1bec3a9d80","Type":"ContainerStarted","Data":"b68bb8a45412c32b722e21748839c3672ba272871c5c90f6c3a4e4de1a85ff86"}
Mar 12 18:12:40.497729 master-0 kubenswrapper[4051]: I0312 18:12:40.496663 4051 generic.go:334] "Generic (PLEG): container finished" podID="ee43418f-8381-4526-9092-bcc61cfcd2e9" containerID="cf83fad4049ad9f6ae4d83c608932c2d1b28eb5535c7e4d63604cd9158893ad6" exitCode=0
Mar 12 18:12:40.497729 master-0 kubenswrapper[4051]: I0312 18:12:40.496724 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd" event={"ID":"ee43418f-8381-4526-9092-bcc61cfcd2e9","Type":"ContainerDied","Data":"cf83fad4049ad9f6ae4d83c608932c2d1b28eb5535c7e4d63604cd9158893ad6"}
Mar 12 18:12:40.508303 master-0 kubenswrapper[4051]: I0312 18:12:40.507872 4051 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-hqrqt" podStartSLOduration=2.415201731 podStartE2EDuration="27.50784355s" podCreationTimestamp="2026-03-12 18:12:13 +0000 UTC" firstStartedPulling="2026-03-12 18:12:14.472380673 +0000 UTC m=+113.951506904" lastFinishedPulling="2026-03-12 18:12:39.565022492 +0000 UTC m=+139.044148723" observedRunningTime="2026-03-12 18:12:40.506723011 +0000 UTC m=+139.985849302" watchObservedRunningTime="2026-03-12 18:12:40.50784355 +0000 UTC m=+139.986969821"
Mar 12 18:12:40.524963 master-0 kubenswrapper[4051]: I0312 18:12:40.524417 4051 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" podStartSLOduration=4.438022172 podStartE2EDuration="32.524392903s" podCreationTimestamp="2026-03-12 18:12:08 +0000 UTC" firstStartedPulling="2026-03-12 18:12:11.470754985 +0000 UTC m=+110.949881216" lastFinishedPulling="2026-03-12 18:12:39.557125706 +0000 UTC m=+139.036251947" observedRunningTime="2026-03-12 18:12:40.524110155 +0000 UTC m=+140.003236466" watchObservedRunningTime="2026-03-12 18:12:40.524392903 +0000 UTC m=+140.003519164"
Mar 12 18:12:40.538562 master-0 kubenswrapper[4051]: I0312 18:12:40.537915 4051 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd"
Mar 12 18:12:40.663214 master-0 kubenswrapper[4051]: I0312 18:12:40.663176 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-var-lib-openvswitch\") pod \"ee43418f-8381-4526-9092-bcc61cfcd2e9\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") "
Mar 12 18:12:40.663214 master-0 kubenswrapper[4051]: I0312 18:12:40.663207 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-run-netns\") pod \"ee43418f-8381-4526-9092-bcc61cfcd2e9\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") "
Mar 12 18:12:40.663345 master-0 kubenswrapper[4051]: I0312 18:12:40.663227 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-cni-netd\") pod \"ee43418f-8381-4526-9092-bcc61cfcd2e9\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") "
Mar 12 18:12:40.663345 master-0 kubenswrapper[4051]: I0312 18:12:40.663242 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-systemd-units\") pod \"ee43418f-8381-4526-9092-bcc61cfcd2e9\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") "
Mar 12 18:12:40.663345 master-0 kubenswrapper[4051]: I0312 18:12:40.663262 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4d4h\" (UniqueName: \"kubernetes.io/projected/ee43418f-8381-4526-9092-bcc61cfcd2e9-kube-api-access-d4d4h\") pod \"ee43418f-8381-4526-9092-bcc61cfcd2e9\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") "
Mar 12 18:12:40.663345 master-0 kubenswrapper[4051]: I0312 18:12:40.663279
4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-run-systemd\") pod \"ee43418f-8381-4526-9092-bcc61cfcd2e9\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " Mar 12 18:12:40.663464 master-0 kubenswrapper[4051]: I0312 18:12:40.663352 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-log-socket\") pod \"ee43418f-8381-4526-9092-bcc61cfcd2e9\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " Mar 12 18:12:40.663464 master-0 kubenswrapper[4051]: I0312 18:12:40.663365 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-etc-openvswitch\") pod \"ee43418f-8381-4526-9092-bcc61cfcd2e9\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " Mar 12 18:12:40.663464 master-0 kubenswrapper[4051]: I0312 18:12:40.663381 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ee43418f-8381-4526-9092-bcc61cfcd2e9-ovnkube-config\") pod \"ee43418f-8381-4526-9092-bcc61cfcd2e9\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " Mar 12 18:12:40.663464 master-0 kubenswrapper[4051]: I0312 18:12:40.663394 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ee43418f-8381-4526-9092-bcc61cfcd2e9\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " Mar 12 18:12:40.663464 master-0 kubenswrapper[4051]: I0312 18:12:40.663410 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-cni-bin\") pod \"ee43418f-8381-4526-9092-bcc61cfcd2e9\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " Mar 12 18:12:40.663464 master-0 kubenswrapper[4051]: I0312 18:12:40.663423 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-run-openvswitch\") pod \"ee43418f-8381-4526-9092-bcc61cfcd2e9\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " Mar 12 18:12:40.663464 master-0 kubenswrapper[4051]: I0312 18:12:40.663437 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-kubelet\") pod \"ee43418f-8381-4526-9092-bcc61cfcd2e9\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " Mar 12 18:12:40.663464 master-0 kubenswrapper[4051]: I0312 18:12:40.663453 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-run-ovn-kubernetes\") pod \"ee43418f-8381-4526-9092-bcc61cfcd2e9\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " Mar 12 18:12:40.663464 master-0 kubenswrapper[4051]: I0312 18:12:40.663468 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-node-log\") pod \"ee43418f-8381-4526-9092-bcc61cfcd2e9\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " Mar 12 18:12:40.663724 master-0 kubenswrapper[4051]: I0312 18:12:40.663483 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-run-ovn\") pod \"ee43418f-8381-4526-9092-bcc61cfcd2e9\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " Mar 
12 18:12:40.663724 master-0 kubenswrapper[4051]: I0312 18:12:40.663527 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ee43418f-8381-4526-9092-bcc61cfcd2e9-ovnkube-script-lib\") pod \"ee43418f-8381-4526-9092-bcc61cfcd2e9\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " Mar 12 18:12:40.663724 master-0 kubenswrapper[4051]: I0312 18:12:40.663543 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ee43418f-8381-4526-9092-bcc61cfcd2e9-env-overrides\") pod \"ee43418f-8381-4526-9092-bcc61cfcd2e9\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " Mar 12 18:12:40.663724 master-0 kubenswrapper[4051]: I0312 18:12:40.663559 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-slash\") pod \"ee43418f-8381-4526-9092-bcc61cfcd2e9\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " Mar 12 18:12:40.663724 master-0 kubenswrapper[4051]: I0312 18:12:40.663575 4051 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ee43418f-8381-4526-9092-bcc61cfcd2e9-ovn-node-metrics-cert\") pod \"ee43418f-8381-4526-9092-bcc61cfcd2e9\" (UID: \"ee43418f-8381-4526-9092-bcc61cfcd2e9\") " Mar 12 18:12:40.664176 master-0 kubenswrapper[4051]: I0312 18:12:40.663923 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "ee43418f-8381-4526-9092-bcc61cfcd2e9" (UID: "ee43418f-8381-4526-9092-bcc61cfcd2e9"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:12:40.664176 master-0 kubenswrapper[4051]: I0312 18:12:40.663991 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "ee43418f-8381-4526-9092-bcc61cfcd2e9" (UID: "ee43418f-8381-4526-9092-bcc61cfcd2e9"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:12:40.664176 master-0 kubenswrapper[4051]: I0312 18:12:40.664016 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "ee43418f-8381-4526-9092-bcc61cfcd2e9" (UID: "ee43418f-8381-4526-9092-bcc61cfcd2e9"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:12:40.664176 master-0 kubenswrapper[4051]: I0312 18:12:40.664038 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "ee43418f-8381-4526-9092-bcc61cfcd2e9" (UID: "ee43418f-8381-4526-9092-bcc61cfcd2e9"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:12:40.664176 master-0 kubenswrapper[4051]: I0312 18:12:40.664061 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "ee43418f-8381-4526-9092-bcc61cfcd2e9" (UID: "ee43418f-8381-4526-9092-bcc61cfcd2e9"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:12:40.664176 master-0 kubenswrapper[4051]: I0312 18:12:40.664095 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "ee43418f-8381-4526-9092-bcc61cfcd2e9" (UID: "ee43418f-8381-4526-9092-bcc61cfcd2e9"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:12:40.664176 master-0 kubenswrapper[4051]: I0312 18:12:40.664126 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "ee43418f-8381-4526-9092-bcc61cfcd2e9" (UID: "ee43418f-8381-4526-9092-bcc61cfcd2e9"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:12:40.664176 master-0 kubenswrapper[4051]: I0312 18:12:40.664153 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "ee43418f-8381-4526-9092-bcc61cfcd2e9" (UID: "ee43418f-8381-4526-9092-bcc61cfcd2e9"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:12:40.664176 master-0 kubenswrapper[4051]: I0312 18:12:40.664176 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "ee43418f-8381-4526-9092-bcc61cfcd2e9" (UID: "ee43418f-8381-4526-9092-bcc61cfcd2e9"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:12:40.664176 master-0 kubenswrapper[4051]: I0312 18:12:40.664172 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-log-socket" (OuterVolumeSpecName: "log-socket") pod "ee43418f-8381-4526-9092-bcc61cfcd2e9" (UID: "ee43418f-8381-4526-9092-bcc61cfcd2e9"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:12:40.664577 master-0 kubenswrapper[4051]: I0312 18:12:40.664201 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "ee43418f-8381-4526-9092-bcc61cfcd2e9" (UID: "ee43418f-8381-4526-9092-bcc61cfcd2e9"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:12:40.664577 master-0 kubenswrapper[4051]: I0312 18:12:40.664214 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "ee43418f-8381-4526-9092-bcc61cfcd2e9" (UID: "ee43418f-8381-4526-9092-bcc61cfcd2e9"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:12:40.664577 master-0 kubenswrapper[4051]: I0312 18:12:40.664205 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "ee43418f-8381-4526-9092-bcc61cfcd2e9" (UID: "ee43418f-8381-4526-9092-bcc61cfcd2e9"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:12:40.664577 master-0 kubenswrapper[4051]: I0312 18:12:40.664228 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-node-log" (OuterVolumeSpecName: "node-log") pod "ee43418f-8381-4526-9092-bcc61cfcd2e9" (UID: "ee43418f-8381-4526-9092-bcc61cfcd2e9"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:12:40.664734 master-0 kubenswrapper[4051]: I0312 18:12:40.664619 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee43418f-8381-4526-9092-bcc61cfcd2e9-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ee43418f-8381-4526-9092-bcc61cfcd2e9" (UID: "ee43418f-8381-4526-9092-bcc61cfcd2e9"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:12:40.664734 master-0 kubenswrapper[4051]: I0312 18:12:40.664657 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-slash" (OuterVolumeSpecName: "host-slash") pod "ee43418f-8381-4526-9092-bcc61cfcd2e9" (UID: "ee43418f-8381-4526-9092-bcc61cfcd2e9"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:12:40.664806 master-0 kubenswrapper[4051]: I0312 18:12:40.664762 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee43418f-8381-4526-9092-bcc61cfcd2e9-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ee43418f-8381-4526-9092-bcc61cfcd2e9" (UID: "ee43418f-8381-4526-9092-bcc61cfcd2e9"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:12:40.664936 master-0 kubenswrapper[4051]: I0312 18:12:40.664896 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee43418f-8381-4526-9092-bcc61cfcd2e9-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "ee43418f-8381-4526-9092-bcc61cfcd2e9" (UID: "ee43418f-8381-4526-9092-bcc61cfcd2e9"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:12:40.668336 master-0 kubenswrapper[4051]: I0312 18:12:40.668305 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee43418f-8381-4526-9092-bcc61cfcd2e9-kube-api-access-d4d4h" (OuterVolumeSpecName: "kube-api-access-d4d4h") pod "ee43418f-8381-4526-9092-bcc61cfcd2e9" (UID: "ee43418f-8381-4526-9092-bcc61cfcd2e9"). InnerVolumeSpecName "kube-api-access-d4d4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:12:40.668336 master-0 kubenswrapper[4051]: I0312 18:12:40.668306 4051 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee43418f-8381-4526-9092-bcc61cfcd2e9-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "ee43418f-8381-4526-9092-bcc61cfcd2e9" (UID: "ee43418f-8381-4526-9092-bcc61cfcd2e9"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:12:40.764890 master-0 kubenswrapper[4051]: I0312 18:12:40.764852 4051 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Mar 12 18:12:40.764890 master-0 kubenswrapper[4051]: I0312 18:12:40.764885 4051 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-systemd-units\") on node \"master-0\" DevicePath \"\"" Mar 12 18:12:40.765047 master-0 kubenswrapper[4051]: I0312 18:12:40.764910 4051 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4d4h\" (UniqueName: \"kubernetes.io/projected/ee43418f-8381-4526-9092-bcc61cfcd2e9-kube-api-access-d4d4h\") on node \"master-0\" DevicePath \"\"" Mar 12 18:12:40.765047 master-0 kubenswrapper[4051]: I0312 18:12:40.764924 4051 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-run-systemd\") on node \"master-0\" DevicePath \"\"" Mar 12 18:12:40.765047 master-0 kubenswrapper[4051]: I0312 18:12:40.764936 4051 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ee43418f-8381-4526-9092-bcc61cfcd2e9-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:12:40.765047 master-0 kubenswrapper[4051]: I0312 18:12:40.764950 4051 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 12 18:12:40.765047 master-0 kubenswrapper[4051]: I0312 18:12:40.765019 4051 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-log-socket\") on node \"master-0\" DevicePath \"\"" Mar 12 18:12:40.765047 master-0 kubenswrapper[4051]: I0312 18:12:40.765036 4051 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 12 18:12:40.765271 master-0 kubenswrapper[4051]: I0312 18:12:40.765052 4051 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 12 18:12:40.765271 master-0 kubenswrapper[4051]: I0312 18:12:40.765077 4051 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-kubelet\") on node \"master-0\" DevicePath \"\"" Mar 12 18:12:40.765271 master-0 kubenswrapper[4051]: I0312 18:12:40.765090 4051 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Mar 12 18:12:40.765271 master-0 kubenswrapper[4051]: I0312 18:12:40.765103 4051 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 12 18:12:40.765271 master-0 kubenswrapper[4051]: I0312 18:12:40.765119 4051 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-node-log\") on node \"master-0\" DevicePath \"\"" Mar 12 18:12:40.765271 master-0 kubenswrapper[4051]: I0312 18:12:40.765131 4051 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/ee43418f-8381-4526-9092-bcc61cfcd2e9-env-overrides\") on node \"master-0\" DevicePath \"\"" Mar 12 18:12:40.765271 master-0 kubenswrapper[4051]: I0312 18:12:40.765142 4051 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 12 18:12:40.765271 master-0 kubenswrapper[4051]: I0312 18:12:40.765154 4051 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ee43418f-8381-4526-9092-bcc61cfcd2e9-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Mar 12 18:12:40.765271 master-0 kubenswrapper[4051]: I0312 18:12:40.765165 4051 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-slash\") on node \"master-0\" DevicePath \"\"" Mar 12 18:12:40.765271 master-0 kubenswrapper[4051]: I0312 18:12:40.765177 4051 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ee43418f-8381-4526-9092-bcc61cfcd2e9-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 18:12:40.765271 master-0 kubenswrapper[4051]: I0312 18:12:40.765189 4051 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 12 18:12:40.765271 master-0 kubenswrapper[4051]: I0312 18:12:40.765201 4051 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ee43418f-8381-4526-9092-bcc61cfcd2e9-host-run-netns\") on node \"master-0\" DevicePath \"\"" Mar 12 18:12:41.274624 master-0 kubenswrapper[4051]: E0312 18:12:41.274535 4051 kubelet.go:2916] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 18:12:41.281274 master-0 kubenswrapper[4051]: I0312 18:12:41.281222 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:12:41.282091 master-0 kubenswrapper[4051]: E0312 18:12:41.282037 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3" Mar 12 18:12:41.282259 master-0 kubenswrapper[4051]: I0312 18:12:41.282194 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:12:41.282507 master-0 kubenswrapper[4051]: E0312 18:12:41.282454 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f" Mar 12 18:12:41.508354 master-0 kubenswrapper[4051]: I0312 18:12:41.508158 4051 generic.go:334] "Generic (PLEG): container finished" podID="455f0aad-add2-49d0-995c-f92467bce2d6" containerID="c629217e0646c42efab7b6831a82c134d4897e205bc3cb7b99ec2b82209a7725" exitCode=0 Mar 12 18:12:41.508354 master-0 kubenswrapper[4051]: I0312 18:12:41.508311 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lv8hk" event={"ID":"455f0aad-add2-49d0-995c-f92467bce2d6","Type":"ContainerDied","Data":"c629217e0646c42efab7b6831a82c134d4897e205bc3cb7b99ec2b82209a7725"} Mar 12 18:12:41.511942 master-0 kubenswrapper[4051]: I0312 18:12:41.511179 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd" event={"ID":"ee43418f-8381-4526-9092-bcc61cfcd2e9","Type":"ContainerDied","Data":"4b6e13f8214d025f7992cbcd205cec35aba4e97b520d7e8f9e3e7a6bca8ac41d"} Mar 12 18:12:41.511942 master-0 kubenswrapper[4051]: I0312 18:12:41.511255 4051 scope.go:117] "RemoveContainer" containerID="cf83fad4049ad9f6ae4d83c608932c2d1b28eb5535c7e4d63604cd9158893ad6" Mar 12 18:12:41.511942 master-0 kubenswrapper[4051]: I0312 18:12:41.511732 4051 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xwvdd" Mar 12 18:12:41.569010 master-0 kubenswrapper[4051]: I0312 18:12:41.567713 4051 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xwvdd"] Mar 12 18:12:41.573493 master-0 kubenswrapper[4051]: I0312 18:12:41.573132 4051 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xwvdd"] Mar 12 18:12:41.595939 master-0 kubenswrapper[4051]: I0312 18:12:41.595639 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hx8q8"] Mar 12 18:12:41.595939 master-0 kubenswrapper[4051]: E0312 18:12:41.595852 4051 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee43418f-8381-4526-9092-bcc61cfcd2e9" containerName="kubecfg-setup" Mar 12 18:12:41.595939 master-0 kubenswrapper[4051]: I0312 18:12:41.595878 4051 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee43418f-8381-4526-9092-bcc61cfcd2e9" containerName="kubecfg-setup" Mar 12 18:12:41.596441 master-0 kubenswrapper[4051]: I0312 18:12:41.595967 4051 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee43418f-8381-4526-9092-bcc61cfcd2e9" containerName="kubecfg-setup" Mar 12 18:12:41.597496 master-0 kubenswrapper[4051]: I0312 18:12:41.597455 4051 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.601336 master-0 kubenswrapper[4051]: I0312 18:12:41.600290 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 12 18:12:41.606975 master-0 kubenswrapper[4051]: I0312 18:12:41.606929 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 12 18:12:41.773261 master-0 kubenswrapper[4051]: I0312 18:12:41.773231 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-systemd\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.773427 master-0 kubenswrapper[4051]: I0312 18:12:41.773411 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovn-node-metrics-cert\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.773553 master-0 kubenswrapper[4051]: I0312 18:12:41.773533 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rhmv\" (UniqueName: \"kubernetes.io/projected/b8dd13a7-10e5-431b-8d30-405dcfea02f5-kube-api-access-7rhmv\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.773645 master-0 kubenswrapper[4051]: I0312 18:12:41.773629 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-etc-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.773739 master-0 kubenswrapper[4051]: I0312 18:12:41.773725 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-run-ovn-kubernetes\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.773806 master-0 kubenswrapper[4051]: I0312 18:12:41.773794 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-cni-bin\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.773883 master-0 kubenswrapper[4051]: I0312 18:12:41.773872 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-env-overrides\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.773973 master-0 kubenswrapper[4051]: I0312 18:12:41.773961 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-run-netns\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.774056 master-0 kubenswrapper[4051]: I0312 18:12:41.774035 4051 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-node-log\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.774164 master-0 kubenswrapper[4051]: I0312 18:12:41.774152 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-kubelet\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.774275 master-0 kubenswrapper[4051]: I0312 18:12:41.774255 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-slash\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.774354 master-0 kubenswrapper[4051]: I0312 18:12:41.774342 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovnkube-script-lib\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.774443 master-0 kubenswrapper[4051]: I0312 18:12:41.774432 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-systemd-units\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.774547 master-0 
kubenswrapper[4051]: I0312 18:12:41.774529 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-cni-netd\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.774637 master-0 kubenswrapper[4051]: I0312 18:12:41.774625 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-var-lib-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.774747 master-0 kubenswrapper[4051]: I0312 18:12:41.774735 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovnkube-config\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.774835 master-0 kubenswrapper[4051]: I0312 18:12:41.774824 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-ovn\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.774952 master-0 kubenswrapper[4051]: I0312 18:12:41.774933 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.775035 master-0 kubenswrapper[4051]: I0312 18:12:41.775024 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-log-socket\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.775136 master-0 kubenswrapper[4051]: I0312 18:12:41.775115 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.876030 master-0 kubenswrapper[4051]: I0312 18:12:41.875932 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-systemd\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.876030 master-0 kubenswrapper[4051]: I0312 18:12:41.876023 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovn-node-metrics-cert\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.876360 master-0 kubenswrapper[4051]: I0312 18:12:41.876068 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rhmv\" (UniqueName: 
\"kubernetes.io/projected/b8dd13a7-10e5-431b-8d30-405dcfea02f5-kube-api-access-7rhmv\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.876360 master-0 kubenswrapper[4051]: I0312 18:12:41.876104 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-etc-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.876360 master-0 kubenswrapper[4051]: I0312 18:12:41.876120 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-systemd\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.876360 master-0 kubenswrapper[4051]: I0312 18:12:41.876146 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-run-ovn-kubernetes\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.876360 master-0 kubenswrapper[4051]: I0312 18:12:41.876357 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-cni-bin\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.877597 master-0 kubenswrapper[4051]: I0312 18:12:41.876471 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-cni-bin\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.877597 master-0 kubenswrapper[4051]: I0312 18:12:41.876594 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-run-ovn-kubernetes\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.877597 master-0 kubenswrapper[4051]: I0312 18:12:41.876703 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-etc-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.877597 master-0 kubenswrapper[4051]: I0312 18:12:41.876729 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-env-overrides\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.877597 master-0 kubenswrapper[4051]: I0312 18:12:41.876786 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-run-netns\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.877597 master-0 kubenswrapper[4051]: I0312 18:12:41.876835 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-node-log\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.877597 master-0 kubenswrapper[4051]: I0312 18:12:41.876919 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-kubelet\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.877597 master-0 kubenswrapper[4051]: I0312 18:12:41.877007 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovnkube-script-lib\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.877597 master-0 kubenswrapper[4051]: I0312 18:12:41.877033 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-run-netns\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.877597 master-0 kubenswrapper[4051]: I0312 18:12:41.877060 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-slash\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.877597 master-0 kubenswrapper[4051]: I0312 18:12:41.877137 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-slash\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.877597 master-0 kubenswrapper[4051]: I0312 18:12:41.877173 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-systemd-units\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.877597 master-0 kubenswrapper[4051]: I0312 18:12:41.877203 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-node-log\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.877597 master-0 kubenswrapper[4051]: I0312 18:12:41.877247 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-cni-netd\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.877597 master-0 kubenswrapper[4051]: I0312 18:12:41.877253 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-kubelet\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.877597 master-0 kubenswrapper[4051]: I0312 18:12:41.877324 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-cni-netd\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.877597 master-0 kubenswrapper[4051]: I0312 18:12:41.877494 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-var-lib-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.877597 master-0 kubenswrapper[4051]: I0312 18:12:41.877582 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovnkube-config\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.879682 master-0 kubenswrapper[4051]: I0312 18:12:41.877591 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-systemd-units\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.879682 master-0 kubenswrapper[4051]: I0312 18:12:41.877650 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-ovn\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.879682 master-0 kubenswrapper[4051]: I0312 18:12:41.877693 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-var-lib-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.879682 master-0 kubenswrapper[4051]: I0312 18:12:41.877723 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-log-socket\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.879682 master-0 kubenswrapper[4051]: I0312 18:12:41.877746 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-ovn\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.879682 master-0 kubenswrapper[4051]: I0312 18:12:41.877776 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.879682 master-0 kubenswrapper[4051]: I0312 18:12:41.877853 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.879682 master-0 kubenswrapper[4051]: I0312 18:12:41.877870 4051 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.879682 master-0 kubenswrapper[4051]: I0312 18:12:41.877930 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-log-socket\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.879682 master-0 kubenswrapper[4051]: I0312 18:12:41.877966 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.879682 master-0 kubenswrapper[4051]: I0312 18:12:41.878245 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-env-overrides\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.879682 master-0 kubenswrapper[4051]: I0312 18:12:41.879205 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovnkube-config\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.879682 master-0 kubenswrapper[4051]: I0312 18:12:41.879290 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovnkube-script-lib\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.882704 master-0 kubenswrapper[4051]: I0312 18:12:41.882662 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovn-node-metrics-cert\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.907814 master-0 kubenswrapper[4051]: I0312 18:12:41.907718 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rhmv\" (UniqueName: \"kubernetes.io/projected/b8dd13a7-10e5-431b-8d30-405dcfea02f5-kube-api-access-7rhmv\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.925618 master-0 kubenswrapper[4051]: I0312 18:12:41.925587 4051 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:41.944360 master-0 kubenswrapper[4051]: W0312 18:12:41.944297 4051 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8dd13a7_10e5_431b_8d30_405dcfea02f5.slice/crio-6d3dbd3e29a6d7e3e111f4c45f534b1d831fee55b19f442dc477ede7e14f8ccb WatchSource:0}: Error finding container 6d3dbd3e29a6d7e3e111f4c45f534b1d831fee55b19f442dc477ede7e14f8ccb: Status 404 returned error can't find the container with id 6d3dbd3e29a6d7e3e111f4c45f534b1d831fee55b19f442dc477ede7e14f8ccb Mar 12 18:12:42.520541 master-0 kubenswrapper[4051]: I0312 18:12:42.520314 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lv8hk" event={"ID":"455f0aad-add2-49d0-995c-f92467bce2d6","Type":"ContainerStarted","Data":"65cd5179457220fe4b00ab9fb282c156c761fa68e35aadbd1f1e94416119a8d1"} Mar 12 18:12:42.524040 master-0 kubenswrapper[4051]: I0312 18:12:42.523965 4051 generic.go:334] "Generic (PLEG): container finished" podID="b8dd13a7-10e5-431b-8d30-405dcfea02f5" containerID="69a2563b13bb321b549ca470bba68e3784ff4506218240cbeb3734f424459804" exitCode=0 Mar 12 18:12:42.524146 master-0 kubenswrapper[4051]: I0312 18:12:42.524057 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" event={"ID":"b8dd13a7-10e5-431b-8d30-405dcfea02f5","Type":"ContainerDied","Data":"69a2563b13bb321b549ca470bba68e3784ff4506218240cbeb3734f424459804"} Mar 12 18:12:42.524146 master-0 kubenswrapper[4051]: I0312 18:12:42.524111 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" event={"ID":"b8dd13a7-10e5-431b-8d30-405dcfea02f5","Type":"ContainerStarted","Data":"6d3dbd3e29a6d7e3e111f4c45f534b1d831fee55b19f442dc477ede7e14f8ccb"} Mar 12 18:12:42.549013 master-0 kubenswrapper[4051]: I0312 18:12:42.548939 4051 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-lv8hk" podStartSLOduration=3.745535183 podStartE2EDuration="45.548920182s" podCreationTimestamp="2026-03-12 18:11:57 +0000 UTC" firstStartedPulling="2026-03-12 18:11:57.688562603 +0000 UTC m=+97.167688854" lastFinishedPulling="2026-03-12 18:12:39.491947612 +0000 UTC m=+138.971073853" observedRunningTime="2026-03-12 18:12:42.548280305 +0000 UTC m=+142.027406566" watchObservedRunningTime="2026-03-12 18:12:42.548920182 +0000 UTC m=+142.028046413" Mar 12 18:12:43.187814 master-0 kubenswrapper[4051]: I0312 18:12:43.187748 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptrtx\" (UniqueName: \"kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx\") pod \"network-check-target-cpthp\" (UID: \"33feec78-4592-4343-965b-aa1b7044fcf3\") " pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:12:43.188146 master-0 kubenswrapper[4051]: E0312 18:12:43.187931 4051 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 18:12:43.188146 master-0 kubenswrapper[4051]: E0312 18:12:43.187959 4051 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 18:12:43.188146 master-0 kubenswrapper[4051]: E0312 18:12:43.187979 4051 projected.go:194] Error preparing data for projected volume kube-api-access-ptrtx for pod openshift-network-diagnostics/network-check-target-cpthp: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 18:12:43.188146 master-0 kubenswrapper[4051]: E0312 18:12:43.188056 4051 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx podName:33feec78-4592-4343-965b-aa1b7044fcf3 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:15.188033003 +0000 UTC m=+174.667159274 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptrtx" (UniqueName: "kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx") pod "network-check-target-cpthp" (UID: "33feec78-4592-4343-965b-aa1b7044fcf3") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 18:12:43.281140 master-0 kubenswrapper[4051]: I0312 18:12:43.281047 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:12:43.281401 master-0 kubenswrapper[4051]: I0312 18:12:43.281216 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:12:43.281506 master-0 kubenswrapper[4051]: E0312 18:12:43.281238 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f" Mar 12 18:12:43.281506 master-0 kubenswrapper[4051]: E0312 18:12:43.281438 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3" Mar 12 18:12:43.286882 master-0 kubenswrapper[4051]: I0312 18:12:43.286818 4051 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee43418f-8381-4526-9092-bcc61cfcd2e9" path="/var/lib/kubelet/pods/ee43418f-8381-4526-9092-bcc61cfcd2e9/volumes" Mar 12 18:12:43.534612 master-0 kubenswrapper[4051]: I0312 18:12:43.534369 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" event={"ID":"b8dd13a7-10e5-431b-8d30-405dcfea02f5","Type":"ContainerStarted","Data":"766f0126260c45fa2d48f63578b920f75ac019a9c881dc0b9b9e55140d01fc62"} Mar 12 18:12:43.534612 master-0 kubenswrapper[4051]: I0312 18:12:43.534479 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" event={"ID":"b8dd13a7-10e5-431b-8d30-405dcfea02f5","Type":"ContainerStarted","Data":"9ff88dc4564ee8bf6434336d2b63eb8d864d46e7cc846735768359290d4ac27a"} Mar 12 18:12:43.534612 master-0 kubenswrapper[4051]: I0312 18:12:43.534539 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" event={"ID":"b8dd13a7-10e5-431b-8d30-405dcfea02f5","Type":"ContainerStarted","Data":"589e389e47ce30b3ea9c378f57734f2abc1ea9bebf649c405c8fef6b0b066f1d"} Mar 12 18:12:43.534612 master-0 kubenswrapper[4051]: I0312 18:12:43.534568 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" event={"ID":"b8dd13a7-10e5-431b-8d30-405dcfea02f5","Type":"ContainerStarted","Data":"809bd6e19782c7160b0ef3ae9616ae6c4e1418f6db84dc4710db453c19806936"} Mar 12 18:12:43.534612 master-0 kubenswrapper[4051]: I0312 18:12:43.534598 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" 
event={"ID":"b8dd13a7-10e5-431b-8d30-405dcfea02f5","Type":"ContainerStarted","Data":"78d939d3c420fc49bdcd66e6faae327e9d9c6855bf5a11a961331e03470b3019"} Mar 12 18:12:43.535938 master-0 kubenswrapper[4051]: I0312 18:12:43.534625 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" event={"ID":"b8dd13a7-10e5-431b-8d30-405dcfea02f5","Type":"ContainerStarted","Data":"e3aed25d494940b68ce6b37b97b56859168a4d92cb7dbf43febbeb6eafa8d357"} Mar 12 18:12:45.281037 master-0 kubenswrapper[4051]: I0312 18:12:45.280919 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:12:45.281950 master-0 kubenswrapper[4051]: E0312 18:12:45.281198 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3" Mar 12 18:12:45.281950 master-0 kubenswrapper[4051]: I0312 18:12:45.281373 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:12:45.281950 master-0 kubenswrapper[4051]: E0312 18:12:45.281670 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f" Mar 12 18:12:45.546403 master-0 kubenswrapper[4051]: I0312 18:12:45.546331 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" event={"ID":"b8dd13a7-10e5-431b-8d30-405dcfea02f5","Type":"ContainerStarted","Data":"f46c45ee76d571e8ef174422447fd090c1a789ab4e5df0564a5cdc66be735848"} Mar 12 18:12:46.275625 master-0 kubenswrapper[4051]: E0312 18:12:46.275534 4051 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 18:12:47.280964 master-0 kubenswrapper[4051]: I0312 18:12:47.280891 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:12:47.280964 master-0 kubenswrapper[4051]: I0312 18:12:47.280955 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:12:47.281730 master-0 kubenswrapper[4051]: E0312 18:12:47.281043 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3" Mar 12 18:12:47.281730 master-0 kubenswrapper[4051]: E0312 18:12:47.281178 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f" Mar 12 18:12:48.564735 master-0 kubenswrapper[4051]: I0312 18:12:48.563907 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" event={"ID":"b8dd13a7-10e5-431b-8d30-405dcfea02f5","Type":"ContainerStarted","Data":"7bd6903a8b89d938954ee9d643e9f81c2435b564b23c7bca3663eb5ecebb1494"} Mar 12 18:12:48.564735 master-0 kubenswrapper[4051]: I0312 18:12:48.564506 4051 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:48.564735 master-0 kubenswrapper[4051]: I0312 18:12:48.564595 4051 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:48.564735 master-0 kubenswrapper[4051]: I0312 18:12:48.564615 4051 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:48.633412 master-0 kubenswrapper[4051]: I0312 18:12:48.633325 4051 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:48.633679 master-0 kubenswrapper[4051]: I0312 18:12:48.633593 4051 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:12:48.680899 master-0 kubenswrapper[4051]: I0312 18:12:48.680801 4051 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" podStartSLOduration=7.680774476 podStartE2EDuration="7.680774476s" podCreationTimestamp="2026-03-12 18:12:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:12:48.637034583 +0000 UTC m=+148.116160904" watchObservedRunningTime="2026-03-12 
18:12:48.680774476 +0000 UTC m=+148.159900747" Mar 12 18:12:49.280819 master-0 kubenswrapper[4051]: I0312 18:12:49.280717 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:12:49.281094 master-0 kubenswrapper[4051]: I0312 18:12:49.280726 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:12:49.281094 master-0 kubenswrapper[4051]: E0312 18:12:49.280941 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f" Mar 12 18:12:49.281094 master-0 kubenswrapper[4051]: E0312 18:12:49.280990 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3" Mar 12 18:12:49.779501 master-0 kubenswrapper[4051]: I0312 18:12:49.779220 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-z4sc9"] Mar 12 18:12:49.779501 master-0 kubenswrapper[4051]: I0312 18:12:49.779417 4051 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:12:49.780344 master-0 kubenswrapper[4051]: E0312 18:12:49.779638 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f" Mar 12 18:12:49.781097 master-0 kubenswrapper[4051]: I0312 18:12:49.781027 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-cpthp"] Mar 12 18:12:49.781234 master-0 kubenswrapper[4051]: I0312 18:12:49.781134 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:12:49.781325 master-0 kubenswrapper[4051]: E0312 18:12:49.781268 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3" Mar 12 18:12:51.275945 master-0 kubenswrapper[4051]: E0312 18:12:51.275887 4051 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 18:12:51.280316 master-0 kubenswrapper[4051]: I0312 18:12:51.280285 4051 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:12:51.280364 master-0 kubenswrapper[4051]: I0312 18:12:51.280285 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:12:51.281152 master-0 kubenswrapper[4051]: E0312 18:12:51.281092 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f" Mar 12 18:12:51.281260 master-0 kubenswrapper[4051]: E0312 18:12:51.281219 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3" Mar 12 18:12:53.281397 master-0 kubenswrapper[4051]: I0312 18:12:53.281222 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:12:53.281397 master-0 kubenswrapper[4051]: I0312 18:12:53.281268 4051 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:12:53.283356 master-0 kubenswrapper[4051]: E0312 18:12:53.281434 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f" Mar 12 18:12:53.283356 master-0 kubenswrapper[4051]: E0312 18:12:53.281599 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3" Mar 12 18:12:55.281671 master-0 kubenswrapper[4051]: I0312 18:12:55.280683 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:12:55.281671 master-0 kubenswrapper[4051]: I0312 18:12:55.280823 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:12:55.281671 master-0 kubenswrapper[4051]: E0312 18:12:55.281256 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z4sc9" podUID="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f" Mar 12 18:12:55.282886 master-0 kubenswrapper[4051]: E0312 18:12:55.281641 4051 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3" Mar 12 18:12:57.281218 master-0 kubenswrapper[4051]: I0312 18:12:57.281033 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:12:57.281218 master-0 kubenswrapper[4051]: I0312 18:12:57.281059 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:12:57.284474 master-0 kubenswrapper[4051]: I0312 18:12:57.283757 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 12 18:12:57.284474 master-0 kubenswrapper[4051]: I0312 18:12:57.283757 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 12 18:12:57.284474 master-0 kubenswrapper[4051]: I0312 18:12:57.284234 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 12 18:13:01.653163 master-0 kubenswrapper[4051]: I0312 18:13:01.653079 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 
18:13:01.654160 master-0 kubenswrapper[4051]: E0312 18:13:01.653290 4051 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 12 18:13:01.654160 master-0 kubenswrapper[4051]: E0312 18:13:01.653408 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs podName:5b48f8fd-2efe-44e3-a6d7-c71358b83a2f nodeName:}" failed. No retries permitted until 2026-03-12 18:14:05.653379601 +0000 UTC m=+225.132505872 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs") pod "network-metrics-daemon-z4sc9" (UID: "5b48f8fd-2efe-44e3-a6d7-c71358b83a2f") : secret "metrics-daemon-secret" not found Mar 12 18:13:04.517317 master-0 kubenswrapper[4051]: I0312 18:13:04.517134 4051 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Mar 12 18:13:05.781889 master-0 kubenswrapper[4051]: I0312 18:13:05.781838 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c"] Mar 12 18:13:05.782383 master-0 kubenswrapper[4051]: I0312 18:13:05.782234 4051 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:13:05.789421 master-0 kubenswrapper[4051]: I0312 18:13:05.789387 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 12 18:13:05.789591 master-0 kubenswrapper[4051]: I0312 18:13:05.789480 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 12 18:13:05.789725 master-0 kubenswrapper[4051]: I0312 18:13:05.789705 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 12 18:13:05.790356 master-0 kubenswrapper[4051]: I0312 18:13:05.790328 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 12 18:13:05.790543 master-0 kubenswrapper[4051]: I0312 18:13:05.790489 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-4527l"] Mar 12 18:13:05.791012 master-0 kubenswrapper[4051]: I0312 18:13:05.790980 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k"] Mar 12 18:13:05.791290 master-0 kubenswrapper[4051]: I0312 18:13:05.791266 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k" Mar 12 18:13:05.791767 master-0 kubenswrapper[4051]: I0312 18:13:05.791744 4051 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" Mar 12 18:13:05.791899 master-0 kubenswrapper[4051]: I0312 18:13:05.791871 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s"] Mar 12 18:13:05.792365 master-0 kubenswrapper[4051]: I0312 18:13:05.792347 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:13:05.793478 master-0 kubenswrapper[4051]: I0312 18:13:05.793026 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-clkx5"] Mar 12 18:13:05.793478 master-0 kubenswrapper[4051]: I0312 18:13:05.793261 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:13:05.793606 master-0 kubenswrapper[4051]: I0312 18:13:05.793565 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r"] Mar 12 18:13:05.794024 master-0 kubenswrapper[4051]: I0312 18:13:05.793991 4051 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" Mar 12 18:13:05.794145 master-0 kubenswrapper[4051]: I0312 18:13:05.794097 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4"] Mar 12 18:13:05.794927 master-0 kubenswrapper[4051]: I0312 18:13:05.794895 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98"] Mar 12 18:13:05.795257 master-0 kubenswrapper[4051]: I0312 18:13:05.795227 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7"] Mar 12 18:13:05.795312 master-0 kubenswrapper[4051]: I0312 18:13:05.795268 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" Mar 12 18:13:05.798544 master-0 kubenswrapper[4051]: I0312 18:13:05.798487 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" Mar 12 18:13:05.810417 master-0 kubenswrapper[4051]: I0312 18:13:05.805611 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz"] Mar 12 18:13:05.812284 master-0 kubenswrapper[4051]: I0312 18:13:05.812216 4051 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:13:05.817012 master-0 kubenswrapper[4051]: I0312 18:13:05.816953 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn"] Mar 12 18:13:05.817267 master-0 kubenswrapper[4051]: I0312 18:13:05.817224 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq"] Mar 12 18:13:05.817758 master-0 kubenswrapper[4051]: I0312 18:13:05.817707 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:13:05.818043 master-0 kubenswrapper[4051]: I0312 18:13:05.817986 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" Mar 12 18:13:05.818132 master-0 kubenswrapper[4051]: I0312 18:13:05.818069 4051 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn" Mar 12 18:13:05.819591 master-0 kubenswrapper[4051]: I0312 18:13:05.819548 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 12 18:13:05.819782 master-0 kubenswrapper[4051]: I0312 18:13:05.819750 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 12 18:13:05.820177 master-0 kubenswrapper[4051]: I0312 18:13:05.819673 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 12 18:13:05.820292 master-0 kubenswrapper[4051]: I0312 18:13:05.819714 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 12 18:13:05.820375 master-0 kubenswrapper[4051]: I0312 18:13:05.820351 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl"] Mar 12 18:13:05.820926 master-0 kubenswrapper[4051]: I0312 18:13:05.820894 4051 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl" Mar 12 18:13:05.821056 master-0 kubenswrapper[4051]: I0312 18:13:05.821032 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j"] Mar 12 18:13:05.821215 master-0 kubenswrapper[4051]: I0312 18:13:05.821166 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 12 18:13:05.821454 master-0 kubenswrapper[4051]: I0312 18:13:05.821421 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 12 18:13:05.821613 master-0 kubenswrapper[4051]: I0312 18:13:05.821535 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:13:05.821809 master-0 kubenswrapper[4051]: I0312 18:13:05.821776 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 12 18:13:05.822085 master-0 kubenswrapper[4051]: I0312 18:13:05.822054 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 12 18:13:05.822462 master-0 kubenswrapper[4051]: I0312 18:13:05.822427 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp"] Mar 12 18:13:05.822749 master-0 kubenswrapper[4051]: I0312 18:13:05.822718 4051 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp" Mar 12 18:13:05.825731 master-0 kubenswrapper[4051]: I0312 18:13:05.825689 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 12 18:13:05.825936 master-0 kubenswrapper[4051]: I0312 18:13:05.825905 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 12 18:13:05.826103 master-0 kubenswrapper[4051]: I0312 18:13:05.826072 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 12 18:13:05.826278 master-0 kubenswrapper[4051]: I0312 18:13:05.826245 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 12 18:13:05.826374 master-0 kubenswrapper[4051]: I0312 18:13:05.826350 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 12 18:13:05.826374 master-0 kubenswrapper[4051]: I0312 18:13:05.826417 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 12 18:13:05.826374 master-0 kubenswrapper[4051]: I0312 18:13:05.826488 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 12 18:13:05.826851 master-0 kubenswrapper[4051]: I0312 18:13:05.826600 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 12 18:13:05.826851 master-0 kubenswrapper[4051]: I0312 18:13:05.826684 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 12 18:13:05.826851 master-0 
kubenswrapper[4051]: I0312 18:13:05.826775 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 12 18:13:05.826851 master-0 kubenswrapper[4051]: I0312 18:13:05.826089 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 12 18:13:05.827131 master-0 kubenswrapper[4051]: I0312 18:13:05.826137 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 12 18:13:05.828864 master-0 kubenswrapper[4051]: I0312 18:13:05.827256 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 12 18:13:05.828864 master-0 kubenswrapper[4051]: I0312 18:13:05.827688 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 12 18:13:05.829109 master-0 kubenswrapper[4051]: I0312 18:13:05.829086 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 12 18:13:05.830647 master-0 kubenswrapper[4051]: I0312 18:13:05.830547 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 12 18:13:05.835806 master-0 kubenswrapper[4051]: I0312 18:13:05.835744 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 12 18:13:05.836984 master-0 kubenswrapper[4051]: I0312 18:13:05.836948 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 12 18:13:05.837605 master-0 kubenswrapper[4051]: I0312 18:13:05.837567 4051 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"kube-root-ca.crt" Mar 12 18:13:05.837811 master-0 kubenswrapper[4051]: I0312 18:13:05.837762 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 12 18:13:05.838081 master-0 kubenswrapper[4051]: I0312 18:13:05.838051 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 12 18:13:05.838283 master-0 kubenswrapper[4051]: I0312 18:13:05.838245 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 12 18:13:05.838356 master-0 kubenswrapper[4051]: I0312 18:13:05.838294 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 12 18:13:05.838356 master-0 kubenswrapper[4051]: I0312 18:13:05.838335 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 18:13:05.838356 master-0 kubenswrapper[4051]: I0312 18:13:05.838348 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 12 18:13:05.838539 master-0 kubenswrapper[4051]: I0312 18:13:05.838252 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 12 18:13:05.838539 master-0 kubenswrapper[4051]: I0312 18:13:05.838456 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 12 18:13:05.838656 master-0 kubenswrapper[4051]: I0312 18:13:05.838585 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 12 18:13:05.838656 master-0 kubenswrapper[4051]: I0312 18:13:05.838592 4051 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 12 18:13:05.838656 master-0 kubenswrapper[4051]: I0312 18:13:05.838606 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 12 18:13:05.838835 master-0 kubenswrapper[4051]: I0312 18:13:05.838808 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 12 18:13:05.838902 master-0 kubenswrapper[4051]: I0312 18:13:05.838856 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 12 18:13:05.838902 master-0 kubenswrapper[4051]: I0312 18:13:05.838815 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 12 18:13:05.839052 master-0 kubenswrapper[4051]: I0312 18:13:05.839028 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 12 18:13:05.839113 master-0 kubenswrapper[4051]: I0312 18:13:05.839080 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 12 18:13:05.839903 master-0 kubenswrapper[4051]: I0312 18:13:05.839875 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 12 18:13:05.842284 master-0 kubenswrapper[4051]: I0312 18:13:05.842242 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 12 18:13:05.923456 master-0 kubenswrapper[4051]: I0312 18:13:05.923419 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:13:05.923686 master-0 kubenswrapper[4051]: I0312 18:13:05.923468 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37cd9c0a-697e-4e67-932b-b331ff77c8c0-serving-cert\") pod \"openshift-config-operator-64488f9d78-tjp2j\" (UID: \"37cd9c0a-697e-4e67-932b-b331ff77c8c0\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:13:05.923686 master-0 kubenswrapper[4051]: I0312 18:13:05.923498 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s55hv\" (UniqueName: \"kubernetes.io/projected/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-kube-api-access-s55hv\") pod \"openshift-controller-manager-operator-8565d84698-m6hsp\" (UID: \"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp" Mar 12 18:13:05.923686 master-0 kubenswrapper[4051]: I0312 18:13:05.923619 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/055f5c67-f512-4510-99c5-e194944b0599-config\") pod \"service-ca-operator-69b6fc6b88-9xlgl\" (UID: \"055f5c67-f512-4510-99c5-e194944b0599\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl" Mar 12 18:13:05.923686 master-0 kubenswrapper[4051]: I0312 18:13:05.923659 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-config\") pod \"openshift-controller-manager-operator-8565d84698-m6hsp\" (UID: 
\"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp" Mar 12 18:13:05.923686 master-0 kubenswrapper[4051]: I0312 18:13:05.923683 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4ae1240-e04e-48e9-88df-9f1a53508da7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpb6k\" (UID: \"d4ae1240-e04e-48e9-88df-9f1a53508da7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k" Mar 12 18:13:05.923880 master-0 kubenswrapper[4051]: I0312 18:13:05.923702 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vpbp\" (UniqueName: \"kubernetes.io/projected/a1e2340b-ebca-40de-b1e0-8133999cd860-kube-api-access-6vpbp\") pod \"kube-storage-version-migrator-operator-7f65c457f5-2xr98\" (UID: \"a1e2340b-ebca-40de-b1e0-8133999cd860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" Mar 12 18:13:05.923880 master-0 kubenswrapper[4051]: I0312 18:13:05.923728 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfpb9\" (UniqueName: \"kubernetes.io/projected/37cd9c0a-697e-4e67-932b-b331ff77c8c0-kube-api-access-pfpb9\") pod \"openshift-config-operator-64488f9d78-tjp2j\" (UID: \"37cd9c0a-697e-4e67-932b-b331ff77c8c0\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:13:05.923880 master-0 kubenswrapper[4051]: I0312 18:13:05.923762 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcvfv\" (UniqueName: \"kubernetes.io/projected/f3a2cda2-b70f-4128-a1be-48503f5aad6d-kube-api-access-tcvfv\") pod \"cluster-olm-operator-77899cf6d-6nvn4\" (UID: 
\"f3a2cda2-b70f-4128-a1be-48503f5aad6d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" Mar 12 18:13:05.923880 master-0 kubenswrapper[4051]: I0312 18:13:05.923792 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab926874-9722-4e65-9084-27b2f9915450-serving-cert\") pod \"kube-apiserver-operator-68bd585b-ddsbn\" (UID: \"ab926874-9722-4e65-9084-27b2f9915450\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn" Mar 12 18:13:05.923880 master-0 kubenswrapper[4051]: I0312 18:13:05.923811 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/236f2886-bb69-49a7-9471-36454fd1cbd3-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-6qlzz\" (UID: \"236f2886-bb69-49a7-9471-36454fd1cbd3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" Mar 12 18:13:05.923880 master-0 kubenswrapper[4051]: I0312 18:13:05.923833 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/055f5c67-f512-4510-99c5-e194944b0599-serving-cert\") pod \"service-ca-operator-69b6fc6b88-9xlgl\" (UID: \"055f5c67-f512-4510-99c5-e194944b0599\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl" Mar 12 18:13:05.923880 master-0 kubenswrapper[4051]: I0312 18:13:05.923854 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d94dc349-c5cb-4f12-8e48-867030af4981-trusted-ca\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" Mar 12 18:13:05.923880 master-0 kubenswrapper[4051]: I0312 
18:13:05.923877 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:13:05.924155 master-0 kubenswrapper[4051]: I0312 18:13:05.923899 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e22c7035-4b7a-48cb-9abb-db277b387842-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:13:05.924155 master-0 kubenswrapper[4051]: I0312 18:13:05.923964 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1e2340b-ebca-40de-b1e0-8133999cd860-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-2xr98\" (UID: \"a1e2340b-ebca-40de-b1e0-8133999cd860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" Mar 12 18:13:05.924155 master-0 kubenswrapper[4051]: I0312 18:13:05.923991 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:13:05.924155 master-0 kubenswrapper[4051]: I0312 18:13:05.924020 4051 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" Mar 12 18:13:05.924155 master-0 kubenswrapper[4051]: I0312 18:13:05.924045 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e22c7035-4b7a-48cb-9abb-db277b387842-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:13:05.924155 master-0 kubenswrapper[4051]: I0312 18:13:05.924087 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:13:05.924155 master-0 kubenswrapper[4051]: I0312 18:13:05.924134 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bttzm\" (UniqueName: \"kubernetes.io/projected/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-kube-api-access-bttzm\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:13:05.924382 master-0 kubenswrapper[4051]: I0312 18:13:05.924183 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggsdx\" (UniqueName: 
\"kubernetes.io/projected/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-kube-api-access-ggsdx\") pod \"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" Mar 12 18:13:05.924382 master-0 kubenswrapper[4051]: I0312 18:13:05.924220 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/37cd9c0a-697e-4e67-932b-b331ff77c8c0-available-featuregates\") pod \"openshift-config-operator-64488f9d78-tjp2j\" (UID: \"37cd9c0a-697e-4e67-932b-b331ff77c8c0\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:13:05.924382 master-0 kubenswrapper[4051]: I0312 18:13:05.924245 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-m6hsp\" (UID: \"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp" Mar 12 18:13:05.924382 master-0 kubenswrapper[4051]: I0312 18:13:05.924269 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krrkl\" (UniqueName: \"kubernetes.io/projected/47850839-bb4b-41e9-ac31-f1cabbb4926d-kube-api-access-krrkl\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:13:05.924382 master-0 kubenswrapper[4051]: I0312 18:13:05.924295 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjmcv\" (UniqueName: \"kubernetes.io/projected/d94dc349-c5cb-4f12-8e48-867030af4981-kube-api-access-zjmcv\") pod 
\"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" Mar 12 18:13:05.924382 master-0 kubenswrapper[4051]: I0312 18:13:05.924317 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab926874-9722-4e65-9084-27b2f9915450-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-ddsbn\" (UID: \"ab926874-9722-4e65-9084-27b2f9915450\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn" Mar 12 18:13:05.924382 master-0 kubenswrapper[4051]: I0312 18:13:05.924356 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:13:05.924711 master-0 kubenswrapper[4051]: I0312 18:13:05.924418 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdlxn\" (UniqueName: \"kubernetes.io/projected/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-kube-api-access-fdlxn\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:13:05.924711 master-0 kubenswrapper[4051]: I0312 18:13:05.924436 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d94dc349-c5cb-4f12-8e48-867030af4981-bound-sa-token\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" 
Mar 12 18:13:05.924711 master-0 kubenswrapper[4051]: I0312 18:13:05.924451 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab926874-9722-4e65-9084-27b2f9915450-config\") pod \"kube-apiserver-operator-68bd585b-ddsbn\" (UID: \"ab926874-9722-4e65-9084-27b2f9915450\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn" Mar 12 18:13:05.924711 master-0 kubenswrapper[4051]: I0312 18:13:05.924467 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmxc2\" (UniqueName: \"kubernetes.io/projected/e22c7035-4b7a-48cb-9abb-db277b387842-kube-api-access-pmxc2\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:13:05.924711 master-0 kubenswrapper[4051]: I0312 18:13:05.924482 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkt7d\" (UniqueName: \"kubernetes.io/projected/055f5c67-f512-4510-99c5-e194944b0599-kube-api-access-tkt7d\") pod \"service-ca-operator-69b6fc6b88-9xlgl\" (UID: \"055f5c67-f512-4510-99c5-e194944b0599\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl" Mar 12 18:13:05.924711 master-0 kubenswrapper[4051]: I0312 18:13:05.924499 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/236f2886-bb69-49a7-9471-36454fd1cbd3-config\") pod \"openshift-apiserver-operator-799b6db4d7-6qlzz\" (UID: \"236f2886-bb69-49a7-9471-36454fd1cbd3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" Mar 12 18:13:05.924711 master-0 kubenswrapper[4051]: I0312 18:13:05.924567 4051 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ae1240-e04e-48e9-88df-9f1a53508da7-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpb6k\" (UID: \"d4ae1240-e04e-48e9-88df-9f1a53508da7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k" Mar 12 18:13:05.924711 master-0 kubenswrapper[4051]: I0312 18:13:05.924603 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3a2cda2-b70f-4128-a1be-48503f5aad6d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-6nvn4\" (UID: \"f3a2cda2-b70f-4128-a1be-48503f5aad6d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" Mar 12 18:13:05.924711 master-0 kubenswrapper[4051]: I0312 18:13:05.924641 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert\") pod \"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" Mar 12 18:13:05.924711 master-0 kubenswrapper[4051]: I0312 18:13:05.924665 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbnx8\" (UniqueName: \"kubernetes.io/projected/51eb717b-d11f-4bc3-8df6-deb51d5889f3-kube-api-access-gbnx8\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:13:05.924711 master-0 kubenswrapper[4051]: I0312 18:13:05.924690 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d4ae1240-e04e-48e9-88df-9f1a53508da7-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpb6k\" (UID: \"d4ae1240-e04e-48e9-88df-9f1a53508da7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k" Mar 12 18:13:05.924711 master-0 kubenswrapper[4051]: I0312 18:13:05.924711 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6ggg\" (UniqueName: \"kubernetes.io/projected/236f2886-bb69-49a7-9471-36454fd1cbd3-kube-api-access-b6ggg\") pod \"openshift-apiserver-operator-799b6db4d7-6qlzz\" (UID: \"236f2886-bb69-49a7-9471-36454fd1cbd3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" Mar 12 18:13:05.924711 master-0 kubenswrapper[4051]: I0312 18:13:05.924726 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:13:05.925538 master-0 kubenswrapper[4051]: I0312 18:13:05.924746 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:13:05.925538 master-0 kubenswrapper[4051]: I0312 18:13:05.924765 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: 
\"kubernetes.io/empty-dir/f3a2cda2-b70f-4128-a1be-48503f5aad6d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-6nvn4\" (UID: \"f3a2cda2-b70f-4128-a1be-48503f5aad6d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" Mar 12 18:13:05.925538 master-0 kubenswrapper[4051]: I0312 18:13:05.924803 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1e2340b-ebca-40de-b1e0-8133999cd860-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-2xr98\" (UID: \"a1e2340b-ebca-40de-b1e0-8133999cd860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" Mar 12 18:13:06.025965 master-0 kubenswrapper[4051]: I0312 18:13:06.025891 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3a2cda2-b70f-4128-a1be-48503f5aad6d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-6nvn4\" (UID: \"f3a2cda2-b70f-4128-a1be-48503f5aad6d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" Mar 12 18:13:06.025965 master-0 kubenswrapper[4051]: I0312 18:13:06.025951 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/236f2886-bb69-49a7-9471-36454fd1cbd3-config\") pod \"openshift-apiserver-operator-799b6db4d7-6qlzz\" (UID: \"236f2886-bb69-49a7-9471-36454fd1cbd3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" Mar 12 18:13:06.025965 master-0 kubenswrapper[4051]: I0312 18:13:06.025979 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ae1240-e04e-48e9-88df-9f1a53508da7-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpb6k\" (UID: 
\"d4ae1240-e04e-48e9-88df-9f1a53508da7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k" Mar 12 18:13:06.026272 master-0 kubenswrapper[4051]: I0312 18:13:06.026004 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert\") pod \"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" Mar 12 18:13:06.026272 master-0 kubenswrapper[4051]: I0312 18:13:06.026162 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbnx8\" (UniqueName: \"kubernetes.io/projected/51eb717b-d11f-4bc3-8df6-deb51d5889f3-kube-api-access-gbnx8\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:13:06.026272 master-0 kubenswrapper[4051]: I0312 18:13:06.026211 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ae1240-e04e-48e9-88df-9f1a53508da7-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpb6k\" (UID: \"d4ae1240-e04e-48e9-88df-9f1a53508da7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k" Mar 12 18:13:06.026272 master-0 kubenswrapper[4051]: I0312 18:13:06.026230 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:13:06.026272 master-0 kubenswrapper[4051]: I0312 
18:13:06.026250 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6ggg\" (UniqueName: \"kubernetes.io/projected/236f2886-bb69-49a7-9471-36454fd1cbd3-kube-api-access-b6ggg\") pod \"openshift-apiserver-operator-799b6db4d7-6qlzz\" (UID: \"236f2886-bb69-49a7-9471-36454fd1cbd3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" Mar 12 18:13:06.026272 master-0 kubenswrapper[4051]: I0312 18:13:06.026268 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:13:06.026851 master-0 kubenswrapper[4051]: E0312 18:13:06.026539 4051 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 12 18:13:06.026851 master-0 kubenswrapper[4051]: E0312 18:13:06.026675 4051 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 18:13:06.026851 master-0 kubenswrapper[4051]: I0312 18:13:06.026647 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/f3a2cda2-b70f-4128-a1be-48503f5aad6d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-6nvn4\" (UID: \"f3a2cda2-b70f-4128-a1be-48503f5aad6d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" Mar 12 18:13:06.026851 master-0 kubenswrapper[4051]: I0312 18:13:06.026754 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/236f2886-bb69-49a7-9471-36454fd1cbd3-config\") pod 
\"openshift-apiserver-operator-799b6db4d7-6qlzz\" (UID: \"236f2886-bb69-49a7-9471-36454fd1cbd3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" Mar 12 18:13:06.026851 master-0 kubenswrapper[4051]: E0312 18:13:06.026780 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics podName:4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:06.526732321 +0000 UTC m=+166.005858552 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-clkx5" (UID: "4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64") : secret "marketplace-operator-metrics" not found Mar 12 18:13:06.026851 master-0 kubenswrapper[4051]: I0312 18:13:06.026782 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ae1240-e04e-48e9-88df-9f1a53508da7-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpb6k\" (UID: \"d4ae1240-e04e-48e9-88df-9f1a53508da7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k" Mar 12 18:13:06.026851 master-0 kubenswrapper[4051]: E0312 18:13:06.026578 4051 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 18:13:06.026851 master-0 kubenswrapper[4051]: I0312 18:13:06.026811 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1e2340b-ebca-40de-b1e0-8133999cd860-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-2xr98\" (UID: \"a1e2340b-ebca-40de-b1e0-8133999cd860\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" Mar 12 18:13:06.027280 master-0 kubenswrapper[4051]: I0312 18:13:06.027230 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/f3a2cda2-b70f-4128-a1be-48503f5aad6d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-6nvn4\" (UID: \"f3a2cda2-b70f-4128-a1be-48503f5aad6d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" Mar 12 18:13:06.027499 master-0 kubenswrapper[4051]: E0312 18:13:06.027464 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls podName:e94d098b-fbcc-4e85-b8ad-42f3a21c822c nodeName:}" failed. No retries permitted until 2026-03-12 18:13:06.527374307 +0000 UTC m=+166.006500538 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-fz79c" (UID: "e94d098b-fbcc-4e85-b8ad-42f3a21c822c") : secret "cluster-monitoring-operator-tls" not found Mar 12 18:13:06.027584 master-0 kubenswrapper[4051]: E0312 18:13:06.027546 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert podName:d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:06.52748687 +0000 UTC m=+166.006613101 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert") pod "olm-operator-d64cfc9db-npt4r" (UID: "d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27") : secret "olm-operator-serving-cert" not found Mar 12 18:13:06.027632 master-0 kubenswrapper[4051]: I0312 18:13:06.027583 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:13:06.027732 master-0 kubenswrapper[4051]: E0312 18:13:06.027696 4051 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 18:13:06.027732 master-0 kubenswrapper[4051]: E0312 18:13:06.027729 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert podName:47850839-bb4b-41e9-ac31-f1cabbb4926d nodeName:}" failed. No retries permitted until 2026-03-12 18:13:06.527720326 +0000 UTC m=+166.006846557 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert") pod "catalog-operator-7d9c49f57b-pslh7" (UID: "47850839-bb4b-41e9-ac31-f1cabbb4926d") : secret "catalog-operator-serving-cert" not found Mar 12 18:13:06.027819 master-0 kubenswrapper[4051]: I0312 18:13:06.027628 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37cd9c0a-697e-4e67-932b-b331ff77c8c0-serving-cert\") pod \"openshift-config-operator-64488f9d78-tjp2j\" (UID: \"37cd9c0a-697e-4e67-932b-b331ff77c8c0\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:13:06.027819 master-0 kubenswrapper[4051]: I0312 18:13:06.027786 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s55hv\" (UniqueName: \"kubernetes.io/projected/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-kube-api-access-s55hv\") pod \"openshift-controller-manager-operator-8565d84698-m6hsp\" (UID: \"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp" Mar 12 18:13:06.027891 master-0 kubenswrapper[4051]: I0312 18:13:06.027832 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/055f5c67-f512-4510-99c5-e194944b0599-config\") pod \"service-ca-operator-69b6fc6b88-9xlgl\" (UID: \"055f5c67-f512-4510-99c5-e194944b0599\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl" Mar 12 18:13:06.027891 master-0 kubenswrapper[4051]: I0312 18:13:06.027854 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-config\") pod \"openshift-controller-manager-operator-8565d84698-m6hsp\" (UID: \"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp"
Mar 12 18:13:06.027891 master-0 kubenswrapper[4051]: I0312 18:13:06.027874 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4ae1240-e04e-48e9-88df-9f1a53508da7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpb6k\" (UID: \"d4ae1240-e04e-48e9-88df-9f1a53508da7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k"
Mar 12 18:13:06.028000 master-0 kubenswrapper[4051]: I0312 18:13:06.027900 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfpb9\" (UniqueName: \"kubernetes.io/projected/37cd9c0a-697e-4e67-932b-b331ff77c8c0-kube-api-access-pfpb9\") pod \"openshift-config-operator-64488f9d78-tjp2j\" (UID: \"37cd9c0a-697e-4e67-932b-b331ff77c8c0\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j"
Mar 12 18:13:06.028000 master-0 kubenswrapper[4051]: I0312 18:13:06.027925 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vpbp\" (UniqueName: \"kubernetes.io/projected/a1e2340b-ebca-40de-b1e0-8133999cd860-kube-api-access-6vpbp\") pod \"kube-storage-version-migrator-operator-7f65c457f5-2xr98\" (UID: \"a1e2340b-ebca-40de-b1e0-8133999cd860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98"
Mar 12 18:13:06.028000 master-0 kubenswrapper[4051]: I0312 18:13:06.027955 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcvfv\" (UniqueName: \"kubernetes.io/projected/f3a2cda2-b70f-4128-a1be-48503f5aad6d-kube-api-access-tcvfv\") pod \"cluster-olm-operator-77899cf6d-6nvn4\" (UID: \"f3a2cda2-b70f-4128-a1be-48503f5aad6d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4"
Mar 12 18:13:06.028000 master-0 kubenswrapper[4051]: I0312 18:13:06.027978 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab926874-9722-4e65-9084-27b2f9915450-serving-cert\") pod \"kube-apiserver-operator-68bd585b-ddsbn\" (UID: \"ab926874-9722-4e65-9084-27b2f9915450\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn"
Mar 12 18:13:06.028180 master-0 kubenswrapper[4051]: I0312 18:13:06.028119 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1e2340b-ebca-40de-b1e0-8133999cd860-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-2xr98\" (UID: \"a1e2340b-ebca-40de-b1e0-8133999cd860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98"
Mar 12 18:13:06.028841 master-0 kubenswrapper[4051]: I0312 18:13:06.028570 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-config\") pod \"openshift-controller-manager-operator-8565d84698-m6hsp\" (UID: \"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp"
Mar 12 18:13:06.028841 master-0 kubenswrapper[4051]: I0312 18:13:06.027997 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/236f2886-bb69-49a7-9471-36454fd1cbd3-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-6qlzz\" (UID: \"236f2886-bb69-49a7-9471-36454fd1cbd3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz"
Mar 12 18:13:06.028841 master-0 kubenswrapper[4051]: I0312 18:13:06.028626 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s"
Mar 12 18:13:06.028841 master-0 kubenswrapper[4051]: I0312 18:13:06.028647 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e22c7035-4b7a-48cb-9abb-db277b387842-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq"
Mar 12 18:13:06.028841 master-0 kubenswrapper[4051]: I0312 18:13:06.028665 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1e2340b-ebca-40de-b1e0-8133999cd860-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-2xr98\" (UID: \"a1e2340b-ebca-40de-b1e0-8133999cd860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98"
Mar 12 18:13:06.028841 master-0 kubenswrapper[4051]: I0312 18:13:06.028685 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/055f5c67-f512-4510-99c5-e194944b0599-serving-cert\") pod \"service-ca-operator-69b6fc6b88-9xlgl\" (UID: \"055f5c67-f512-4510-99c5-e194944b0599\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl"
Mar 12 18:13:06.028841 master-0 kubenswrapper[4051]: I0312 18:13:06.028707 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d94dc349-c5cb-4f12-8e48-867030af4981-trusted-ca\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l"
Mar 12 18:13:06.028841 master-0 kubenswrapper[4051]: I0312 18:13:06.028741 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq"
Mar 12 18:13:06.030357 master-0 kubenswrapper[4051]: I0312 18:13:06.030098 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l"
Mar 12 18:13:06.030700 master-0 kubenswrapper[4051]: E0312 18:13:06.030495 4051 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 12 18:13:06.030700 master-0 kubenswrapper[4051]: I0312 18:13:06.030207 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e22c7035-4b7a-48cb-9abb-db277b387842-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq"
Mar 12 18:13:06.032593 master-0 kubenswrapper[4051]: E0312 18:13:06.030818 4051 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 12 18:13:06.032593 master-0 kubenswrapper[4051]: E0312 18:13:06.030945 4051 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 12 18:13:06.032593 master-0 kubenswrapper[4051]: E0312 18:13:06.030830 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls podName:d94dc349-c5cb-4f12-8e48-867030af4981 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:06.530582899 +0000 UTC m=+166.009709130 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls") pod "ingress-operator-677db989d6-4527l" (UID: "d94dc349-c5cb-4f12-8e48-867030af4981") : secret "metrics-tls" not found
Mar 12 18:13:06.032593 master-0 kubenswrapper[4051]: I0312 18:13:06.031015 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggsdx\" (UniqueName: \"kubernetes.io/projected/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-kube-api-access-ggsdx\") pod \"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r"
Mar 12 18:13:06.032593 master-0 kubenswrapper[4051]: I0312 18:13:06.031050 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/055f5c67-f512-4510-99c5-e194944b0599-config\") pod \"service-ca-operator-69b6fc6b88-9xlgl\" (UID: \"055f5c67-f512-4510-99c5-e194944b0599\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl"
Mar 12 18:13:06.032593 master-0 kubenswrapper[4051]: I0312 18:13:06.031269 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d94dc349-c5cb-4f12-8e48-867030af4981-trusted-ca\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l"
Mar 12 18:13:06.032593 master-0 kubenswrapper[4051]: E0312 18:13:06.031553 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert podName:51eb717b-d11f-4bc3-8df6-deb51d5889f3 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:06.53141254 +0000 UTC m=+166.010538771 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-kwv7s" (UID: "51eb717b-d11f-4bc3-8df6-deb51d5889f3") : secret "package-server-manager-serving-cert" not found
Mar 12 18:13:06.032593 master-0 kubenswrapper[4051]: E0312 18:13:06.031625 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls podName:e22c7035-4b7a-48cb-9abb-db277b387842 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:06.531581964 +0000 UTC m=+166.010708195 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-l4krq" (UID: "e22c7035-4b7a-48cb-9abb-db277b387842") : secret "image-registry-operator-tls" not found
Mar 12 18:13:06.032593 master-0 kubenswrapper[4051]: I0312 18:13:06.031673 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e22c7035-4b7a-48cb-9abb-db277b387842-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq"
Mar 12 18:13:06.032593 master-0 kubenswrapper[4051]: I0312 18:13:06.031961 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/37cd9c0a-697e-4e67-932b-b331ff77c8c0-available-featuregates\") pod \"openshift-config-operator-64488f9d78-tjp2j\" (UID: \"37cd9c0a-697e-4e67-932b-b331ff77c8c0\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j"
Mar 12 18:13:06.032593 master-0 kubenswrapper[4051]: I0312 18:13:06.031982 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37cd9c0a-697e-4e67-932b-b331ff77c8c0-serving-cert\") pod \"openshift-config-operator-64488f9d78-tjp2j\" (UID: \"37cd9c0a-697e-4e67-932b-b331ff77c8c0\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j"
Mar 12 18:13:06.032593 master-0 kubenswrapper[4051]: I0312 18:13:06.032331 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-m6hsp\" (UID: \"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp"
Mar 12 18:13:06.032593 master-0 kubenswrapper[4051]: I0312 18:13:06.032488 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3a2cda2-b70f-4128-a1be-48503f5aad6d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-6nvn4\" (UID: \"f3a2cda2-b70f-4128-a1be-48503f5aad6d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4"
Mar 12 18:13:06.032593 master-0 kubenswrapper[4051]: I0312 18:13:06.032478 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c"
Mar 12 18:13:06.033185 master-0 kubenswrapper[4051]: I0312 18:13:06.032739 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/37cd9c0a-697e-4e67-932b-b331ff77c8c0-available-featuregates\") pod \"openshift-config-operator-64488f9d78-tjp2j\" (UID: \"37cd9c0a-697e-4e67-932b-b331ff77c8c0\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j"
Mar 12 18:13:06.033185 master-0 kubenswrapper[4051]: I0312 18:13:06.032672 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bttzm\" (UniqueName: \"kubernetes.io/projected/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-kube-api-access-bttzm\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c"
Mar 12 18:13:06.033259 master-0 kubenswrapper[4051]: I0312 18:13:06.033069 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjmcv\" (UniqueName: \"kubernetes.io/projected/d94dc349-c5cb-4f12-8e48-867030af4981-kube-api-access-zjmcv\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l"
Mar 12 18:13:06.033458 master-0 kubenswrapper[4051]: I0312 18:13:06.033357 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krrkl\" (UniqueName: \"kubernetes.io/projected/47850839-bb4b-41e9-ac31-f1cabbb4926d-kube-api-access-krrkl\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7"
Mar 12 18:13:06.033507 master-0 kubenswrapper[4051]: I0312 18:13:06.033269 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab926874-9722-4e65-9084-27b2f9915450-serving-cert\") pod \"kube-apiserver-operator-68bd585b-ddsbn\" (UID: \"ab926874-9722-4e65-9084-27b2f9915450\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn"
Mar 12 18:13:06.033914 master-0 kubenswrapper[4051]: I0312 18:13:06.033652 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab926874-9722-4e65-9084-27b2f9915450-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-ddsbn\" (UID: \"ab926874-9722-4e65-9084-27b2f9915450\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn"
Mar 12 18:13:06.035884 master-0 kubenswrapper[4051]: I0312 18:13:06.035852 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1e2340b-ebca-40de-b1e0-8133999cd860-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-2xr98\" (UID: \"a1e2340b-ebca-40de-b1e0-8133999cd860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98"
Mar 12 18:13:06.035981 master-0 kubenswrapper[4051]: I0312 18:13:06.035944 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/055f5c67-f512-4510-99c5-e194944b0599-serving-cert\") pod \"service-ca-operator-69b6fc6b88-9xlgl\" (UID: \"055f5c67-f512-4510-99c5-e194944b0599\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl"
Mar 12 18:13:06.036161 master-0 kubenswrapper[4051]: I0312 18:13:06.036116 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ae1240-e04e-48e9-88df-9f1a53508da7-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpb6k\" (UID: \"d4ae1240-e04e-48e9-88df-9f1a53508da7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k"
Mar 12 18:13:06.037223 master-0 kubenswrapper[4051]: I0312 18:13:06.036423 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/236f2886-bb69-49a7-9471-36454fd1cbd3-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-6qlzz\" (UID: \"236f2886-bb69-49a7-9471-36454fd1cbd3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz"
Mar 12 18:13:06.037373 master-0 kubenswrapper[4051]: I0312 18:13:06.037356 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5"
Mar 12 18:13:06.039130 master-0 kubenswrapper[4051]: I0312 18:13:06.039097 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdlxn\" (UniqueName: \"kubernetes.io/projected/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-kube-api-access-fdlxn\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5"
Mar 12 18:13:06.039180 master-0 kubenswrapper[4051]: I0312 18:13:06.039128 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-m6hsp\" (UID: \"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp"
Mar 12 18:13:06.039336 master-0 kubenswrapper[4051]: I0312 18:13:06.039139 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkt7d\" (UniqueName: \"kubernetes.io/projected/055f5c67-f512-4510-99c5-e194944b0599-kube-api-access-tkt7d\") pod \"service-ca-operator-69b6fc6b88-9xlgl\" (UID: \"055f5c67-f512-4510-99c5-e194944b0599\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl"
Mar 12 18:13:06.040705 master-0 kubenswrapper[4051]: I0312 18:13:06.039347 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d94dc349-c5cb-4f12-8e48-867030af4981-bound-sa-token\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l"
Mar 12 18:13:06.040705 master-0 kubenswrapper[4051]: I0312 18:13:06.039376 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab926874-9722-4e65-9084-27b2f9915450-config\") pod \"kube-apiserver-operator-68bd585b-ddsbn\" (UID: \"ab926874-9722-4e65-9084-27b2f9915450\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn"
Mar 12 18:13:06.040705 master-0 kubenswrapper[4051]: I0312 18:13:06.039397 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmxc2\" (UniqueName: \"kubernetes.io/projected/e22c7035-4b7a-48cb-9abb-db277b387842-kube-api-access-pmxc2\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq"
Mar 12 18:13:06.040705 master-0 kubenswrapper[4051]: I0312 18:13:06.039394 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c"
Mar 12 18:13:06.040705 master-0 kubenswrapper[4051]: I0312 18:13:06.040626 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab926874-9722-4e65-9084-27b2f9915450-config\") pod \"kube-apiserver-operator-68bd585b-ddsbn\" (UID: \"ab926874-9722-4e65-9084-27b2f9915450\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn"
Mar 12 18:13:06.041333 master-0 kubenswrapper[4051]: I0312 18:13:06.041314 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5"
Mar 12 18:13:06.067805 master-0 kubenswrapper[4051]: I0312 18:13:06.067698 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp"]
Mar 12 18:13:06.068749 master-0 kubenswrapper[4051]: I0312 18:13:06.068484 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp"
Mar 12 18:13:06.069193 master-0 kubenswrapper[4051]: I0312 18:13:06.069129 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp"]
Mar 12 18:13:06.071451 master-0 kubenswrapper[4051]: I0312 18:13:06.069822 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp"
Mar 12 18:13:06.072696 master-0 kubenswrapper[4051]: I0312 18:13:06.072647 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 12 18:13:06.072953 master-0 kubenswrapper[4051]: I0312 18:13:06.072934 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 12 18:13:06.073172 master-0 kubenswrapper[4051]: I0312 18:13:06.073033 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 12 18:13:06.073172 master-0 kubenswrapper[4051]: I0312 18:13:06.073103 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 12 18:13:06.073329 master-0 kubenswrapper[4051]: I0312 18:13:06.073179 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 12 18:13:06.073424 master-0 kubenswrapper[4051]: I0312 18:13:06.073376 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 12 18:13:06.074610 master-0 kubenswrapper[4051]: I0312 18:13:06.073797 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 12 18:13:06.074610 master-0 kubenswrapper[4051]: I0312 18:13:06.074410 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-kcpg5"]
Mar 12 18:13:06.075010 master-0 kubenswrapper[4051]: I0312 18:13:06.074981 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5"
Mar 12 18:13:06.075093 master-0 kubenswrapper[4051]: I0312 18:13:06.075072 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b"]
Mar 12 18:13:06.075620 master-0 kubenswrapper[4051]: I0312 18:13:06.075602 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-jqj5k"]
Mar 12 18:13:06.075750 master-0 kubenswrapper[4051]: I0312 18:13:06.075721 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b"
Mar 12 18:13:06.076267 master-0 kubenswrapper[4051]: I0312 18:13:06.076251 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k"
Mar 12 18:13:06.076395 master-0 kubenswrapper[4051]: I0312 18:13:06.076370 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 12 18:13:06.076464 master-0 kubenswrapper[4051]: I0312 18:13:06.076373 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b"]
Mar 12 18:13:06.076982 master-0 kubenswrapper[4051]: I0312 18:13:06.076948 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b"
Mar 12 18:13:06.077574 master-0 kubenswrapper[4051]: I0312 18:13:06.077537 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-649db"]
Mar 12 18:13:06.078130 master-0 kubenswrapper[4051]: I0312 18:13:06.078103 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-649db"
Mar 12 18:13:06.079424 master-0 kubenswrapper[4051]: I0312 18:13:06.079399 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 12 18:13:06.080539 master-0 kubenswrapper[4051]: I0312 18:13:06.080501 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 12 18:13:06.083342 master-0 kubenswrapper[4051]: I0312 18:13:06.083315 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 12 18:13:06.084759 master-0 kubenswrapper[4051]: I0312 18:13:06.084733 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 12 18:13:06.085643 master-0 kubenswrapper[4051]: I0312 18:13:06.085616 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 12 18:13:06.085643 master-0 kubenswrapper[4051]: I0312 18:13:06.085625 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 12 18:13:06.085745 master-0 kubenswrapper[4051]: I0312 18:13:06.085719 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 12 18:13:06.085810 master-0 kubenswrapper[4051]: I0312 18:13:06.085788 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 12 18:13:06.085966 master-0 kubenswrapper[4051]: I0312 18:13:06.085932 4051 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 12 18:13:06.086024 master-0 kubenswrapper[4051]: I0312 18:13:06.085969 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 12 18:13:06.086221 master-0 kubenswrapper[4051]: I0312 18:13:06.086195 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 12 18:13:06.086322 master-0 kubenswrapper[4051]: I0312 18:13:06.086195 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 12 18:13:06.086420 master-0 kubenswrapper[4051]: I0312 18:13:06.086393 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 12 18:13:06.086458 master-0 kubenswrapper[4051]: I0312 18:13:06.086423 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 12 18:13:06.086487 master-0 kubenswrapper[4051]: I0312 18:13:06.086467 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 12 18:13:06.086732 master-0 kubenswrapper[4051]: I0312 18:13:06.086686 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 12 18:13:06.087237 master-0 kubenswrapper[4051]: I0312 18:13:06.087126 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 12 18:13:06.089462 master-0 kubenswrapper[4051]: I0312 18:13:06.089399 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 12 18:13:06.094126 master-0 kubenswrapper[4051]: I0312 18:13:06.094079 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 12 18:13:06.140392 master-0 kubenswrapper[4051]: I0312 18:13:06.140338 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlf77\" (UniqueName: \"kubernetes.io/projected/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-kube-api-access-wlf77\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp"
Mar 12 18:13:06.140392 master-0 kubenswrapper[4051]: I0312 18:13:06.140385 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp"
Mar 12 18:13:06.140490 master-0 kubenswrapper[4051]: I0312 18:13:06.140422 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e720e1d0-5a6d-4b76-8b25-5963e24950f5-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-w7xvp\" (UID: \"e720e1d0-5a6d-4b76-8b25-5963e24950f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp"
Mar 12 18:13:06.140637 master-0 kubenswrapper[4051]: I0312 18:13:06.140599 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp"
Mar 12 18:13:06.140804 master-0 kubenswrapper[4051]: I0312 18:13:06.140755 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e720e1d0-5a6d-4b76-8b25-5963e24950f5-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-w7xvp\" (UID: \"e720e1d0-5a6d-4b76-8b25-5963e24950f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp"
Mar 12 18:13:06.140835 master-0 kubenswrapper[4051]: I0312 18:13:06.140814 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp"
Mar 12 18:13:06.140997 master-0 kubenswrapper[4051]: I0312 18:13:06.140970 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e720e1d0-5a6d-4b76-8b25-5963e24950f5-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-w7xvp\" (UID: \"e720e1d0-5a6d-4b76-8b25-5963e24950f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp"
Mar 12 18:13:06.241439 master-0 kubenswrapper[4051]: I0312 18:13:06.241345 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e720e1d0-5a6d-4b76-8b25-5963e24950f5-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-w7xvp\" (UID: \"e720e1d0-5a6d-4b76-8b25-5963e24950f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp"
Mar 12 18:13:06.241439 master-0 kubenswrapper[4051]: I0312 18:13:06.241406 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp"
Mar 12 18:13:06.241746 master-0 kubenswrapper[4051]: E0312 18:13:06.241504 4051 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 12 18:13:06.241746 master-0 kubenswrapper[4051]: E0312 18:13:06.241571 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert podName:306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed nodeName:}" failed. No retries permitted until 2026-03-12 18:13:06.74155321 +0000 UTC m=+166.220679441 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-lqpbp" (UID: "306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed") : secret "performance-addon-operator-webhook-cert" not found
Mar 12 18:13:06.241912 master-0 kubenswrapper[4051]: I0312 18:13:06.241807 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e697746f-fb9e-4d10-ab61-33c68e62cc0d-serving-cert\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b"
Mar 12 18:13:06.242109 master-0 kubenswrapper[4051]: I0312 18:13:06.242052 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-config\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b"
Mar 12 18:13:06.242183 master-0 kubenswrapper[4051]: I0312 18:13:06.242146 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-ca\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b"
Mar 12 18:13:06.242227 master-0 kubenswrapper[4051]: I0312 18:13:06.242204 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-config\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b"
Mar 12 18:13:06.242353 master-0 kubenswrapper[4051]: I0312 18:13:06.242331 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b"
Mar 12 18:13:06.242421 master-0 kubenswrapper[4051]: I0312 18:13:06.242380 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vct98\" (UniqueName: \"kubernetes.io/projected/e697746f-fb9e-4d10-ab61-33c68e62cc0d-kube-api-access-vct98\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b"
Mar 12 18:13:06.242644 master-0 kubenswrapper[4051]: I0312 18:13:06.242570 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2skd\" (UniqueName: \"kubernetes.io/projected/875bdfaa-b0a4-4412-a477-c962844e7057-kube-api-access-l2skd\") pod \"multus-admission-controller-8d675b596-kcpg5\" (UID: \"875bdfaa-b0a4-4412-a477-c962844e7057\") " pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5"
Mar 12 18:13:06.242743 master-0 kubenswrapper[4051]: I0312 18:13:06.242705 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e720e1d0-5a6d-4b76-8b25-5963e24950f5-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-w7xvp\" (UID: \"e720e1d0-5a6d-4b76-8b25-5963e24950f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp"
Mar 12 18:13:06.242812 master-0 kubenswrapper[4051]: I0312 18:13:06.242777 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-client\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b"
Mar 12 18:13:06.242894 master-0 kubenswrapper[4051]: I0312 18:13:06.242862 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlf77\" (UniqueName: \"kubernetes.io/projected/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-kube-api-access-wlf77\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp"
Mar 12 18:13:06.242973 master-0 kubenswrapper[4051]: I0312 18:13:06.242942 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\"
(UniqueName: \"kubernetes.io/configmap/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:06.243119 master-0 kubenswrapper[4051]: I0312 18:13:06.243033 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e720e1d0-5a6d-4b76-8b25-5963e24950f5-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-w7xvp\" (UID: \"e720e1d0-5a6d-4b76-8b25-5963e24950f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp" Mar 12 18:13:06.243119 master-0 kubenswrapper[4051]: I0312 18:13:06.243096 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs\") pod \"multus-admission-controller-8d675b596-kcpg5\" (UID: \"875bdfaa-b0a4-4412-a477-c962844e7057\") " pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" Mar 12 18:13:06.243258 master-0 kubenswrapper[4051]: I0312 18:13:06.243225 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vr66\" (UniqueName: \"kubernetes.io/projected/45aa4887-c913-4ece-ae34-fcde33832621-kube-api-access-4vr66\") pod \"csi-snapshot-controller-operator-5685fbc7d-649db\" (UID: \"45aa4887-c913-4ece-ae34-fcde33832621\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-649db" Mar 12 18:13:06.243358 master-0 kubenswrapper[4051]: I0312 18:13:06.243330 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-service-ca\") pod 
\"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:06.243404 master-0 kubenswrapper[4051]: I0312 18:13:06.243379 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th8tc\" (UniqueName: \"kubernetes.io/projected/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-kube-api-access-th8tc\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: \"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k" Mar 12 18:13:06.243548 master-0 kubenswrapper[4051]: I0312 18:13:06.243494 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn9nf\" (UniqueName: \"kubernetes.io/projected/062f1b21-2ffc-47da-8334-427c3b2a1a90-kube-api-access-jn9nf\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:13:06.243613 master-0 kubenswrapper[4051]: I0312 18:13:06.243585 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:13:06.243652 master-0 kubenswrapper[4051]: I0312 18:13:06.243638 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:06.243717 master-0 kubenswrapper[4051]: I0312 18:13:06.243694 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/062f1b21-2ffc-47da-8334-427c3b2a1a90-serving-cert\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:13:06.243842 master-0 kubenswrapper[4051]: I0312 18:13:06.243789 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: \"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k" Mar 12 18:13:06.243893 master-0 kubenswrapper[4051]: E0312 18:13:06.243861 4051 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 12 18:13:06.244013 master-0 kubenswrapper[4051]: E0312 18:13:06.243987 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls podName:306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed nodeName:}" failed. No retries permitted until 2026-03-12 18:13:06.743948221 +0000 UTC m=+166.223074492 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-lqpbp" (UID: "306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed") : secret "node-tuning-operator-tls" not found Mar 12 18:13:06.244066 master-0 kubenswrapper[4051]: I0312 18:13:06.244018 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e720e1d0-5a6d-4b76-8b25-5963e24950f5-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-w7xvp\" (UID: \"e720e1d0-5a6d-4b76-8b25-5963e24950f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp" Mar 12 18:13:06.244221 master-0 kubenswrapper[4051]: I0312 18:13:06.244179 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e720e1d0-5a6d-4b76-8b25-5963e24950f5-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-w7xvp\" (UID: \"e720e1d0-5a6d-4b76-8b25-5963e24950f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp" Mar 12 18:13:06.245922 master-0 kubenswrapper[4051]: I0312 18:13:06.245871 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:06.344949 master-0 kubenswrapper[4051]: I0312 18:13:06.344883 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e697746f-fb9e-4d10-ab61-33c68e62cc0d-serving-cert\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: 
\"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:06.345137 master-0 kubenswrapper[4051]: I0312 18:13:06.344957 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-config\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:06.345137 master-0 kubenswrapper[4051]: I0312 18:13:06.344991 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-ca\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:06.345137 master-0 kubenswrapper[4051]: I0312 18:13:06.345013 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-config\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:13:06.345137 master-0 kubenswrapper[4051]: I0312 18:13:06.345061 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:13:06.345137 master-0 kubenswrapper[4051]: I0312 18:13:06.345085 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vct98\" (UniqueName: 
\"kubernetes.io/projected/e697746f-fb9e-4d10-ab61-33c68e62cc0d-kube-api-access-vct98\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:06.345533 master-0 kubenswrapper[4051]: I0312 18:13:06.345450 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2skd\" (UniqueName: \"kubernetes.io/projected/875bdfaa-b0a4-4412-a477-c962844e7057-kube-api-access-l2skd\") pod \"multus-admission-controller-8d675b596-kcpg5\" (UID: \"875bdfaa-b0a4-4412-a477-c962844e7057\") " pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" Mar 12 18:13:06.346050 master-0 kubenswrapper[4051]: I0312 18:13:06.345956 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-client\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:06.346254 master-0 kubenswrapper[4051]: I0312 18:13:06.346195 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs\") pod \"multus-admission-controller-8d675b596-kcpg5\" (UID: \"875bdfaa-b0a4-4412-a477-c962844e7057\") " pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" Mar 12 18:13:06.346349 master-0 kubenswrapper[4051]: I0312 18:13:06.346319 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-config\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:13:06.346401 master-0 
kubenswrapper[4051]: I0312 18:13:06.346362 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vr66\" (UniqueName: \"kubernetes.io/projected/45aa4887-c913-4ece-ae34-fcde33832621-kube-api-access-4vr66\") pod \"csi-snapshot-controller-operator-5685fbc7d-649db\" (UID: \"45aa4887-c913-4ece-ae34-fcde33832621\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-649db" Mar 12 18:13:06.346483 master-0 kubenswrapper[4051]: I0312 18:13:06.346441 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:06.346625 master-0 kubenswrapper[4051]: E0312 18:13:06.346586 4051 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 12 18:13:06.346657 master-0 kubenswrapper[4051]: I0312 18:13:06.346454 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-ca\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:06.346717 master-0 kubenswrapper[4051]: I0312 18:13:06.346583 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th8tc\" (UniqueName: \"kubernetes.io/projected/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-kube-api-access-th8tc\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: \"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k" Mar 12 18:13:06.346766 master-0 kubenswrapper[4051]: E0312 18:13:06.346701 4051 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs podName:875bdfaa-b0a4-4412-a477-c962844e7057 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:06.846664557 +0000 UTC m=+166.325790818 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs") pod "multus-admission-controller-8d675b596-kcpg5" (UID: "875bdfaa-b0a4-4412-a477-c962844e7057") : secret "multus-admission-controller-secret" not found Mar 12 18:13:06.346766 master-0 kubenswrapper[4051]: I0312 18:13:06.346695 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:13:06.346830 master-0 kubenswrapper[4051]: I0312 18:13:06.346792 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn9nf\" (UniqueName: \"kubernetes.io/projected/062f1b21-2ffc-47da-8334-427c3b2a1a90-kube-api-access-jn9nf\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:13:06.346857 master-0 kubenswrapper[4051]: I0312 18:13:06.346842 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:13:06.347107 
master-0 kubenswrapper[4051]: I0312 18:13:06.347058 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/062f1b21-2ffc-47da-8334-427c3b2a1a90-serving-cert\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:13:06.347180 master-0 kubenswrapper[4051]: I0312 18:13:06.347160 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: \"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k" Mar 12 18:13:06.347429 master-0 kubenswrapper[4051]: E0312 18:13:06.347384 4051 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 12 18:13:06.347539 master-0 kubenswrapper[4051]: E0312 18:13:06.347478 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls podName:8ad05507-e242-4ff8-ae80-c16ff9ee68e2 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:06.847452047 +0000 UTC m=+166.326578318 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls") pod "dns-operator-589895fbb7-jqj5k" (UID: "8ad05507-e242-4ff8-ae80-c16ff9ee68e2") : secret "metrics-tls" not found Mar 12 18:13:06.347759 master-0 kubenswrapper[4051]: I0312 18:13:06.347706 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:06.348119 master-0 kubenswrapper[4051]: I0312 18:13:06.348059 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:13:06.349171 master-0 kubenswrapper[4051]: I0312 18:13:06.349111 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-config\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:06.350098 master-0 kubenswrapper[4051]: I0312 18:13:06.350050 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e697746f-fb9e-4d10-ab61-33c68e62cc0d-serving-cert\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:06.350783 master-0 kubenswrapper[4051]: I0312 18:13:06.350752 4051 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-client\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:06.352884 master-0 kubenswrapper[4051]: I0312 18:13:06.352836 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/062f1b21-2ffc-47da-8334-427c3b2a1a90-serving-cert\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:13:06.550176 master-0 kubenswrapper[4051]: I0312 18:13:06.550068 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert\") pod \"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" Mar 12 18:13:06.550176 master-0 kubenswrapper[4051]: I0312 18:13:06.550154 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:13:06.550176 master-0 kubenswrapper[4051]: I0312 18:13:06.550183 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: 
\"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:13:06.550612 master-0 kubenswrapper[4051]: I0312 18:13:06.550221 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:13:06.550612 master-0 kubenswrapper[4051]: E0312 18:13:06.550420 4051 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 18:13:06.550612 master-0 kubenswrapper[4051]: E0312 18:13:06.550472 4051 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 12 18:13:06.550782 master-0 kubenswrapper[4051]: E0312 18:13:06.550483 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics podName:4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:07.550466915 +0000 UTC m=+167.029593156 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-clkx5" (UID: "4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64") : secret "marketplace-operator-metrics" not found Mar 12 18:13:06.550782 master-0 kubenswrapper[4051]: I0312 18:13:06.550732 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:13:06.550903 master-0 kubenswrapper[4051]: E0312 18:13:06.550739 4051 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 18:13:06.550903 master-0 kubenswrapper[4051]: E0312 18:13:06.550830 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls podName:e94d098b-fbcc-4e85-b8ad-42f3a21c822c nodeName:}" failed. No retries permitted until 2026-03-12 18:13:07.550818594 +0000 UTC m=+167.029944835 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-fz79c" (UID: "e94d098b-fbcc-4e85-b8ad-42f3a21c822c") : secret "cluster-monitoring-operator-tls" not found Mar 12 18:13:06.550903 master-0 kubenswrapper[4051]: I0312 18:13:06.550801 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:13:06.550903 master-0 kubenswrapper[4051]: E0312 18:13:06.550852 4051 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 18:13:06.550903 master-0 kubenswrapper[4051]: I0312 18:13:06.550868 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" Mar 12 18:13:06.550903 master-0 kubenswrapper[4051]: E0312 18:13:06.550894 4051 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 12 18:13:06.551202 master-0 kubenswrapper[4051]: E0312 18:13:06.550916 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert podName:51eb717b-d11f-4bc3-8df6-deb51d5889f3 nodeName:}" failed. 
No retries permitted until 2026-03-12 18:13:07.550888516 +0000 UTC m=+167.030014787 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-kwv7s" (UID: "51eb717b-d11f-4bc3-8df6-deb51d5889f3") : secret "package-server-manager-serving-cert" not found Mar 12 18:13:06.551202 master-0 kubenswrapper[4051]: E0312 18:13:06.550806 4051 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 18:13:06.551202 master-0 kubenswrapper[4051]: E0312 18:13:06.550932 4051 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 12 18:13:06.551202 master-0 kubenswrapper[4051]: E0312 18:13:06.551027 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert podName:47850839-bb4b-41e9-ac31-f1cabbb4926d nodeName:}" failed. No retries permitted until 2026-03-12 18:13:07.550975498 +0000 UTC m=+167.030101909 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert") pod "catalog-operator-7d9c49f57b-pslh7" (UID: "47850839-bb4b-41e9-ac31-f1cabbb4926d") : secret "catalog-operator-serving-cert" not found Mar 12 18:13:06.551427 master-0 kubenswrapper[4051]: E0312 18:13:06.551235 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert podName:d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:07.551189244 +0000 UTC m=+167.030315805 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert") pod "olm-operator-d64cfc9db-npt4r" (UID: "d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27") : secret "olm-operator-serving-cert" not found Mar 12 18:13:06.551427 master-0 kubenswrapper[4051]: E0312 18:13:06.551266 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls podName:e22c7035-4b7a-48cb-9abb-db277b387842 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:07.551251645 +0000 UTC m=+167.030378106 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-l4krq" (UID: "e22c7035-4b7a-48cb-9abb-db277b387842") : secret "image-registry-operator-tls" not found Mar 12 18:13:06.551427 master-0 kubenswrapper[4051]: E0312 18:13:06.551288 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls podName:d94dc349-c5cb-4f12-8e48-867030af4981 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:07.551280996 +0000 UTC m=+167.030407467 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls") pod "ingress-operator-677db989d6-4527l" (UID: "d94dc349-c5cb-4f12-8e48-867030af4981") : secret "metrics-tls" not found Mar 12 18:13:06.755070 master-0 kubenswrapper[4051]: I0312 18:13:06.754814 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:06.755295 master-0 kubenswrapper[4051]: E0312 18:13:06.755190 4051 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 12 18:13:06.755507 master-0 kubenswrapper[4051]: E0312 18:13:06.755460 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert podName:306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed nodeName:}" failed. No retries permitted until 2026-03-12 18:13:07.755424124 +0000 UTC m=+167.234550385 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-lqpbp" (UID: "306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed") : secret "performance-addon-operator-webhook-cert" not found Mar 12 18:13:06.755635 master-0 kubenswrapper[4051]: I0312 18:13:06.755590 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:06.755837 master-0 kubenswrapper[4051]: E0312 18:13:06.755795 4051 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 12 18:13:06.755900 master-0 kubenswrapper[4051]: E0312 18:13:06.755883 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls podName:306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed nodeName:}" failed. No retries permitted until 2026-03-12 18:13:07.755867955 +0000 UTC m=+167.234994216 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-lqpbp" (UID: "306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed") : secret "node-tuning-operator-tls" not found Mar 12 18:13:06.856754 master-0 kubenswrapper[4051]: I0312 18:13:06.856650 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs\") pod \"multus-admission-controller-8d675b596-kcpg5\" (UID: \"875bdfaa-b0a4-4412-a477-c962844e7057\") " pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" Mar 12 18:13:06.857666 master-0 kubenswrapper[4051]: E0312 18:13:06.856935 4051 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 12 18:13:06.857666 master-0 kubenswrapper[4051]: E0312 18:13:06.857060 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs podName:875bdfaa-b0a4-4412-a477-c962844e7057 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:07.857028881 +0000 UTC m=+167.336155122 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs") pod "multus-admission-controller-8d675b596-kcpg5" (UID: "875bdfaa-b0a4-4412-a477-c962844e7057") : secret "multus-admission-controller-secret" not found Mar 12 18:13:06.857666 master-0 kubenswrapper[4051]: I0312 18:13:06.857201 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: \"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k" Mar 12 18:13:06.857666 master-0 kubenswrapper[4051]: E0312 18:13:06.857474 4051 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 12 18:13:06.857666 master-0 kubenswrapper[4051]: E0312 18:13:06.857666 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls podName:8ad05507-e242-4ff8-ae80-c16ff9ee68e2 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:07.857636846 +0000 UTC m=+167.336763117 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls") pod "dns-operator-589895fbb7-jqj5k" (UID: "8ad05507-e242-4ff8-ae80-c16ff9ee68e2") : secret "metrics-tls" not found Mar 12 18:13:07.081564 master-0 kubenswrapper[4051]: I0312 18:13:07.078820 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-clkx5"] Mar 12 18:13:07.085301 master-0 kubenswrapper[4051]: I0312 18:13:07.085216 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c"] Mar 12 18:13:07.093674 master-0 kubenswrapper[4051]: I0312 18:13:07.093557 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4"] Mar 12 18:13:07.095257 master-0 kubenswrapper[4051]: I0312 18:13:07.095200 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k"] Mar 12 18:13:07.096957 master-0 kubenswrapper[4051]: I0312 18:13:07.096902 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s"] Mar 12 18:13:07.098465 master-0 kubenswrapper[4051]: I0312 18:13:07.098415 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98"] Mar 12 18:13:07.100202 master-0 kubenswrapper[4051]: I0312 18:13:07.100149 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl"] Mar 12 18:13:07.101806 master-0 kubenswrapper[4051]: I0312 18:13:07.101751 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn"] Mar 12 18:13:07.103396 master-0 
kubenswrapper[4051]: I0312 18:13:07.103358 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp"] Mar 12 18:13:07.104937 master-0 kubenswrapper[4051]: I0312 18:13:07.104902 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7"] Mar 12 18:13:07.106442 master-0 kubenswrapper[4051]: I0312 18:13:07.106408 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp"] Mar 12 18:13:07.107967 master-0 kubenswrapper[4051]: I0312 18:13:07.107934 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b"] Mar 12 18:13:07.109700 master-0 kubenswrapper[4051]: I0312 18:13:07.109643 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-649db"] Mar 12 18:13:07.111585 master-0 kubenswrapper[4051]: I0312 18:13:07.111535 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-kcpg5"] Mar 12 18:13:07.126797 master-0 kubenswrapper[4051]: I0312 18:13:07.126702 4051 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4k8wm"] Mar 12 18:13:07.127726 master-0 kubenswrapper[4051]: I0312 18:13:07.127669 4051 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4k8wm" Mar 12 18:13:07.131399 master-0 kubenswrapper[4051]: I0312 18:13:07.131320 4051 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 12 18:13:07.160482 master-0 kubenswrapper[4051]: I0312 18:13:07.160430 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6ggg\" (UniqueName: \"kubernetes.io/projected/236f2886-bb69-49a7-9471-36454fd1cbd3-kube-api-access-b6ggg\") pod \"openshift-apiserver-operator-799b6db4d7-6qlzz\" (UID: \"236f2886-bb69-49a7-9471-36454fd1cbd3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" Mar 12 18:13:07.166541 master-0 kubenswrapper[4051]: I0312 18:13:07.165182 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbnx8\" (UniqueName: \"kubernetes.io/projected/51eb717b-d11f-4bc3-8df6-deb51d5889f3-kube-api-access-gbnx8\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:13:07.171457 master-0 kubenswrapper[4051]: I0312 18:13:07.171272 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d94dc349-c5cb-4f12-8e48-867030af4981-bound-sa-token\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" Mar 12 18:13:07.172237 master-0 kubenswrapper[4051]: I0312 18:13:07.172195 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vr66\" (UniqueName: \"kubernetes.io/projected/45aa4887-c913-4ece-ae34-fcde33832621-kube-api-access-4vr66\") pod \"csi-snapshot-controller-operator-5685fbc7d-649db\" (UID: \"45aa4887-c913-4ece-ae34-fcde33832621\") " 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-649db" Mar 12 18:13:07.175120 master-0 kubenswrapper[4051]: I0312 18:13:07.175084 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-4527l"] Mar 12 18:13:07.182763 master-0 kubenswrapper[4051]: I0312 18:13:07.181807 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b"] Mar 12 18:13:07.187915 master-0 kubenswrapper[4051]: I0312 18:13:07.187867 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s55hv\" (UniqueName: \"kubernetes.io/projected/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-kube-api-access-s55hv\") pod \"openshift-controller-manager-operator-8565d84698-m6hsp\" (UID: \"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp" Mar 12 18:13:07.190457 master-0 kubenswrapper[4051]: I0312 18:13:07.190401 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e22c7035-4b7a-48cb-9abb-db277b387842-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:13:07.190689 master-0 kubenswrapper[4051]: I0312 18:13:07.190662 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp"] Mar 12 18:13:07.190971 master-0 kubenswrapper[4051]: I0312 18:13:07.190935 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq"] Mar 12 18:13:07.192645 master-0 kubenswrapper[4051]: I0312 18:13:07.192617 4051 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zjmcv\" (UniqueName: \"kubernetes.io/projected/d94dc349-c5cb-4f12-8e48-867030af4981-kube-api-access-zjmcv\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" Mar 12 18:13:07.194381 master-0 kubenswrapper[4051]: I0312 18:13:07.194333 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn9nf\" (UniqueName: \"kubernetes.io/projected/062f1b21-2ffc-47da-8334-427c3b2a1a90-kube-api-access-jn9nf\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:13:07.195267 master-0 kubenswrapper[4051]: I0312 18:13:07.195231 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlf77\" (UniqueName: \"kubernetes.io/projected/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-kube-api-access-wlf77\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:07.196008 master-0 kubenswrapper[4051]: I0312 18:13:07.195974 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2skd\" (UniqueName: \"kubernetes.io/projected/875bdfaa-b0a4-4412-a477-c962844e7057-kube-api-access-l2skd\") pod \"multus-admission-controller-8d675b596-kcpg5\" (UID: \"875bdfaa-b0a4-4412-a477-c962844e7057\") " pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" Mar 12 18:13:07.196110 master-0 kubenswrapper[4051]: I0312 18:13:07.196012 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggsdx\" (UniqueName: \"kubernetes.io/projected/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-kube-api-access-ggsdx\") pod 
\"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" Mar 12 18:13:07.196190 master-0 kubenswrapper[4051]: I0312 18:13:07.196145 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfpb9\" (UniqueName: \"kubernetes.io/projected/37cd9c0a-697e-4e67-932b-b331ff77c8c0-kube-api-access-pfpb9\") pod \"openshift-config-operator-64488f9d78-tjp2j\" (UID: \"37cd9c0a-697e-4e67-932b-b331ff77c8c0\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:13:07.196609 master-0 kubenswrapper[4051]: I0312 18:13:07.196582 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz"] Mar 12 18:13:07.198287 master-0 kubenswrapper[4051]: I0312 18:13:07.198242 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e720e1d0-5a6d-4b76-8b25-5963e24950f5-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-w7xvp\" (UID: \"e720e1d0-5a6d-4b76-8b25-5963e24950f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp" Mar 12 18:13:07.198362 master-0 kubenswrapper[4051]: I0312 18:13:07.198259 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th8tc\" (UniqueName: \"kubernetes.io/projected/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-kube-api-access-th8tc\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: \"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k" Mar 12 18:13:07.198786 master-0 kubenswrapper[4051]: I0312 18:13:07.198758 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab926874-9722-4e65-9084-27b2f9915450-kube-api-access\") pod 
\"kube-apiserver-operator-68bd585b-ddsbn\" (UID: \"ab926874-9722-4e65-9084-27b2f9915450\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn" Mar 12 18:13:07.200996 master-0 kubenswrapper[4051]: I0312 18:13:07.200944 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4ae1240-e04e-48e9-88df-9f1a53508da7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpb6k\" (UID: \"d4ae1240-e04e-48e9-88df-9f1a53508da7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k" Mar 12 18:13:07.201072 master-0 kubenswrapper[4051]: I0312 18:13:07.201056 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vct98\" (UniqueName: \"kubernetes.io/projected/e697746f-fb9e-4d10-ab61-33c68e62cc0d-kube-api-access-vct98\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:07.201130 master-0 kubenswrapper[4051]: I0312 18:13:07.201107 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdlxn\" (UniqueName: \"kubernetes.io/projected/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-kube-api-access-fdlxn\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:13:07.201874 master-0 kubenswrapper[4051]: I0312 18:13:07.201838 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmxc2\" (UniqueName: \"kubernetes.io/projected/e22c7035-4b7a-48cb-9abb-db277b387842-kube-api-access-pmxc2\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:13:07.202623 master-0 
kubenswrapper[4051]: I0312 18:13:07.202604 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vpbp\" (UniqueName: \"kubernetes.io/projected/a1e2340b-ebca-40de-b1e0-8133999cd860-kube-api-access-6vpbp\") pod \"kube-storage-version-migrator-operator-7f65c457f5-2xr98\" (UID: \"a1e2340b-ebca-40de-b1e0-8133999cd860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" Mar 12 18:13:07.203418 master-0 kubenswrapper[4051]: I0312 18:13:07.203342 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkt7d\" (UniqueName: \"kubernetes.io/projected/055f5c67-f512-4510-99c5-e194944b0599-kube-api-access-tkt7d\") pod \"service-ca-operator-69b6fc6b88-9xlgl\" (UID: \"055f5c67-f512-4510-99c5-e194944b0599\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl" Mar 12 18:13:07.203981 master-0 kubenswrapper[4051]: I0312 18:13:07.203937 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krrkl\" (UniqueName: \"kubernetes.io/projected/47850839-bb4b-41e9-ac31-f1cabbb4926d-kube-api-access-krrkl\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:13:07.204410 master-0 kubenswrapper[4051]: I0312 18:13:07.204373 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcvfv\" (UniqueName: \"kubernetes.io/projected/f3a2cda2-b70f-4128-a1be-48503f5aad6d-kube-api-access-tcvfv\") pod \"cluster-olm-operator-77899cf6d-6nvn4\" (UID: \"f3a2cda2-b70f-4128-a1be-48503f5aad6d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" Mar 12 18:13:07.204456 master-0 kubenswrapper[4051]: I0312 18:13:07.204430 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-jqj5k"] Mar 12 
18:13:07.206110 master-0 kubenswrapper[4051]: I0312 18:13:07.206063 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r"] Mar 12 18:13:07.207634 master-0 kubenswrapper[4051]: I0312 18:13:07.207450 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j"] Mar 12 18:13:07.210789 master-0 kubenswrapper[4051]: I0312 18:13:07.210735 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bttzm\" (UniqueName: \"kubernetes.io/projected/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-kube-api-access-bttzm\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:13:07.264762 master-0 kubenswrapper[4051]: I0312 18:13:07.264704 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d92dddc8-a810-43f5-8beb-32d1c8ad8381-host-slash\") pod \"iptables-alerter-4k8wm\" (UID: \"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm" Mar 12 18:13:07.265021 master-0 kubenswrapper[4051]: I0312 18:13:07.264791 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d92dddc8-a810-43f5-8beb-32d1c8ad8381-iptables-alerter-script\") pod \"iptables-alerter-4k8wm\" (UID: \"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm" Mar 12 18:13:07.265021 master-0 kubenswrapper[4051]: I0312 18:13:07.264883 4051 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l22gw\" (UniqueName: 
\"kubernetes.io/projected/d92dddc8-a810-43f5-8beb-32d1c8ad8381-kube-api-access-l22gw\") pod \"iptables-alerter-4k8wm\" (UID: \"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm" Mar 12 18:13:07.286874 master-0 kubenswrapper[4051]: I0312 18:13:07.286117 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp" Mar 12 18:13:07.309471 master-0 kubenswrapper[4051]: I0312 18:13:07.309429 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:07.322539 master-0 kubenswrapper[4051]: I0312 18:13:07.322422 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:13:07.329161 master-0 kubenswrapper[4051]: I0312 18:13:07.329044 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-649db" Mar 12 18:13:07.346429 master-0 kubenswrapper[4051]: I0312 18:13:07.345963 4051 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k" Mar 12 18:13:07.365410 master-0 kubenswrapper[4051]: I0312 18:13:07.365347 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d92dddc8-a810-43f5-8beb-32d1c8ad8381-host-slash\") pod \"iptables-alerter-4k8wm\" (UID: \"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm" Mar 12 18:13:07.366545 master-0 kubenswrapper[4051]: I0312 18:13:07.365410 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d92dddc8-a810-43f5-8beb-32d1c8ad8381-iptables-alerter-script\") pod \"iptables-alerter-4k8wm\" (UID: \"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm" Mar 12 18:13:07.366545 master-0 kubenswrapper[4051]: I0312 18:13:07.365483 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l22gw\" (UniqueName: \"kubernetes.io/projected/d92dddc8-a810-43f5-8beb-32d1c8ad8381-kube-api-access-l22gw\") pod \"iptables-alerter-4k8wm\" (UID: \"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm" Mar 12 18:13:07.366545 master-0 kubenswrapper[4051]: I0312 18:13:07.365621 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d92dddc8-a810-43f5-8beb-32d1c8ad8381-host-slash\") pod \"iptables-alerter-4k8wm\" (UID: \"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm" Mar 12 18:13:07.366545 master-0 kubenswrapper[4051]: I0312 18:13:07.366424 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/d92dddc8-a810-43f5-8beb-32d1c8ad8381-iptables-alerter-script\") pod \"iptables-alerter-4k8wm\" (UID: \"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm" Mar 12 18:13:07.385817 master-0 kubenswrapper[4051]: I0312 18:13:07.381091 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" Mar 12 18:13:07.387458 master-0 kubenswrapper[4051]: I0312 18:13:07.387413 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" Mar 12 18:13:07.389680 master-0 kubenswrapper[4051]: I0312 18:13:07.389648 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l22gw\" (UniqueName: \"kubernetes.io/projected/d92dddc8-a810-43f5-8beb-32d1c8ad8381-kube-api-access-l22gw\") pod \"iptables-alerter-4k8wm\" (UID: \"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm" Mar 12 18:13:07.404012 master-0 kubenswrapper[4051]: I0312 18:13:07.403928 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" Mar 12 18:13:07.414611 master-0 kubenswrapper[4051]: I0312 18:13:07.414169 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn" Mar 12 18:13:07.421677 master-0 kubenswrapper[4051]: I0312 18:13:07.421633 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl" Mar 12 18:13:07.428371 master-0 kubenswrapper[4051]: I0312 18:13:07.428340 4051 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:13:07.433735 master-0 kubenswrapper[4051]: I0312 18:13:07.433456 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp" Mar 12 18:13:07.462630 master-0 kubenswrapper[4051]: I0312 18:13:07.462595 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4k8wm" Mar 12 18:13:07.489456 master-0 kubenswrapper[4051]: I0312 18:13:07.489410 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp"] Mar 12 18:13:07.510691 master-0 kubenswrapper[4051]: W0312 18:13:07.509838 4051 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode720e1d0_5a6d_4b76_8b25_5963e24950f5.slice/crio-b2474f5d479286c70d652654d0e6946155d42c6b7dd11abc4f60fd4bf3123854 WatchSource:0}: Error finding container b2474f5d479286c70d652654d0e6946155d42c6b7dd11abc4f60fd4bf3123854: Status 404 returned error can't find the container with id b2474f5d479286c70d652654d0e6946155d42c6b7dd11abc4f60fd4bf3123854 Mar 12 18:13:07.574017 master-0 kubenswrapper[4051]: E0312 18:13:07.571666 4051 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 18:13:07.574017 master-0 kubenswrapper[4051]: E0312 18:13:07.571735 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert podName:51eb717b-d11f-4bc3-8df6-deb51d5889f3 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:09.571715277 +0000 UTC m=+169.050841508 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-kwv7s" (UID: "51eb717b-d11f-4bc3-8df6-deb51d5889f3") : secret "package-server-manager-serving-cert" not found Mar 12 18:13:07.577335 master-0 kubenswrapper[4051]: I0312 18:13:07.571494 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:13:07.578989 master-0 kubenswrapper[4051]: I0312 18:13:07.578945 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:13:07.579056 master-0 kubenswrapper[4051]: I0312 18:13:07.579010 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" Mar 12 18:13:07.579152 master-0 kubenswrapper[4051]: I0312 18:13:07.579123 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert\") pod 
\"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" Mar 12 18:13:07.579187 master-0 kubenswrapper[4051]: I0312 18:13:07.579174 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:13:07.579218 master-0 kubenswrapper[4051]: I0312 18:13:07.579201 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:13:07.579591 master-0 kubenswrapper[4051]: I0312 18:13:07.579241 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:13:07.579591 master-0 kubenswrapper[4051]: E0312 18:13:07.579447 4051 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 18:13:07.579591 master-0 kubenswrapper[4051]: E0312 18:13:07.579542 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert podName:47850839-bb4b-41e9-ac31-f1cabbb4926d 
nodeName:}" failed. No retries permitted until 2026-03-12 18:13:09.579504856 +0000 UTC m=+169.058631087 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert") pod "catalog-operator-7d9c49f57b-pslh7" (UID: "47850839-bb4b-41e9-ac31-f1cabbb4926d") : secret "catalog-operator-serving-cert" not found Mar 12 18:13:07.580746 master-0 kubenswrapper[4051]: E0312 18:13:07.579938 4051 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 12 18:13:07.580746 master-0 kubenswrapper[4051]: E0312 18:13:07.580613 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls podName:e22c7035-4b7a-48cb-9abb-db277b387842 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:09.580600224 +0000 UTC m=+169.059726455 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-l4krq" (UID: "e22c7035-4b7a-48cb-9abb-db277b387842") : secret "image-registry-operator-tls" not found Mar 12 18:13:07.580746 master-0 kubenswrapper[4051]: E0312 18:13:07.580665 4051 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 12 18:13:07.580746 master-0 kubenswrapper[4051]: E0312 18:13:07.580685 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls podName:d94dc349-c5cb-4f12-8e48-867030af4981 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:09.580679326 +0000 UTC m=+169.059805557 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls") pod "ingress-operator-677db989d6-4527l" (UID: "d94dc349-c5cb-4f12-8e48-867030af4981") : secret "metrics-tls" not found Mar 12 18:13:07.580746 master-0 kubenswrapper[4051]: E0312 18:13:07.580715 4051 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 18:13:07.580746 master-0 kubenswrapper[4051]: E0312 18:13:07.580733 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert podName:d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:09.580727937 +0000 UTC m=+169.059854168 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert") pod "olm-operator-d64cfc9db-npt4r" (UID: "d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27") : secret "olm-operator-serving-cert" not found Mar 12 18:13:07.580914 master-0 kubenswrapper[4051]: E0312 18:13:07.580848 4051 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 18:13:07.580914 master-0 kubenswrapper[4051]: E0312 18:13:07.580892 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics podName:4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:09.580879771 +0000 UTC m=+169.060006002 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-clkx5" (UID: "4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64") : secret "marketplace-operator-metrics" not found Mar 12 18:13:07.580914 master-0 kubenswrapper[4051]: E0312 18:13:07.580901 4051 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 12 18:13:07.581005 master-0 kubenswrapper[4051]: E0312 18:13:07.580928 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls podName:e94d098b-fbcc-4e85-b8ad-42f3a21c822c nodeName:}" failed. No retries permitted until 2026-03-12 18:13:09.580921152 +0000 UTC m=+169.060047383 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-fz79c" (UID: "e94d098b-fbcc-4e85-b8ad-42f3a21c822c") : secret "cluster-monitoring-operator-tls" not found Mar 12 18:13:07.599905 master-0 kubenswrapper[4051]: I0312 18:13:07.598733 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-649db"] Mar 12 18:13:07.619868 master-0 kubenswrapper[4051]: I0312 18:13:07.617999 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b"] Mar 12 18:13:07.631155 master-0 kubenswrapper[4051]: I0312 18:13:07.630996 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4"] Mar 12 18:13:07.655486 master-0 kubenswrapper[4051]: I0312 18:13:07.654831 4051 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp" event={"ID":"e720e1d0-5a6d-4b76-8b25-5963e24950f5","Type":"ContainerStarted","Data":"b2474f5d479286c70d652654d0e6946155d42c6b7dd11abc4f60fd4bf3123854"} Mar 12 18:13:07.656826 master-0 kubenswrapper[4051]: I0312 18:13:07.656797 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4k8wm" event={"ID":"d92dddc8-a810-43f5-8beb-32d1c8ad8381","Type":"ContainerStarted","Data":"419bbcc10e95d196cd0f08dbf057bbc2aa7a617fdfb7f0d1b356baa7bbabca04"} Mar 12 18:13:07.662475 master-0 kubenswrapper[4051]: I0312 18:13:07.662424 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k"] Mar 12 18:13:07.674935 master-0 kubenswrapper[4051]: W0312 18:13:07.674804 4051 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4ae1240_e04e_48e9_88df_9f1a53508da7.slice/crio-38c299b0655225599ee9b03928de90a7927480cc6329508c816e9e361bbcfa16 WatchSource:0}: Error finding container 38c299b0655225599ee9b03928de90a7927480cc6329508c816e9e361bbcfa16: Status 404 returned error can't find the container with id 38c299b0655225599ee9b03928de90a7927480cc6329508c816e9e361bbcfa16 Mar 12 18:13:07.771975 master-0 kubenswrapper[4051]: I0312 18:13:07.771929 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b"] Mar 12 18:13:07.777662 master-0 kubenswrapper[4051]: W0312 18:13:07.777638 4051 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod062f1b21_2ffc_47da_8334_427c3b2a1a90.slice/crio-91d19dd0041e348f5ab95fd10ff19be4195ac501d593d4346b94e73b4b7bfba3 WatchSource:0}: Error finding container 
91d19dd0041e348f5ab95fd10ff19be4195ac501d593d4346b94e73b4b7bfba3: Status 404 returned error can't find the container with id 91d19dd0041e348f5ab95fd10ff19be4195ac501d593d4346b94e73b4b7bfba3 Mar 12 18:13:07.784904 master-0 kubenswrapper[4051]: I0312 18:13:07.784828 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:07.785008 master-0 kubenswrapper[4051]: I0312 18:13:07.784984 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:07.785160 master-0 kubenswrapper[4051]: E0312 18:13:07.785081 4051 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 12 18:13:07.785160 master-0 kubenswrapper[4051]: E0312 18:13:07.785132 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls podName:306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed nodeName:}" failed. No retries permitted until 2026-03-12 18:13:09.785117491 +0000 UTC m=+169.264243722 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-lqpbp" (UID: "306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed") : secret "node-tuning-operator-tls" not found Mar 12 18:13:07.785374 master-0 kubenswrapper[4051]: E0312 18:13:07.785310 4051 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 12 18:13:07.785506 master-0 kubenswrapper[4051]: E0312 18:13:07.785489 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert podName:306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed nodeName:}" failed. No retries permitted until 2026-03-12 18:13:09.7854693 +0000 UTC m=+169.264595541 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-lqpbp" (UID: "306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed") : secret "performance-addon-operator-webhook-cert" not found Mar 12 18:13:07.886216 master-0 kubenswrapper[4051]: I0312 18:13:07.885959 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs\") pod \"multus-admission-controller-8d675b596-kcpg5\" (UID: \"875bdfaa-b0a4-4412-a477-c962844e7057\") " pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" Mar 12 18:13:07.886216 master-0 kubenswrapper[4051]: I0312 18:13:07.886053 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: 
\"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k" Mar 12 18:13:07.886216 master-0 kubenswrapper[4051]: E0312 18:13:07.886188 4051 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 12 18:13:07.887647 master-0 kubenswrapper[4051]: E0312 18:13:07.886270 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs podName:875bdfaa-b0a4-4412-a477-c962844e7057 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:09.886245296 +0000 UTC m=+169.365371567 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs") pod "multus-admission-controller-8d675b596-kcpg5" (UID: "875bdfaa-b0a4-4412-a477-c962844e7057") : secret "multus-admission-controller-secret" not found Mar 12 18:13:07.887647 master-0 kubenswrapper[4051]: E0312 18:13:07.886373 4051 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 12 18:13:07.887647 master-0 kubenswrapper[4051]: E0312 18:13:07.886472 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls podName:8ad05507-e242-4ff8-ae80-c16ff9ee68e2 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:09.886450931 +0000 UTC m=+169.365577282 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls") pod "dns-operator-589895fbb7-jqj5k" (UID: "8ad05507-e242-4ff8-ae80-c16ff9ee68e2") : secret "metrics-tls" not found Mar 12 18:13:07.919490 master-0 kubenswrapper[4051]: I0312 18:13:07.919426 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl"] Mar 12 18:13:07.921701 master-0 kubenswrapper[4051]: I0312 18:13:07.921666 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz"] Mar 12 18:13:07.927603 master-0 kubenswrapper[4051]: I0312 18:13:07.927565 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98"] Mar 12 18:13:07.928707 master-0 kubenswrapper[4051]: W0312 18:13:07.928663 4051 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod055f5c67_f512_4510_99c5_e194944b0599.slice/crio-60ce66e2a62ed17df4cad9067d0bb6d4940a38dc4b5a5337ba95a9117aca3c70 WatchSource:0}: Error finding container 60ce66e2a62ed17df4cad9067d0bb6d4940a38dc4b5a5337ba95a9117aca3c70: Status 404 returned error can't find the container with id 60ce66e2a62ed17df4cad9067d0bb6d4940a38dc4b5a5337ba95a9117aca3c70 Mar 12 18:13:07.929683 master-0 kubenswrapper[4051]: W0312 18:13:07.929648 4051 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod236f2886_bb69_49a7_9471_36454fd1cbd3.slice/crio-bafb4a547df5e8f39b94a88d98e85d34e5a0230468f2013bc4da6ee9fbc59ee3 WatchSource:0}: Error finding container bafb4a547df5e8f39b94a88d98e85d34e5a0230468f2013bc4da6ee9fbc59ee3: Status 404 returned error can't find the container with id 
bafb4a547df5e8f39b94a88d98e85d34e5a0230468f2013bc4da6ee9fbc59ee3 Mar 12 18:13:07.935894 master-0 kubenswrapper[4051]: W0312 18:13:07.935828 4051 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1e2340b_ebca_40de_b1e0_8133999cd860.slice/crio-e54bd9f4ed4a4d50ffb03de0e433f3897b33df6a46c77fdb71d0900fa9a91e17 WatchSource:0}: Error finding container e54bd9f4ed4a4d50ffb03de0e433f3897b33df6a46c77fdb71d0900fa9a91e17: Status 404 returned error can't find the container with id e54bd9f4ed4a4d50ffb03de0e433f3897b33df6a46c77fdb71d0900fa9a91e17 Mar 12 18:13:07.966814 master-0 kubenswrapper[4051]: I0312 18:13:07.966282 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j"] Mar 12 18:13:07.967537 master-0 kubenswrapper[4051]: I0312 18:13:07.967479 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp"] Mar 12 18:13:07.968366 master-0 kubenswrapper[4051]: I0312 18:13:07.968324 4051 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn"] Mar 12 18:13:07.971386 master-0 kubenswrapper[4051]: W0312 18:13:07.971352 4051 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod223a548b_a3ad_40dd_82de_e3dbb7f3e4fa.slice/crio-64939e6ed4a35637d0a2c2bc028af5d4314b96efb512849766779eb1c4382a35 WatchSource:0}: Error finding container 64939e6ed4a35637d0a2c2bc028af5d4314b96efb512849766779eb1c4382a35: Status 404 returned error can't find the container with id 64939e6ed4a35637d0a2c2bc028af5d4314b96efb512849766779eb1c4382a35 Mar 12 18:13:07.972854 master-0 kubenswrapper[4051]: W0312 18:13:07.972815 4051 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab926874_9722_4e65_9084_27b2f9915450.slice/crio-ecf7670cd0c657ac23db39730253e7de6d6d1e9634e025fe447cc9e07fe1d91a WatchSource:0}: Error finding container ecf7670cd0c657ac23db39730253e7de6d6d1e9634e025fe447cc9e07fe1d91a: Status 404 returned error can't find the container with id ecf7670cd0c657ac23db39730253e7de6d6d1e9634e025fe447cc9e07fe1d91a Mar 12 18:13:08.669956 master-0 kubenswrapper[4051]: I0312 18:13:08.669908 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" event={"ID":"062f1b21-2ffc-47da-8334-427c3b2a1a90","Type":"ContainerStarted","Data":"91d19dd0041e348f5ab95fd10ff19be4195ac501d593d4346b94e73b4b7bfba3"} Mar 12 18:13:08.671863 master-0 kubenswrapper[4051]: I0312 18:13:08.671837 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-649db" event={"ID":"45aa4887-c913-4ece-ae34-fcde33832621","Type":"ContainerStarted","Data":"e3cce5ce786ddb4af71a8112135cad1426f074e7b12c67ae740271017aa946b3"} Mar 12 18:13:08.676397 master-0 kubenswrapper[4051]: I0312 18:13:08.676301 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" event={"ID":"e697746f-fb9e-4d10-ab61-33c68e62cc0d","Type":"ContainerStarted","Data":"1c062db0efa15fd6679d2718a5857a9b8db81f25fe5e3a47bd35e6f192db0dd6"} Mar 12 18:13:08.689261 master-0 kubenswrapper[4051]: I0312 18:13:08.689194 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" event={"ID":"a1e2340b-ebca-40de-b1e0-8133999cd860","Type":"ContainerStarted","Data":"e54bd9f4ed4a4d50ffb03de0e433f3897b33df6a46c77fdb71d0900fa9a91e17"} Mar 12 18:13:08.690529 master-0 kubenswrapper[4051]: I0312 18:13:08.690482 4051 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" event={"ID":"f3a2cda2-b70f-4128-a1be-48503f5aad6d","Type":"ContainerStarted","Data":"ab67b82c7d40212f9275c2d813cf0ddfb1de4b0eb68ab01e5d797fad81a0d351"} Mar 12 18:13:08.691781 master-0 kubenswrapper[4051]: I0312 18:13:08.691711 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" event={"ID":"236f2886-bb69-49a7-9471-36454fd1cbd3","Type":"ContainerStarted","Data":"bafb4a547df5e8f39b94a88d98e85d34e5a0230468f2013bc4da6ee9fbc59ee3"} Mar 12 18:13:08.693337 master-0 kubenswrapper[4051]: I0312 18:13:08.693273 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn" event={"ID":"ab926874-9722-4e65-9084-27b2f9915450","Type":"ContainerStarted","Data":"f47fabdc4bdd8a3562bf6c4bb328b7b2603314ba7c3e007528769af4852f929f"} Mar 12 18:13:08.693337 master-0 kubenswrapper[4051]: I0312 18:13:08.693298 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn" event={"ID":"ab926874-9722-4e65-9084-27b2f9915450","Type":"ContainerStarted","Data":"ecf7670cd0c657ac23db39730253e7de6d6d1e9634e025fe447cc9e07fe1d91a"} Mar 12 18:13:08.694618 master-0 kubenswrapper[4051]: I0312 18:13:08.694414 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp" event={"ID":"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa","Type":"ContainerStarted","Data":"64939e6ed4a35637d0a2c2bc028af5d4314b96efb512849766779eb1c4382a35"} Mar 12 18:13:08.696136 master-0 kubenswrapper[4051]: I0312 18:13:08.696106 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" 
event={"ID":"37cd9c0a-697e-4e67-932b-b331ff77c8c0","Type":"ContainerStarted","Data":"1d311f9e8cf6ad3cdfb6335b00f9729ed813d3bacc476060ca21b806ee856231"} Mar 12 18:13:08.697274 master-0 kubenswrapper[4051]: I0312 18:13:08.697239 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl" event={"ID":"055f5c67-f512-4510-99c5-e194944b0599","Type":"ContainerStarted","Data":"60ce66e2a62ed17df4cad9067d0bb6d4940a38dc4b5a5337ba95a9117aca3c70"} Mar 12 18:13:08.698756 master-0 kubenswrapper[4051]: I0312 18:13:08.698724 4051 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k" event={"ID":"d4ae1240-e04e-48e9-88df-9f1a53508da7","Type":"ContainerStarted","Data":"38c299b0655225599ee9b03928de90a7927480cc6329508c816e9e361bbcfa16"} Mar 12 18:13:08.707829 master-0 kubenswrapper[4051]: I0312 18:13:08.707698 4051 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn" podStartSLOduration=125.707671069 podStartE2EDuration="2m5.707671069s" podCreationTimestamp="2026-03-12 18:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:13:08.70689304 +0000 UTC m=+168.186019271" watchObservedRunningTime="2026-03-12 18:13:08.707671069 +0000 UTC m=+168.186797300" Mar 12 18:13:09.603446 master-0 kubenswrapper[4051]: I0312 18:13:09.603391 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert\") pod \"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" Mar 12 18:13:09.603446 master-0 kubenswrapper[4051]: I0312 18:13:09.603450 4051 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:13:09.604147 master-0 kubenswrapper[4051]: E0312 18:13:09.603565 4051 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 18:13:09.604147 master-0 kubenswrapper[4051]: I0312 18:13:09.603611 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:13:09.604147 master-0 kubenswrapper[4051]: E0312 18:13:09.603634 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert podName:d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:13.603615531 +0000 UTC m=+173.082741762 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert") pod "olm-operator-d64cfc9db-npt4r" (UID: "d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27") : secret "olm-operator-serving-cert" not found Mar 12 18:13:09.604147 master-0 kubenswrapper[4051]: E0312 18:13:09.603659 4051 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 18:13:09.604147 master-0 kubenswrapper[4051]: I0312 18:13:09.603684 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:13:09.604147 master-0 kubenswrapper[4051]: E0312 18:13:09.603576 4051 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 12 18:13:09.604147 master-0 kubenswrapper[4051]: E0312 18:13:09.603709 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics podName:4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:13.603692423 +0000 UTC m=+173.082818764 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-clkx5" (UID: "4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64") : secret "marketplace-operator-metrics" not found Mar 12 18:13:09.604147 master-0 kubenswrapper[4051]: E0312 18:13:09.603739 4051 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 18:13:09.604147 master-0 kubenswrapper[4051]: E0312 18:13:09.603754 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls podName:e94d098b-fbcc-4e85-b8ad-42f3a21c822c nodeName:}" failed. No retries permitted until 2026-03-12 18:13:13.603736275 +0000 UTC m=+173.082862626 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-fz79c" (UID: "e94d098b-fbcc-4e85-b8ad-42f3a21c822c") : secret "cluster-monitoring-operator-tls" not found Mar 12 18:13:09.604147 master-0 kubenswrapper[4051]: I0312 18:13:09.603814 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:13:09.604147 master-0 kubenswrapper[4051]: I0312 18:13:09.603840 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:13:09.604147 master-0 kubenswrapper[4051]: E0312 18:13:09.603847 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert podName:47850839-bb4b-41e9-ac31-f1cabbb4926d nodeName:}" failed. No retries permitted until 2026-03-12 18:13:13.603838737 +0000 UTC m=+173.082964968 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert") pod "catalog-operator-7d9c49f57b-pslh7" (UID: "47850839-bb4b-41e9-ac31-f1cabbb4926d") : secret "catalog-operator-serving-cert" not found Mar 12 18:13:09.604147 master-0 kubenswrapper[4051]: I0312 18:13:09.603863 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" Mar 12 18:13:09.604147 master-0 kubenswrapper[4051]: E0312 18:13:09.603892 4051 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 18:13:09.604147 master-0 kubenswrapper[4051]: E0312 18:13:09.603916 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert podName:51eb717b-d11f-4bc3-8df6-deb51d5889f3 nodeName:}" failed. 
No retries permitted until 2026-03-12 18:13:13.603909659 +0000 UTC m=+173.083035890 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-kwv7s" (UID: "51eb717b-d11f-4bc3-8df6-deb51d5889f3") : secret "package-server-manager-serving-cert" not found Mar 12 18:13:09.605864 master-0 kubenswrapper[4051]: E0312 18:13:09.603960 4051 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 12 18:13:09.605864 master-0 kubenswrapper[4051]: E0312 18:13:09.603970 4051 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 12 18:13:09.605864 master-0 kubenswrapper[4051]: E0312 18:13:09.603978 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls podName:e22c7035-4b7a-48cb-9abb-db277b387842 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:13.603973061 +0000 UTC m=+173.083099292 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-l4krq" (UID: "e22c7035-4b7a-48cb-9abb-db277b387842") : secret "image-registry-operator-tls" not found Mar 12 18:13:09.605864 master-0 kubenswrapper[4051]: E0312 18:13:09.603993 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls podName:d94dc349-c5cb-4f12-8e48-867030af4981 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:13.603985991 +0000 UTC m=+173.083112352 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls") pod "ingress-operator-677db989d6-4527l" (UID: "d94dc349-c5cb-4f12-8e48-867030af4981") : secret "metrics-tls" not found Mar 12 18:13:09.806879 master-0 kubenswrapper[4051]: I0312 18:13:09.806827 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:09.807287 master-0 kubenswrapper[4051]: E0312 18:13:09.807266 4051 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 12 18:13:09.807344 master-0 kubenswrapper[4051]: E0312 18:13:09.807326 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert podName:306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed nodeName:}" failed. No retries permitted until 2026-03-12 18:13:13.807311418 +0000 UTC m=+173.286437649 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-lqpbp" (UID: "306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed") : secret "performance-addon-operator-webhook-cert" not found Mar 12 18:13:09.807593 master-0 kubenswrapper[4051]: I0312 18:13:09.807571 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:09.807656 master-0 kubenswrapper[4051]: E0312 18:13:09.807645 4051 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 12 18:13:09.807712 master-0 kubenswrapper[4051]: E0312 18:13:09.807675 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls podName:306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed nodeName:}" failed. No retries permitted until 2026-03-12 18:13:13.807664677 +0000 UTC m=+173.286790908 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-lqpbp" (UID: "306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed") : secret "node-tuning-operator-tls" not found Mar 12 18:13:09.908583 master-0 kubenswrapper[4051]: I0312 18:13:09.908447 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs\") pod \"multus-admission-controller-8d675b596-kcpg5\" (UID: \"875bdfaa-b0a4-4412-a477-c962844e7057\") " pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" Mar 12 18:13:09.908583 master-0 kubenswrapper[4051]: I0312 18:13:09.908581 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: \"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k" Mar 12 18:13:09.908753 master-0 kubenswrapper[4051]: E0312 18:13:09.908740 4051 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 12 18:13:09.908828 master-0 kubenswrapper[4051]: E0312 18:13:09.908797 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls podName:8ad05507-e242-4ff8-ae80-c16ff9ee68e2 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:13.908781601 +0000 UTC m=+173.387907832 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls") pod "dns-operator-589895fbb7-jqj5k" (UID: "8ad05507-e242-4ff8-ae80-c16ff9ee68e2") : secret "metrics-tls" not found Mar 12 18:13:09.908875 master-0 kubenswrapper[4051]: E0312 18:13:09.908866 4051 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 12 18:13:09.908908 master-0 kubenswrapper[4051]: E0312 18:13:09.908892 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs podName:875bdfaa-b0a4-4412-a477-c962844e7057 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:13.908884714 +0000 UTC m=+173.388010945 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs") pod "multus-admission-controller-8d675b596-kcpg5" (UID: "875bdfaa-b0a4-4412-a477-c962844e7057") : secret "multus-admission-controller-secret" not found Mar 12 18:13:12.164918 master-0 kubenswrapper[4051]: I0312 18:13:12.164420 4051 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:13.673314 master-0 kubenswrapper[4051]: I0312 18:13:13.673163 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:13:13.673314 master-0 kubenswrapper[4051]: I0312 18:13:13.673212 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" Mar 12 18:13:13.673314 master-0 kubenswrapper[4051]: I0312 18:13:13.673255 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert\") pod \"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" Mar 12 18:13:13.673830 master-0 kubenswrapper[4051]: E0312 18:13:13.673451 4051 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 12 18:13:13.673830 master-0 kubenswrapper[4051]: E0312 18:13:13.673497 4051 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 18:13:13.673830 master-0 kubenswrapper[4051]: E0312 18:13:13.673546 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls podName:d94dc349-c5cb-4f12-8e48-867030af4981 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.673527276 +0000 UTC m=+181.152653507 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls") pod "ingress-operator-677db989d6-4527l" (UID: "d94dc349-c5cb-4f12-8e48-867030af4981") : secret "metrics-tls" not found Mar 12 18:13:13.673830 master-0 kubenswrapper[4051]: E0312 18:13:13.673591 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert podName:d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.673569987 +0000 UTC m=+181.152696218 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert") pod "olm-operator-d64cfc9db-npt4r" (UID: "d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27") : secret "olm-operator-serving-cert" not found Mar 12 18:13:13.673830 master-0 kubenswrapper[4051]: E0312 18:13:13.673507 4051 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 12 18:13:13.673830 master-0 kubenswrapper[4051]: E0312 18:13:13.673648 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls podName:e22c7035-4b7a-48cb-9abb-db277b387842 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.673635939 +0000 UTC m=+181.152762170 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-l4krq" (UID: "e22c7035-4b7a-48cb-9abb-db277b387842") : secret "image-registry-operator-tls" not found Mar 12 18:13:13.674298 master-0 kubenswrapper[4051]: I0312 18:13:13.674101 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:13:13.674298 master-0 kubenswrapper[4051]: I0312 18:13:13.674149 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:13:13.674298 master-0 kubenswrapper[4051]: I0312 18:13:13.674180 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:13:13.674298 master-0 kubenswrapper[4051]: E0312 18:13:13.674185 4051 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 12 18:13:13.674298 master-0 kubenswrapper[4051]: I0312 18:13:13.674231 4051 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:13:13.674298 master-0 kubenswrapper[4051]: E0312 18:13:13.674247 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls podName:e94d098b-fbcc-4e85-b8ad-42f3a21c822c nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.674239044 +0000 UTC m=+181.153365275 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-fz79c" (UID: "e94d098b-fbcc-4e85-b8ad-42f3a21c822c") : secret "cluster-monitoring-operator-tls" not found Mar 12 18:13:13.674587 master-0 kubenswrapper[4051]: E0312 18:13:13.674325 4051 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 18:13:13.674587 master-0 kubenswrapper[4051]: E0312 18:13:13.674349 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert podName:47850839-bb4b-41e9-ac31-f1cabbb4926d nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.674340927 +0000 UTC m=+181.153467158 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert") pod "catalog-operator-7d9c49f57b-pslh7" (UID: "47850839-bb4b-41e9-ac31-f1cabbb4926d") : secret "catalog-operator-serving-cert" not found Mar 12 18:13:13.674587 master-0 kubenswrapper[4051]: E0312 18:13:13.674404 4051 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 18:13:13.674587 master-0 kubenswrapper[4051]: E0312 18:13:13.674425 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert podName:51eb717b-d11f-4bc3-8df6-deb51d5889f3 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.674418839 +0000 UTC m=+181.153545070 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-kwv7s" (UID: "51eb717b-d11f-4bc3-8df6-deb51d5889f3") : secret "package-server-manager-serving-cert" not found Mar 12 18:13:13.674587 master-0 kubenswrapper[4051]: E0312 18:13:13.674473 4051 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 18:13:13.674587 master-0 kubenswrapper[4051]: E0312 18:13:13.674498 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics podName:4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.674492511 +0000 UTC m=+181.153618742 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-clkx5" (UID: "4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64") : secret "marketplace-operator-metrics" not found Mar 12 18:13:13.875500 master-0 kubenswrapper[4051]: I0312 18:13:13.875454 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:13.875706 master-0 kubenswrapper[4051]: I0312 18:13:13.875542 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:13.875706 master-0 kubenswrapper[4051]: E0312 18:13:13.875677 4051 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 12 18:13:13.875791 master-0 kubenswrapper[4051]: E0312 18:13:13.875770 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls podName:306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.875747565 +0000 UTC m=+181.354873896 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-lqpbp" (UID: "306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed") : secret "node-tuning-operator-tls" not found Mar 12 18:13:13.875834 master-0 kubenswrapper[4051]: E0312 18:13:13.875811 4051 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 12 18:13:13.875873 master-0 kubenswrapper[4051]: E0312 18:13:13.875850 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert podName:306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.875839267 +0000 UTC m=+181.354965598 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-lqpbp" (UID: "306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed") : secret "performance-addon-operator-webhook-cert" not found Mar 12 18:13:13.977244 master-0 kubenswrapper[4051]: I0312 18:13:13.976631 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: \"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k" Mar 12 18:13:13.977244 master-0 kubenswrapper[4051]: I0312 18:13:13.976771 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs\") pod \"multus-admission-controller-8d675b596-kcpg5\" (UID: 
\"875bdfaa-b0a4-4412-a477-c962844e7057\") " pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" Mar 12 18:13:13.977244 master-0 kubenswrapper[4051]: E0312 18:13:13.976821 4051 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 12 18:13:13.977244 master-0 kubenswrapper[4051]: E0312 18:13:13.976912 4051 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 12 18:13:13.977244 master-0 kubenswrapper[4051]: E0312 18:13:13.976971 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls podName:8ad05507-e242-4ff8-ae80-c16ff9ee68e2 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.976949902 +0000 UTC m=+181.456076133 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls") pod "dns-operator-589895fbb7-jqj5k" (UID: "8ad05507-e242-4ff8-ae80-c16ff9ee68e2") : secret "metrics-tls" not found Mar 12 18:13:13.977244 master-0 kubenswrapper[4051]: E0312 18:13:13.977007 4051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs podName:875bdfaa-b0a4-4412-a477-c962844e7057 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.976981843 +0000 UTC m=+181.456108074 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs") pod "multus-admission-controller-8d675b596-kcpg5" (UID: "875bdfaa-b0a4-4412-a477-c962844e7057") : secret "multus-admission-controller-secret" not found Mar 12 18:13:15.758639 master-0 kubenswrapper[4051]: I0312 18:13:15.755372 4051 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptrtx\" (UniqueName: \"kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx\") pod \"network-check-target-cpthp\" (UID: \"33feec78-4592-4343-965b-aa1b7044fcf3\") " pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:13:15.777731 master-0 kubenswrapper[4051]: I0312 18:13:15.774717 4051 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptrtx\" (UniqueName: \"kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx\") pod \"network-check-target-cpthp\" (UID: \"33feec78-4592-4343-965b-aa1b7044fcf3\") " pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:13:15.912254 master-0 kubenswrapper[4051]: I0312 18:13:15.912200 4051 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:13:19.427589 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 12 18:13:19.454401 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 18:13:19.454724 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 12 18:13:19.456406 master-0 systemd[1]: kubelet.service: Consumed 9.710s CPU time. Mar 12 18:13:19.472467 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 12 18:13:19.582844 master-0 kubenswrapper[7337]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 18:13:19.582844 master-0 kubenswrapper[7337]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 12 18:13:19.582844 master-0 kubenswrapper[7337]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 18:13:19.583782 master-0 kubenswrapper[7337]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 18:13:19.583782 master-0 kubenswrapper[7337]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 12 18:13:19.583782 master-0 kubenswrapper[7337]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 18:13:19.583782 master-0 kubenswrapper[7337]: I0312 18:13:19.583104 7337 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 18:13:19.589307 master-0 kubenswrapper[7337]: W0312 18:13:19.589264 7337 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 18:13:19.589307 master-0 kubenswrapper[7337]: W0312 18:13:19.589302 7337 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 12 18:13:19.589395 master-0 kubenswrapper[7337]: W0312 18:13:19.589317 7337 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 12 18:13:19.589395 master-0 kubenswrapper[7337]: W0312 18:13:19.589328 7337 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 12 18:13:19.589395 master-0 kubenswrapper[7337]: W0312 18:13:19.589338 7337 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 18:13:19.589395 master-0 kubenswrapper[7337]: W0312 18:13:19.589348 7337 feature_gate.go:330] unrecognized feature gate: Example Mar 12 18:13:19.589395 master-0 kubenswrapper[7337]: W0312 18:13:19.589357 7337 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 18:13:19.589395 master-0 kubenswrapper[7337]: W0312 18:13:19.589366 7337 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 18:13:19.589395 master-0 kubenswrapper[7337]: W0312 18:13:19.589375 7337 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 12 18:13:19.589395 master-0 kubenswrapper[7337]: W0312 18:13:19.589383 7337 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 18:13:19.589395 master-0 kubenswrapper[7337]: W0312 18:13:19.589392 7337 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 18:13:19.589395 master-0 kubenswrapper[7337]: W0312 18:13:19.589402 7337 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 18:13:19.589675 master-0 kubenswrapper[7337]: W0312 18:13:19.589412 7337 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 18:13:19.589675 master-0 kubenswrapper[7337]: W0312 18:13:19.589421 7337 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 18:13:19.589675 master-0 kubenswrapper[7337]: W0312 18:13:19.589429 7337 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 18:13:19.589675 master-0 kubenswrapper[7337]: W0312 
18:13:19.589438 7337 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 12 18:13:19.589675 master-0 kubenswrapper[7337]: W0312 18:13:19.589446 7337 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 12 18:13:19.589675 master-0 kubenswrapper[7337]: W0312 18:13:19.589455 7337 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 12 18:13:19.589675 master-0 kubenswrapper[7337]: W0312 18:13:19.589464 7337 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 12 18:13:19.589675 master-0 kubenswrapper[7337]: W0312 18:13:19.589472 7337 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 18:13:19.589675 master-0 kubenswrapper[7337]: W0312 18:13:19.589481 7337 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 18:13:19.589675 master-0 kubenswrapper[7337]: W0312 18:13:19.589490 7337 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 18:13:19.589675 master-0 kubenswrapper[7337]: W0312 18:13:19.589498 7337 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 12 18:13:19.589675 master-0 kubenswrapper[7337]: W0312 18:13:19.589507 7337 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 18:13:19.589675 master-0 kubenswrapper[7337]: W0312 18:13:19.589579 7337 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 18:13:19.589675 master-0 kubenswrapper[7337]: W0312 18:13:19.589589 7337 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 18:13:19.589675 master-0 kubenswrapper[7337]: W0312 18:13:19.589598 7337 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 18:13:19.589675 master-0 kubenswrapper[7337]: W0312 18:13:19.589618 7337 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 12 18:13:19.589675 master-0 kubenswrapper[7337]: W0312 18:13:19.589628 
7337 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 18:13:19.589675 master-0 kubenswrapper[7337]: W0312 18:13:19.589637 7337 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 18:13:19.589675 master-0 kubenswrapper[7337]: W0312 18:13:19.589645 7337 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 18:13:19.589675 master-0 kubenswrapper[7337]: W0312 18:13:19.589654 7337 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 12 18:13:19.590288 master-0 kubenswrapper[7337]: W0312 18:13:19.589662 7337 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 18:13:19.590288 master-0 kubenswrapper[7337]: W0312 18:13:19.589670 7337 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 12 18:13:19.590288 master-0 kubenswrapper[7337]: W0312 18:13:19.589679 7337 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 12 18:13:19.590288 master-0 kubenswrapper[7337]: W0312 18:13:19.589689 7337 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 12 18:13:19.590288 master-0 kubenswrapper[7337]: W0312 18:13:19.589698 7337 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 18:13:19.590288 master-0 kubenswrapper[7337]: W0312 18:13:19.589711 7337 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 12 18:13:19.590288 master-0 kubenswrapper[7337]: W0312 18:13:19.589722 7337 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 18:13:19.590288 master-0 kubenswrapper[7337]: W0312 18:13:19.589731 7337 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 12 18:13:19.590288 master-0 kubenswrapper[7337]: W0312 18:13:19.589740 7337 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 12 18:13:19.590288 master-0 kubenswrapper[7337]: W0312 18:13:19.589753 7337 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 12 18:13:19.590288 master-0 kubenswrapper[7337]: W0312 18:13:19.589764 7337 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 18:13:19.590288 master-0 kubenswrapper[7337]: W0312 18:13:19.589774 7337 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 18:13:19.590288 master-0 kubenswrapper[7337]: W0312 18:13:19.589783 7337 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 18:13:19.590288 master-0 kubenswrapper[7337]: W0312 18:13:19.589793 7337 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 12 18:13:19.590288 master-0 kubenswrapper[7337]: W0312 18:13:19.589805 7337 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 18:13:19.590288 master-0 kubenswrapper[7337]: W0312 18:13:19.589814 7337 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 18:13:19.590288 master-0 kubenswrapper[7337]: W0312 18:13:19.589823 7337 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 12 18:13:19.590288 master-0 kubenswrapper[7337]: W0312 18:13:19.589832 7337 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 18:13:19.590288 master-0 kubenswrapper[7337]: W0312 18:13:19.589840 7337 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 18:13:19.590288 
master-0 kubenswrapper[7337]: W0312 18:13:19.589849 7337 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 18:13:19.590793 master-0 kubenswrapper[7337]: W0312 18:13:19.589858 7337 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 12 18:13:19.590793 master-0 kubenswrapper[7337]: W0312 18:13:19.589867 7337 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 18:13:19.590793 master-0 kubenswrapper[7337]: W0312 18:13:19.589875 7337 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 18:13:19.590793 master-0 kubenswrapper[7337]: W0312 18:13:19.589883 7337 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 18:13:19.590793 master-0 kubenswrapper[7337]: W0312 18:13:19.589892 7337 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 12 18:13:19.590793 master-0 kubenswrapper[7337]: W0312 18:13:19.589901 7337 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 18:13:19.590793 master-0 kubenswrapper[7337]: W0312 18:13:19.589912 7337 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 18:13:19.590793 master-0 kubenswrapper[7337]: W0312 18:13:19.589924 7337 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 12 18:13:19.590793 master-0 kubenswrapper[7337]: W0312 18:13:19.589935 7337 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 18:13:19.590793 master-0 kubenswrapper[7337]: W0312 18:13:19.589944 7337 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 18:13:19.590793 master-0 kubenswrapper[7337]: W0312 18:13:19.589954 7337 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 18:13:19.590793 master-0 kubenswrapper[7337]: W0312 18:13:19.589963 7337 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 18:13:19.590793 master-0 kubenswrapper[7337]: W0312 18:13:19.589971 7337 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 18:13:19.590793 master-0 kubenswrapper[7337]: W0312 18:13:19.589982 7337 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 18:13:19.590793 master-0 kubenswrapper[7337]: W0312 18:13:19.589991 7337 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 18:13:19.590793 master-0 kubenswrapper[7337]: W0312 18:13:19.589999 7337 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 18:13:19.590793 master-0 kubenswrapper[7337]: W0312 18:13:19.590008 7337 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 18:13:19.590793 master-0 kubenswrapper[7337]: W0312 18:13:19.590016 7337 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 18:13:19.590793 master-0 kubenswrapper[7337]: W0312 18:13:19.590024 7337 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 18:13:19.590793 master-0 kubenswrapper[7337]: W0312 18:13:19.590033 7337 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590193 7337 flags.go:64] FLAG: --address="0.0.0.0"
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590210 7337 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590226 7337 flags.go:64] FLAG: --anonymous-auth="true"
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590239 7337 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590251 7337 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590263 7337 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590285 7337 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590298 7337 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590309 7337 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590320 7337 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590330 7337 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590341 7337 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590352 7337 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590362 7337 flags.go:64] FLAG: --cgroup-root=""
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590371 7337 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590381 7337 flags.go:64] FLAG: --client-ca-file=""
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590391 7337 flags.go:64] FLAG: --cloud-config=""
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590401 7337 flags.go:64] FLAG: --cloud-provider=""
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590411 7337 flags.go:64] FLAG: --cluster-dns="[]"
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590423 7337 flags.go:64] FLAG: --cluster-domain=""
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590433 7337 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590443 7337 flags.go:64] FLAG: --config-dir=""
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590453 7337 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590464 7337 flags.go:64] FLAG: --container-log-max-files="5"
Mar 12 18:13:19.591328 master-0 kubenswrapper[7337]: I0312 18:13:19.590477 7337 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590488 7337 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590498 7337 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590509 7337 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590544 7337 flags.go:64] FLAG: --contention-profiling="false"
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590554 7337 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590564 7337 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590574 7337 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590584 7337 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590596 7337 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590607 7337 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590711 7337 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590725 7337 flags.go:64] FLAG: --enable-load-reader="false"
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590736 7337 flags.go:64] FLAG: --enable-server="true"
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590745 7337 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590761 7337 flags.go:64] FLAG: --event-burst="100"
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590772 7337 flags.go:64] FLAG: --event-qps="50"
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590782 7337 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590792 7337 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590803 7337 flags.go:64] FLAG: --eviction-hard=""
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590817 7337 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590827 7337 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590837 7337 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590847 7337 flags.go:64] FLAG: --eviction-soft=""
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590857 7337 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 12 18:13:19.591956 master-0 kubenswrapper[7337]: I0312 18:13:19.590867 7337 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.590877 7337 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.590887 7337 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.590897 7337 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.590907 7337 flags.go:64] FLAG: --fail-swap-on="true"
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.590916 7337 flags.go:64] FLAG: --feature-gates=""
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.590928 7337 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.590939 7337 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.590949 7337 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.590961 7337 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.590971 7337 flags.go:64] FLAG: --healthz-port="10248"
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.590981 7337 flags.go:64] FLAG: --help="false"
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.590991 7337 flags.go:64] FLAG: --hostname-override=""
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.591001 7337 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.591011 7337 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.591021 7337 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.591031 7337 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.591041 7337 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.591051 7337 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.591060 7337 flags.go:64] FLAG: --image-service-endpoint=""
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.591070 7337 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.591080 7337 flags.go:64] FLAG: --kube-api-burst="100"
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.591089 7337 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.591100 7337 flags.go:64] FLAG: --kube-api-qps="50"
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.591110 7337 flags.go:64] FLAG: --kube-reserved=""
Mar 12 18:13:19.592588 master-0 kubenswrapper[7337]: I0312 18:13:19.591121 7337 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591132 7337 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591142 7337 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591151 7337 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591164 7337 flags.go:64] FLAG: --lock-file=""
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591174 7337 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591184 7337 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591194 7337 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591209 7337 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591219 7337 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591229 7337 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591239 7337 flags.go:64] FLAG: --logging-format="text"
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591249 7337 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591259 7337 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591269 7337 flags.go:64] FLAG: --manifest-url=""
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591279 7337 flags.go:64] FLAG: --manifest-url-header=""
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591291 7337 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591302 7337 flags.go:64] FLAG: --max-open-files="1000000"
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591313 7337 flags.go:64] FLAG: --max-pods="110"
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591323 7337 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591334 7337 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591345 7337 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591355 7337 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591365 7337 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 12 18:13:19.593180 master-0 kubenswrapper[7337]: I0312 18:13:19.591375 7337 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591385 7337 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591407 7337 flags.go:64] FLAG: --node-status-max-images="50"
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591417 7337 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591427 7337 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591437 7337 flags.go:64] FLAG: --pod-cidr=""
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591446 7337 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591462 7337 flags.go:64] FLAG: --pod-manifest-path=""
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591471 7337 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591481 7337 flags.go:64] FLAG: --pods-per-core="0"
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591491 7337 flags.go:64] FLAG: --port="10250"
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591501 7337 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591512 7337 flags.go:64] FLAG: --provider-id=""
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591549 7337 flags.go:64] FLAG: --qos-reserved=""
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591560 7337 flags.go:64] FLAG: --read-only-port="10255"
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591572 7337 flags.go:64] FLAG: --register-node="true"
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591582 7337 flags.go:64] FLAG: --register-schedulable="true"
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591592 7337 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591608 7337 flags.go:64] FLAG: --registry-burst="10"
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591617 7337 flags.go:64] FLAG: --registry-qps="5"
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591627 7337 flags.go:64] FLAG: --reserved-cpus=""
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591636 7337 flags.go:64] FLAG: --reserved-memory=""
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591649 7337 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591660 7337 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 12 18:13:19.593747 master-0 kubenswrapper[7337]: I0312 18:13:19.591670 7337 flags.go:64] FLAG: --rotate-certificates="false"
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591679 7337 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591689 7337 flags.go:64] FLAG: --runonce="false"
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591699 7337 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591709 7337 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591719 7337 flags.go:64] FLAG: --seccomp-default="false"
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591729 7337 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591739 7337 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591749 7337 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591770 7337 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591780 7337 flags.go:64] FLAG: --storage-driver-password="root"
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591791 7337 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591800 7337 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591810 7337 flags.go:64] FLAG: --storage-driver-user="root"
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591820 7337 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591830 7337 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591840 7337 flags.go:64] FLAG: --system-cgroups=""
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591850 7337 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591865 7337 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591875 7337 flags.go:64] FLAG: --tls-cert-file=""
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591885 7337 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591897 7337 flags.go:64] FLAG: --tls-min-version=""
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591907 7337 flags.go:64] FLAG: --tls-private-key-file=""
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591917 7337 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591926 7337 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 12 18:13:19.594343 master-0 kubenswrapper[7337]: I0312 18:13:19.591938 7337 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 12 18:13:19.594919 master-0 kubenswrapper[7337]: I0312 18:13:19.591948 7337 flags.go:64] FLAG: --v="2"
Mar 12 18:13:19.594919 master-0 kubenswrapper[7337]: I0312 18:13:19.591961 7337 flags.go:64] FLAG: --version="false"
Mar 12 18:13:19.594919 master-0 kubenswrapper[7337]: I0312 18:13:19.591973 7337 flags.go:64] FLAG: --vmodule=""
Mar 12 18:13:19.594919 master-0 kubenswrapper[7337]: I0312 18:13:19.591986 7337 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 12 18:13:19.594919 master-0 kubenswrapper[7337]: I0312 18:13:19.591997 7337 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 12 18:13:19.594919 master-0 kubenswrapper[7337]: W0312 18:13:19.592251 7337 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 18:13:19.594919 master-0 kubenswrapper[7337]: W0312 18:13:19.592265 7337 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 18:13:19.594919 master-0 kubenswrapper[7337]: W0312 18:13:19.592277 7337 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 18:13:19.594919 master-0 kubenswrapper[7337]: W0312 18:13:19.592286 7337 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 18:13:19.594919 master-0 kubenswrapper[7337]: W0312 18:13:19.592295 7337 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 18:13:19.594919 master-0 kubenswrapper[7337]: W0312 18:13:19.592305 7337 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 18:13:19.594919 master-0 kubenswrapper[7337]: W0312 18:13:19.592314 7337 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 18:13:19.594919 master-0 kubenswrapper[7337]: W0312 18:13:19.592326 7337 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 18:13:19.594919 master-0 kubenswrapper[7337]: W0312 18:13:19.592337 7337 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 18:13:19.594919 master-0 kubenswrapper[7337]: W0312 18:13:19.592348 7337 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 18:13:19.594919 master-0 kubenswrapper[7337]: W0312 18:13:19.592358 7337 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 18:13:19.594919 master-0 kubenswrapper[7337]: W0312 18:13:19.592368 7337 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 18:13:19.594919 master-0 kubenswrapper[7337]: W0312 18:13:19.592377 7337 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 18:13:19.594919 master-0 kubenswrapper[7337]: W0312 18:13:19.592387 7337 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 18:13:19.594919 master-0 kubenswrapper[7337]: W0312 18:13:19.592396 7337 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 18:13:19.594919 master-0 kubenswrapper[7337]: W0312 18:13:19.592406 7337 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 18:13:19.595410 master-0 kubenswrapper[7337]: W0312 18:13:19.592415 7337 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 18:13:19.595410 master-0 kubenswrapper[7337]: W0312 18:13:19.592425 7337 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 18:13:19.595410 master-0 kubenswrapper[7337]: W0312 18:13:19.592434 7337 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 18:13:19.595410 master-0 kubenswrapper[7337]: W0312 18:13:19.592442 7337 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 18:13:19.595410 master-0 kubenswrapper[7337]: W0312 18:13:19.592452 7337 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 18:13:19.595410 master-0 kubenswrapper[7337]: W0312 18:13:19.592461 7337 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 18:13:19.595410 master-0 kubenswrapper[7337]: W0312 18:13:19.592469 7337 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 18:13:19.595410 master-0 kubenswrapper[7337]: W0312 18:13:19.592478 7337 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 18:13:19.595410 master-0 kubenswrapper[7337]: W0312 18:13:19.592487 7337 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 18:13:19.595410 master-0 kubenswrapper[7337]: W0312 18:13:19.592496 7337 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 18:13:19.595410 master-0 kubenswrapper[7337]: W0312 18:13:19.592504 7337 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 18:13:19.595410 master-0 kubenswrapper[7337]: W0312 18:13:19.592536 7337 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 18:13:19.595410 master-0 kubenswrapper[7337]: W0312 18:13:19.592545 7337 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 18:13:19.595410 master-0 kubenswrapper[7337]: W0312 18:13:19.592554 7337 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 18:13:19.595410 master-0 kubenswrapper[7337]: W0312 18:13:19.592564 7337 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 18:13:19.595410 master-0 kubenswrapper[7337]: W0312 18:13:19.592574 7337 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 18:13:19.595410 master-0 kubenswrapper[7337]: W0312 18:13:19.592582 7337 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 18:13:19.595410 master-0 kubenswrapper[7337]: W0312 18:13:19.592591 7337 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 18:13:19.595410 master-0 kubenswrapper[7337]: W0312 18:13:19.592599 7337 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 18:13:19.595410 master-0 kubenswrapper[7337]: W0312 18:13:19.592609 7337 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 18:13:19.595932 master-0 kubenswrapper[7337]: W0312 18:13:19.592670 7337 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 18:13:19.595932 master-0 kubenswrapper[7337]: W0312 18:13:19.592682 7337 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 18:13:19.595932 master-0 kubenswrapper[7337]: W0312 18:13:19.592690 7337 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 18:13:19.595932 master-0 kubenswrapper[7337]: W0312 18:13:19.592699 7337 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 18:13:19.595932 master-0 kubenswrapper[7337]: W0312 18:13:19.592708 7337 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 18:13:19.595932 master-0 kubenswrapper[7337]: W0312 18:13:19.592717 7337 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 18:13:19.595932 master-0 kubenswrapper[7337]: W0312 18:13:19.592725 7337 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 18:13:19.595932 master-0 kubenswrapper[7337]: W0312 18:13:19.592734 7337 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 18:13:19.595932 master-0 kubenswrapper[7337]: W0312 18:13:19.592742 7337 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 18:13:19.595932 master-0 kubenswrapper[7337]: W0312 18:13:19.592792 7337 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 18:13:19.595932 master-0 kubenswrapper[7337]: W0312 18:13:19.592802 7337 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 18:13:19.595932 master-0 kubenswrapper[7337]: W0312 18:13:19.592812 7337 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 18:13:19.595932 master-0 kubenswrapper[7337]: W0312 18:13:19.592821 7337 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 18:13:19.595932 master-0 kubenswrapper[7337]: W0312 18:13:19.592833 7337 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 18:13:19.595932 master-0 kubenswrapper[7337]: W0312 18:13:19.592844 7337 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 18:13:19.595932 master-0 kubenswrapper[7337]: W0312 18:13:19.592856 7337 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 18:13:19.595932 master-0 kubenswrapper[7337]: W0312 18:13:19.592865 7337 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 18:13:19.595932 master-0 kubenswrapper[7337]: W0312 18:13:19.592874 7337 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 18:13:19.595932 master-0 kubenswrapper[7337]: W0312 18:13:19.592883 7337 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 18:13:19.596436 master-0 kubenswrapper[7337]: W0312 18:13:19.592894 7337 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 18:13:19.596436 master-0 kubenswrapper[7337]: W0312 18:13:19.592906 7337 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 18:13:19.596436 master-0 kubenswrapper[7337]: W0312 18:13:19.592916 7337 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 18:13:19.596436 master-0 kubenswrapper[7337]: W0312 18:13:19.592924 7337 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 18:13:19.596436 master-0 kubenswrapper[7337]: W0312 18:13:19.592932 7337 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 18:13:19.596436 master-0 kubenswrapper[7337]: W0312 18:13:19.592941 7337 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 18:13:19.596436 master-0 kubenswrapper[7337]: W0312 18:13:19.592950 7337 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 18:13:19.596436 master-0 kubenswrapper[7337]: W0312 18:13:19.592958 7337 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 18:13:19.596436 master-0 kubenswrapper[7337]: W0312 18:13:19.592967 7337 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 18:13:19.596436 master-0 kubenswrapper[7337]: W0312 18:13:19.592976 7337 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 18:13:19.596436 master-0 kubenswrapper[7337]: W0312 18:13:19.592985 7337 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 18:13:19.596436 master-0 kubenswrapper[7337]: W0312 18:13:19.592995 7337 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 18:13:19.596436 master-0 kubenswrapper[7337]: W0312 18:13:19.593005 7337 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 18:13:19.596436 master-0 kubenswrapper[7337]: W0312 18:13:19.593013 7337 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 18:13:19.596436 master-0 kubenswrapper[7337]: W0312 18:13:19.593022 7337 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 18:13:19.596436 master-0 kubenswrapper[7337]: W0312 18:13:19.593031 7337 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 18:13:19.596436 master-0 kubenswrapper[7337]: W0312 18:13:19.593039 7337 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 18:13:19.596906 master-0 kubenswrapper[7337]: I0312 18:13:19.593067 7337 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 18:13:19.599566 master-0 kubenswrapper[7337]: I0312 18:13:19.599526 7337 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 12 18:13:19.599566 master-0 kubenswrapper[7337]: I0312 18:13:19.599560 7337 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 12 18:13:19.599688 master-0 kubenswrapper[7337]: W0312 18:13:19.599667 7337 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 18:13:19.599688 master-0 kubenswrapper[7337]: W0312 18:13:19.599680 7337 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 18:13:19.599688 master-0 kubenswrapper[7337]: W0312 18:13:19.599684 7337 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 18:13:19.599688 master-0 kubenswrapper[7337]: W0312 18:13:19.599689 7337 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 18:13:19.599809 master-0 kubenswrapper[7337]: W0312 18:13:19.599693 7337 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 18:13:19.599809 master-0 kubenswrapper[7337]: W0312 18:13:19.599698 7337 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 18:13:19.599809 master-0 kubenswrapper[7337]: W0312 18:13:19.599702 7337 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 18:13:19.599809 master-0 kubenswrapper[7337]: W0312 18:13:19.599706 7337 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 18:13:19.599809 master-0 kubenswrapper[7337]: W0312 18:13:19.599710 7337 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 18:13:19.599809 master-0 kubenswrapper[7337]: W0312 18:13:19.599713 7337 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 18:13:19.599809 master-0 kubenswrapper[7337]: W0312 18:13:19.599717 7337 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 18:13:19.599809 master-0 kubenswrapper[7337]: W0312 18:13:19.599720 7337 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 18:13:19.599809 master-0 kubenswrapper[7337]: W0312 18:13:19.599724 7337 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 18:13:19.599809 master-0 kubenswrapper[7337]: W0312 18:13:19.599728 7337 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 18:13:19.599809 master-0 kubenswrapper[7337]: W0312 18:13:19.599732 7337 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 18:13:19.599809 master-0 kubenswrapper[7337]: W0312 18:13:19.599737 7337 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 18:13:19.599809 master-0 kubenswrapper[7337]: W0312 18:13:19.599743 7337 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 18:13:19.599809 master-0 kubenswrapper[7337]: W0312 18:13:19.599747 7337 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 18:13:19.599809 master-0 kubenswrapper[7337]: W0312 18:13:19.599752 7337 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 18:13:19.599809 master-0 kubenswrapper[7337]: W0312 18:13:19.599756 7337 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 18:13:19.599809 master-0 kubenswrapper[7337]: W0312 18:13:19.599760 7337 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 18:13:19.599809 master-0 kubenswrapper[7337]: W0312 18:13:19.599764 7337 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 18:13:19.599809 master-0 kubenswrapper[7337]: W0312 18:13:19.599768 7337 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 18:13:19.600279 master-0 kubenswrapper[7337]: W0312 18:13:19.599772 7337 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 18:13:19.600279 master-0 kubenswrapper[7337]: W0312 18:13:19.599777 7337 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 18:13:19.600279 master-0 kubenswrapper[7337]: W0312 18:13:19.599781 7337 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 18:13:19.600279 master-0 kubenswrapper[7337]: W0312 18:13:19.599785 7337 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 18:13:19.600279 master-0 kubenswrapper[7337]: W0312 18:13:19.599789 7337 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 18:13:19.600279 master-0 kubenswrapper[7337]: W0312 18:13:19.599792 7337 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 18:13:19.600279 master-0 kubenswrapper[7337]: W0312 18:13:19.599796 7337 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 18:13:19.600279 master-0 kubenswrapper[7337]: W0312 18:13:19.599800 7337 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 18:13:19.600279 master-0 kubenswrapper[7337]: W0312 18:13:19.599803 7337 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 18:13:19.600279 master-0 kubenswrapper[7337]: W0312 18:13:19.599807 7337 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 18:13:19.600279 master-0 kubenswrapper[7337]: W0312 18:13:19.599811 7337 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 18:13:19.600279 master-0 kubenswrapper[7337]: W0312 18:13:19.599817 7337 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 18:13:19.600279 master-0 kubenswrapper[7337]: W0312 18:13:19.599821 7337 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 18:13:19.600279 master-0 kubenswrapper[7337]: W0312 18:13:19.599825 7337 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 18:13:19.600279 master-0 kubenswrapper[7337]: W0312 18:13:19.599830 7337 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 18:13:19.600279 master-0 kubenswrapper[7337]: W0312 18:13:19.599834 7337 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 18:13:19.600279 master-0 kubenswrapper[7337]: W0312 18:13:19.599838 7337 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 18:13:19.600279 master-0 kubenswrapper[7337]: W0312 18:13:19.599842 7337 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 18:13:19.600279 master-0 kubenswrapper[7337]: W0312 18:13:19.599845 7337 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 18:13:19.600279 master-0 kubenswrapper[7337]: W0312 18:13:19.599849 7337 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 18:13:19.600854 master-0 kubenswrapper[7337]: W0312 18:13:19.599853 7337 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 18:13:19.600854 master-0 kubenswrapper[7337]: W0312 18:13:19.599856 7337 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 18:13:19.600854 master-0 kubenswrapper[7337]: W0312 18:13:19.599860 7337 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 18:13:19.600854 master-0 kubenswrapper[7337]: W0312 18:13:19.599865 7337 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 18:13:19.600854 master-0 kubenswrapper[7337]: W0312 18:13:19.599869 7337 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 18:13:19.600854 master-0 kubenswrapper[7337]: W0312 18:13:19.599873 7337 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 18:13:19.600854 master-0 kubenswrapper[7337]: W0312 18:13:19.599877 7337 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 18:13:19.600854 master-0 kubenswrapper[7337]: W0312 18:13:19.599881 7337 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 18:13:19.600854 master-0 kubenswrapper[7337]: W0312 18:13:19.599884 7337 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 18:13:19.600854 master-0 kubenswrapper[7337]: W0312 18:13:19.599888 7337 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 18:13:19.600854 master-0 kubenswrapper[7337]: W0312 18:13:19.599892 7337 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 18:13:19.600854 master-0 kubenswrapper[7337]: W0312 18:13:19.599895 7337 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 18:13:19.600854 master-0 kubenswrapper[7337]: W0312 18:13:19.599899 7337 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 18:13:19.600854 master-0 kubenswrapper[7337]: W0312 18:13:19.599903 7337 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 18:13:19.600854 master-0 kubenswrapper[7337]: W0312 18:13:19.599907 7337 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 18:13:19.600854 master-0 kubenswrapper[7337]: W0312 18:13:19.599912 7337 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 18:13:19.600854 master-0 kubenswrapper[7337]: W0312 18:13:19.599917 7337 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 18:13:19.600854 master-0 kubenswrapper[7337]: W0312 18:13:19.599921 7337 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 18:13:19.600854 master-0 kubenswrapper[7337]: W0312 18:13:19.599925 7337 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 18:13:19.601349 master-0 kubenswrapper[7337]: W0312 18:13:19.599929 7337 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 18:13:19.601349 master-0 kubenswrapper[7337]: W0312 18:13:19.599932 7337 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 18:13:19.601349 master-0 kubenswrapper[7337]: W0312 18:13:19.599936 7337 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 18:13:19.601349 master-0 kubenswrapper[7337]: W0312 18:13:19.599941 7337 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 18:13:19.601349 master-0 kubenswrapper[7337]: W0312 18:13:19.599944 7337 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 18:13:19.601349 master-0 kubenswrapper[7337]: W0312 18:13:19.599948 7337 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 18:13:19.601349 master-0 kubenswrapper[7337]: W0312 18:13:19.599952 7337 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 18:13:19.601349 master-0 kubenswrapper[7337]: W0312 18:13:19.599956 7337 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 18:13:19.601349 master-0 kubenswrapper[7337]: W0312 18:13:19.599960 7337 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 18:13:19.601349 master-0 kubenswrapper[7337]: W0312 18:13:19.599964 7337 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 18:13:19.601349 master-0 kubenswrapper[7337]: I0312 18:13:19.599970 7337 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 18:13:19.601349 master-0 kubenswrapper[7337]: W0312 18:13:19.600110 7337 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 18:13:19.601349 master-0 kubenswrapper[7337]: W0312 18:13:19.600119 7337 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 18:13:19.601349 master-0 kubenswrapper[7337]: W0312 18:13:19.600123 7337 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 18:13:19.601349 master-0 kubenswrapper[7337]: W0312 18:13:19.600127 7337 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 18:13:19.601739 master-0 kubenswrapper[7337]: W0312 18:13:19.600131 7337 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 18:13:19.601739 master-0 kubenswrapper[7337]: W0312 18:13:19.600136 7337 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 18:13:19.601739 master-0 kubenswrapper[7337]: W0312 18:13:19.600142 7337 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 18:13:19.601739 master-0 kubenswrapper[7337]: W0312 18:13:19.600146 7337 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 18:13:19.601739 master-0 kubenswrapper[7337]: W0312 18:13:19.600149 7337 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 18:13:19.601739 master-0 kubenswrapper[7337]: W0312 18:13:19.600153 7337 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 18:13:19.601739 master-0 kubenswrapper[7337]: W0312 18:13:19.600157 7337 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 18:13:19.601739 master-0 kubenswrapper[7337]: W0312 18:13:19.600161 7337 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 18:13:19.601739 master-0 kubenswrapper[7337]: W0312 18:13:19.600164 7337 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 18:13:19.601739 master-0 kubenswrapper[7337]: W0312 18:13:19.600168 7337 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 18:13:19.601739 master-0 kubenswrapper[7337]: W0312 18:13:19.600171 7337 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 18:13:19.601739 master-0 kubenswrapper[7337]: W0312 18:13:19.600175 7337 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 18:13:19.601739 master-0 kubenswrapper[7337]: W0312 18:13:19.600180 7337 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 18:13:19.601739 master-0 kubenswrapper[7337]: W0312 18:13:19.600184 7337 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 18:13:19.601739 master-0 kubenswrapper[7337]: W0312 18:13:19.600188 7337 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 18:13:19.601739 master-0 kubenswrapper[7337]: W0312 18:13:19.600191 7337 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 18:13:19.601739 master-0 kubenswrapper[7337]: W0312 18:13:19.600195 7337 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 18:13:19.601739 master-0 kubenswrapper[7337]: W0312 18:13:19.600198 7337 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 18:13:19.601739 master-0 kubenswrapper[7337]: W0312 18:13:19.600202 7337 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 18:13:19.602272 master-0 kubenswrapper[7337]: W0312 18:13:19.600205 7337 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 18:13:19.602272 master-0 kubenswrapper[7337]: W0312 18:13:19.600209 7337 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 18:13:19.602272 master-0 kubenswrapper[7337]: W0312 18:13:19.600212 7337 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 18:13:19.602272 master-0 kubenswrapper[7337]: W0312 18:13:19.600216 7337 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 18:13:19.602272 master-0 kubenswrapper[7337]: W0312 18:13:19.600220 7337 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 18:13:19.602272 master-0 kubenswrapper[7337]: W0312 18:13:19.600225 7337 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 18:13:19.602272 master-0 kubenswrapper[7337]: W0312 18:13:19.600231 7337 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 18:13:19.602272 master-0 kubenswrapper[7337]: W0312 18:13:19.600235 7337 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 18:13:19.602272 master-0 kubenswrapper[7337]: W0312 18:13:19.600240 7337 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 18:13:19.602272 master-0 kubenswrapper[7337]: W0312 18:13:19.600244 7337 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 18:13:19.602272 master-0 kubenswrapper[7337]: W0312 18:13:19.600249 7337 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 18:13:19.602272 master-0 kubenswrapper[7337]: W0312 18:13:19.600253 7337 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 18:13:19.602272 master-0 kubenswrapper[7337]: W0312 18:13:19.600256 7337 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 18:13:19.602272 master-0 kubenswrapper[7337]: W0312 18:13:19.600261 7337 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
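The same `unrecognized feature gate` warnings repeat throughout this log because the gate list is parsed several times during startup. When triaging, it helps to collapse the repeats into a tally of distinct gate names. A small sketch, assuming only the `feature_gate.go:330] unrecognized feature gate: <Name>` message shape seen above (the shortened sample text is hypothetical):

```python
import re
from collections import Counter

def tally_unrecognized_gates(log_text: str) -> Counter:
    """Count each 'unrecognized feature gate: <Name>' warning in a journal dump."""
    return Counter(re.findall(r"unrecognized feature gate: (\w+)", log_text))

# Hypothetical shortened sample in the same format as the warnings above.
sample = ("W0312 feature_gate.go:330] unrecognized feature gate: GatewayAPI\n"
          "W0312 feature_gate.go:330] unrecognized feature gate: GatewayAPI\n"
          "W0312 feature_gate.go:330] unrecognized feature gate: NewOLM\n")

for name, count in tally_unrecognized_gates(sample).most_common():
    print(name, count)
```

Run against the full journal output (`journalctl -u kubelet | python tally.py`-style piping), this turns hundreds of warning lines into a short per-gate summary.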
Mar 12 18:13:19.602272 master-0 kubenswrapper[7337]: W0312 18:13:19.600265 7337 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 18:13:19.602272 master-0 kubenswrapper[7337]: W0312 18:13:19.600269 7337 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 18:13:19.602272 master-0 kubenswrapper[7337]: W0312 18:13:19.600273 7337 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 18:13:19.602272 master-0 kubenswrapper[7337]: W0312 18:13:19.600277 7337 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 18:13:19.602272 master-0 kubenswrapper[7337]: W0312 18:13:19.600281 7337 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 18:13:19.602798 master-0 kubenswrapper[7337]: W0312 18:13:19.600285 7337 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 18:13:19.602798 master-0 kubenswrapper[7337]: W0312 18:13:19.600288 7337 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 18:13:19.602798 master-0 kubenswrapper[7337]: W0312 18:13:19.600292 7337 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 18:13:19.602798 master-0 kubenswrapper[7337]: W0312 18:13:19.600296 7337 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 18:13:19.602798 master-0 kubenswrapper[7337]: W0312 18:13:19.600300 7337 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 18:13:19.602798 master-0 kubenswrapper[7337]: W0312 18:13:19.600304 7337 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 18:13:19.602798 master-0 kubenswrapper[7337]: W0312 18:13:19.600308 7337 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 18:13:19.602798 master-0 kubenswrapper[7337]: W0312 18:13:19.600311 7337 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 18:13:19.602798 master-0 kubenswrapper[7337]: W0312 18:13:19.600317 7337 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 18:13:19.602798 master-0 kubenswrapper[7337]: W0312 18:13:19.600321 7337 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 18:13:19.602798 master-0 kubenswrapper[7337]: W0312 18:13:19.600325 7337 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 18:13:19.602798 master-0 kubenswrapper[7337]: W0312 18:13:19.600330 7337 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 18:13:19.602798 master-0 kubenswrapper[7337]: W0312 18:13:19.600333 7337 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 18:13:19.602798 master-0 kubenswrapper[7337]: W0312 18:13:19.600337 7337 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 18:13:19.602798 master-0 kubenswrapper[7337]: W0312 18:13:19.600341 7337 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 18:13:19.602798 master-0 kubenswrapper[7337]: W0312 18:13:19.600345 7337 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 18:13:19.602798 master-0 kubenswrapper[7337]: W0312 18:13:19.600349 7337 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 18:13:19.602798 master-0 kubenswrapper[7337]: W0312 18:13:19.600352 7337 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 18:13:19.602798 master-0 kubenswrapper[7337]: W0312 18:13:19.600356 7337 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 18:13:19.602798 master-0 kubenswrapper[7337]: W0312 18:13:19.600360 7337 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 18:13:19.603275 master-0 kubenswrapper[7337]: W0312 18:13:19.600363 7337 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 18:13:19.603275 master-0 kubenswrapper[7337]: W0312 18:13:19.600367 7337 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 18:13:19.603275 master-0 kubenswrapper[7337]: W0312 18:13:19.600370 7337 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 18:13:19.603275 master-0 kubenswrapper[7337]: W0312 18:13:19.600374 7337 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 18:13:19.603275 master-0 kubenswrapper[7337]: W0312 18:13:19.600378 7337 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 18:13:19.603275 master-0 kubenswrapper[7337]: W0312 18:13:19.600381 7337 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 18:13:19.603275 master-0 kubenswrapper[7337]: W0312 18:13:19.600385 7337 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 18:13:19.603275 master-0 kubenswrapper[7337]: W0312 18:13:19.600389 7337 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 18:13:19.603275 master-0 kubenswrapper[7337]: W0312 18:13:19.600393 7337 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 18:13:19.603275 master-0 kubenswrapper[7337]: W0312 18:13:19.600397 7337 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 18:13:19.603275 master-0 kubenswrapper[7337]: I0312 18:13:19.600403 7337 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 18:13:19.603275 master-0 kubenswrapper[7337]: I0312 18:13:19.600564 7337 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 12 18:13:19.603275 master-0 kubenswrapper[7337]: I0312 18:13:19.602002 7337 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 12 18:13:19.603275 master-0 kubenswrapper[7337]: I0312 18:13:19.602057 7337 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 12 18:13:19.603275 master-0 kubenswrapper[7337]: I0312 18:13:19.602225 7337 server.go:997] "Starting client certificate rotation"
Mar 12 18:13:19.603636 master-0 kubenswrapper[7337]: I0312 18:13:19.602234 7337 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 12 18:13:19.603636 master-0 kubenswrapper[7337]: I0312 18:13:19.602750 7337 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 12 18:13:19.603925 master-0 kubenswrapper[7337]: I0312 18:13:19.603818 7337 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 12 18:13:19.604077 master-0 kubenswrapper[7337]: I0312 18:13:19.603930 7337 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-13 18:02:20 +0000 UTC, rotation deadline is 2026-03-13 13:50:23.858394669 +0000 UTC
Mar 12 18:13:19.604077 master-0 kubenswrapper[7337]: I0312 18:13:19.604049 7337 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h37m4.254349915s for next certificate rotation
Mar 12 18:13:19.608583 master-0 kubenswrapper[7337]: I0312 18:13:19.608549 7337 log.go:25] "Validated CRI v1 runtime API"
Mar 12 18:13:19.611425 master-0 kubenswrapper[7337]: I0312 18:13:19.611369 7337 log.go:25] "Validated CRI v1 image API"
Mar 12 18:13:19.612397 master-0 kubenswrapper[7337]: I0312 18:13:19.612335 7337 
server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 12 18:13:19.618333 master-0 kubenswrapper[7337]: I0312 18:13:19.618257 7337 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 f6c40199-182a-4be5-87d7-87de18d890be:/dev/vda3] Mar 12 18:13:19.619151 master-0 kubenswrapper[7337]: I0312 18:13:19.618337 7337 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/16d696d609e4d9275dce2cfecd0a4d1078c8e60ea7f137e92635d2bfd874a46b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/16d696d609e4d9275dce2cfecd0a4d1078c8e60ea7f137e92635d2bfd874a46b/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1c062db0efa15fd6679d2718a5857a9b8db81f25fe5e3a47bd35e6f192db0dd6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1c062db0efa15fd6679d2718a5857a9b8db81f25fe5e3a47bd35e6f192db0dd6/userdata/shm major:0 minor:250 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1d311f9e8cf6ad3cdfb6335b00f9729ed813d3bacc476060ca21b806ee856231/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1d311f9e8cf6ad3cdfb6335b00f9729ed813d3bacc476060ca21b806ee856231/userdata/shm major:0 minor:267 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/38c299b0655225599ee9b03928de90a7927480cc6329508c816e9e361bbcfa16/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/38c299b0655225599ee9b03928de90a7927480cc6329508c816e9e361bbcfa16/userdata/shm major:0 minor:256 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/419bbcc10e95d196cd0f08dbf057bbc2aa7a617fdfb7f0d1b356baa7bbabca04/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/419bbcc10e95d196cd0f08dbf057bbc2aa7a617fdfb7f0d1b356baa7bbabca04/userdata/shm major:0 minor:273 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/427d9fb4da53770ccc67c12bc3aaae3e507cd75207365e40c1611eeaaf274b84/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/427d9fb4da53770ccc67c12bc3aaae3e507cd75207365e40c1611eeaaf274b84/userdata/shm major:0 minor:47 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/60ce66e2a62ed17df4cad9067d0bb6d4940a38dc4b5a5337ba95a9117aca3c70/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/60ce66e2a62ed17df4cad9067d0bb6d4940a38dc4b5a5337ba95a9117aca3c70/userdata/shm major:0 minor:265 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/64939e6ed4a35637d0a2c2bc028af5d4314b96efb512849766779eb1c4382a35/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/64939e6ed4a35637d0a2c2bc028af5d4314b96efb512849766779eb1c4382a35/userdata/shm major:0 minor:270 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/64955db5addf9b64f24ce95166a3106b2564db66c223ef67752e78909dc304ef/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/64955db5addf9b64f24ce95166a3106b2564db66c223ef67752e78909dc304ef/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6d3dbd3e29a6d7e3e111f4c45f534b1d831fee55b19f442dc477ede7e14f8ccb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6d3dbd3e29a6d7e3e111f4c45f534b1d831fee55b19f442dc477ede7e14f8ccb/userdata/shm major:0 minor:155 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/6dd6381115d9cbf9ba7c1a108737553c31041609750cc0e631e36ed92f66311d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6dd6381115d9cbf9ba7c1a108737553c31041609750cc0e631e36ed92f66311d/userdata/shm major:0 minor:100 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/74bbf11cd33cced50ba626f06b188adf24ce7f72b1161eb2c06db1ce6ae46dd5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/74bbf11cd33cced50ba626f06b188adf24ce7f72b1161eb2c06db1ce6ae46dd5/userdata/shm major:0 minor:139 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/91bbe32b85272f0f9f735ba2a67b1085ea37b3016231eb6d6938a08eed1a3b9d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/91bbe32b85272f0f9f735ba2a67b1085ea37b3016231eb6d6938a08eed1a3b9d/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/91d19dd0041e348f5ab95fd10ff19be4195ac501d593d4346b94e73b4b7bfba3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/91d19dd0041e348f5ab95fd10ff19be4195ac501d593d4346b94e73b4b7bfba3/userdata/shm major:0 minor:252 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ab67b82c7d40212f9275c2d813cf0ddfb1de4b0eb68ab01e5d797fad81a0d351/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ab67b82c7d40212f9275c2d813cf0ddfb1de4b0eb68ab01e5d797fad81a0d351/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b2474f5d479286c70d652654d0e6946155d42c6b7dd11abc4f60fd4bf3123854/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b2474f5d479286c70d652654d0e6946155d42c6b7dd11abc4f60fd4bf3123854/userdata/shm major:0 minor:248 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/bafb4a547df5e8f39b94a88d98e85d34e5a0230468f2013bc4da6ee9fbc59ee3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bafb4a547df5e8f39b94a88d98e85d34e5a0230468f2013bc4da6ee9fbc59ee3/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c14a32fc9f0111ac97c8d0756c820cfe5f40ed691d6de42ac60400f58318b138/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c14a32fc9f0111ac97c8d0756c820cfe5f40ed691d6de42ac60400f58318b138/userdata/shm major:0 minor:114 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c5d8b743e37e43da0e4ff17c103781e7e406b75fffac74b32a3f5490d58a4481/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c5d8b743e37e43da0e4ff17c103781e7e406b75fffac74b32a3f5490d58a4481/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e3cce5ce786ddb4af71a8112135cad1426f074e7b12c67ae740271017aa946b3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e3cce5ce786ddb4af71a8112135cad1426f074e7b12c67ae740271017aa946b3/userdata/shm major:0 minor:254 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e54bd9f4ed4a4d50ffb03de0e433f3897b33df6a46c77fdb71d0900fa9a91e17/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e54bd9f4ed4a4d50ffb03de0e433f3897b33df6a46c77fdb71d0900fa9a91e17/userdata/shm major:0 minor:261 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e54cdbac1902d131e189d00f45d878f018504994bb1378cf2e83bf1f9b2a651b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e54cdbac1902d131e189d00f45d878f018504994bb1378cf2e83bf1f9b2a651b/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/e7b7f3534352e488adef6510c4ad914236d57c0ac52f6e0d4e107e52563cb840/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e7b7f3534352e488adef6510c4ad914236d57c0ac52f6e0d4e107e52563cb840/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ecf7670cd0c657ac23db39730253e7de6d6d1e9634e025fe447cc9e07fe1d91a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ecf7670cd0c657ac23db39730253e7de6d6d1e9634e025fe447cc9e07fe1d91a/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/00755a4e-124c-4a51-b1c5-7c505b3637a8/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/00755a4e-124c-4a51-b1c5-7c505b3637a8/volumes/kubernetes.io~projected/kube-api-access major:0 minor:99 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/055f5c67-f512-4510-99c5-e194944b0599/volumes/kubernetes.io~projected/kube-api-access-tkt7d:{mountpoint:/var/lib/kubelet/pods/055f5c67-f512-4510-99c5-e194944b0599/volumes/kubernetes.io~projected/kube-api-access-tkt7d major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/055f5c67-f512-4510-99c5-e194944b0599/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/055f5c67-f512-4510-99c5-e194944b0599/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/062f1b21-2ffc-47da-8334-427c3b2a1a90/volumes/kubernetes.io~projected/kube-api-access-jn9nf:{mountpoint:/var/lib/kubelet/pods/062f1b21-2ffc-47da-8334-427c3b2a1a90/volumes/kubernetes.io~projected/kube-api-access-jn9nf major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/062f1b21-2ffc-47da-8334-427c3b2a1a90/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/062f1b21-2ffc-47da-8334-427c3b2a1a90/volumes/kubernetes.io~secret/serving-cert major:0 minor:223 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa/volumes/kubernetes.io~projected/kube-api-access-s55hv:{mountpoint:/var/lib/kubelet/pods/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa/volumes/kubernetes.io~projected/kube-api-access-s55hv major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/236f2886-bb69-49a7-9471-36454fd1cbd3/volumes/kubernetes.io~projected/kube-api-access-b6ggg:{mountpoint:/var/lib/kubelet/pods/236f2886-bb69-49a7-9471-36454fd1cbd3/volumes/kubernetes.io~projected/kube-api-access-b6ggg major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/236f2886-bb69-49a7-9471-36454fd1cbd3/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/236f2886-bb69-49a7-9471-36454fd1cbd3/volumes/kubernetes.io~secret/serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed/volumes/kubernetes.io~projected/kube-api-access-wlf77:{mountpoint:/var/lib/kubelet/pods/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed/volumes/kubernetes.io~projected/kube-api-access-wlf77 major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/33feec78-4592-4343-965b-aa1b7044fcf3/volumes/kubernetes.io~projected/kube-api-access-ptrtx:{mountpoint:/var/lib/kubelet/pods/33feec78-4592-4343-965b-aa1b7044fcf3/volumes/kubernetes.io~projected/kube-api-access-ptrtx major:0 minor:303 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/37cd9c0a-697e-4e67-932b-b331ff77c8c0/volumes/kubernetes.io~projected/kube-api-access-pfpb9:{mountpoint:/var/lib/kubelet/pods/37cd9c0a-697e-4e67-932b-b331ff77c8c0/volumes/kubernetes.io~projected/kube-api-access-pfpb9 major:0 minor:233 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/37cd9c0a-697e-4e67-932b-b331ff77c8c0/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/37cd9c0a-697e-4e67-932b-b331ff77c8c0/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/38a4bf73-479e-4bbf-9aa3-639fc288c8bc/volumes/kubernetes.io~projected/kube-api-access-2pn9h:{mountpoint:/var/lib/kubelet/pods/38a4bf73-479e-4bbf-9aa3-639fc288c8bc/volumes/kubernetes.io~projected/kube-api-access-2pn9h major:0 minor:105 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/455f0aad-add2-49d0-995c-f92467bce2d6/volumes/kubernetes.io~projected/kube-api-access-pxsgv:{mountpoint:/var/lib/kubelet/pods/455f0aad-add2-49d0-995c-f92467bce2d6/volumes/kubernetes.io~projected/kube-api-access-pxsgv major:0 minor:118 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/45aa4887-c913-4ece-ae34-fcde33832621/volumes/kubernetes.io~projected/kube-api-access-4vr66:{mountpoint:/var/lib/kubelet/pods/45aa4887-c913-4ece-ae34-fcde33832621/volumes/kubernetes.io~projected/kube-api-access-4vr66 major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/47850839-bb4b-41e9-ac31-f1cabbb4926d/volumes/kubernetes.io~projected/kube-api-access-krrkl:{mountpoint:/var/lib/kubelet/pods/47850839-bb4b-41e9-ac31-f1cabbb4926d/volumes/kubernetes.io~projected/kube-api-access-krrkl major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64/volumes/kubernetes.io~projected/kube-api-access-fdlxn:{mountpoint:/var/lib/kubelet/pods/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64/volumes/kubernetes.io~projected/kube-api-access-fdlxn major:0 minor:238 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/51eb717b-d11f-4bc3-8df6-deb51d5889f3/volumes/kubernetes.io~projected/kube-api-access-gbnx8:{mountpoint:/var/lib/kubelet/pods/51eb717b-d11f-4bc3-8df6-deb51d5889f3/volumes/kubernetes.io~projected/kube-api-access-gbnx8 major:0 minor:225 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f/volumes/kubernetes.io~projected/kube-api-access-9jgbv:{mountpoint:/var/lib/kubelet/pods/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f/volumes/kubernetes.io~projected/kube-api-access-9jgbv major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74eb1407-de29-42e5-9e6c-ce1bec3a9d80/volumes/kubernetes.io~projected/kube-api-access-b6ggc:{mountpoint:/var/lib/kubelet/pods/74eb1407-de29-42e5-9e6c-ce1bec3a9d80/volumes/kubernetes.io~projected/kube-api-access-b6ggc major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74eb1407-de29-42e5-9e6c-ce1bec3a9d80/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/74eb1407-de29-42e5-9e6c-ce1bec3a9d80/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/875bdfaa-b0a4-4412-a477-c962844e7057/volumes/kubernetes.io~projected/kube-api-access-l2skd:{mountpoint:/var/lib/kubelet/pods/875bdfaa-b0a4-4412-a477-c962844e7057/volumes/kubernetes.io~projected/kube-api-access-l2skd major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ad05507-e242-4ff8-ae80-c16ff9ee68e2/volumes/kubernetes.io~projected/kube-api-access-th8tc:{mountpoint:/var/lib/kubelet/pods/8ad05507-e242-4ff8-ae80-c16ff9ee68e2/volumes/kubernetes.io~projected/kube-api-access-th8tc major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe/volumes/kubernetes.io~projected/kube-api-access-tdlcw:{mountpoint:/var/lib/kubelet/pods/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe/volumes/kubernetes.io~projected/kube-api-access-tdlcw major:0 minor:142 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe/volumes/kubernetes.io~secret/webhook-cert major:0 minor:138 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a1e2340b-ebca-40de-b1e0-8133999cd860/volumes/kubernetes.io~projected/kube-api-access-6vpbp:{mountpoint:/var/lib/kubelet/pods/a1e2340b-ebca-40de-b1e0-8133999cd860/volumes/kubernetes.io~projected/kube-api-access-6vpbp major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1e2340b-ebca-40de-b1e0-8133999cd860/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a1e2340b-ebca-40de-b1e0-8133999cd860/volumes/kubernetes.io~secret/serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab926874-9722-4e65-9084-27b2f9915450/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/ab926874-9722-4e65-9084-27b2f9915450/volumes/kubernetes.io~projected/kube-api-access major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab926874-9722-4e65-9084-27b2f9915450/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ab926874-9722-4e65-9084-27b2f9915450/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6d288e3-8e73-44d2-874d-64c6c98dd991/volumes/kubernetes.io~projected/kube-api-access-vdb9w:{mountpoint:/var/lib/kubelet/pods/b6d288e3-8e73-44d2-874d-64c6c98dd991/volumes/kubernetes.io~projected/kube-api-access-vdb9w major:0 minor:94 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6d288e3-8e73-44d2-874d-64c6c98dd991/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/b6d288e3-8e73-44d2-874d-64c6c98dd991/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b8dd13a7-10e5-431b-8d30-405dcfea02f5/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/b8dd13a7-10e5-431b-8d30-405dcfea02f5/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b8dd13a7-10e5-431b-8d30-405dcfea02f5/volumes/kubernetes.io~projected/kube-api-access-7rhmv:{mountpoint:/var/lib/kubelet/pods/b8dd13a7-10e5-431b-8d30-405dcfea02f5/volumes/kubernetes.io~projected/kube-api-access-7rhmv major:0 minor:154 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b8dd13a7-10e5-431b-8d30-405dcfea02f5/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/b8dd13a7-10e5-431b-8d30-405dcfea02f5/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:128 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d4ae1240-e04e-48e9-88df-9f1a53508da7/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/d4ae1240-e04e-48e9-88df-9f1a53508da7/volumes/kubernetes.io~projected/kube-api-access major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d4ae1240-e04e-48e9-88df-9f1a53508da7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d4ae1240-e04e-48e9-88df-9f1a53508da7/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27/volumes/kubernetes.io~projected/kube-api-access-ggsdx:{mountpoint:/var/lib/kubelet/pods/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27/volumes/kubernetes.io~projected/kube-api-access-ggsdx major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d92dddc8-a810-43f5-8beb-32d1c8ad8381/volumes/kubernetes.io~projected/kube-api-access-l22gw:{mountpoint:/var/lib/kubelet/pods/d92dddc8-a810-43f5-8beb-32d1c8ad8381/volumes/kubernetes.io~projected/kube-api-access-l22gw major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d94dc349-c5cb-4f12-8e48-867030af4981/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/d94dc349-c5cb-4f12-8e48-867030af4981/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:226 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d94dc349-c5cb-4f12-8e48-867030af4981/volumes/kubernetes.io~projected/kube-api-access-zjmcv:{mountpoint:/var/lib/kubelet/pods/d94dc349-c5cb-4f12-8e48-867030af4981/volumes/kubernetes.io~projected/kube-api-access-zjmcv major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e22c7035-4b7a-48cb-9abb-db277b387842/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/e22c7035-4b7a-48cb-9abb-db277b387842/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e22c7035-4b7a-48cb-9abb-db277b387842/volumes/kubernetes.io~projected/kube-api-access-pmxc2:{mountpoint:/var/lib/kubelet/pods/e22c7035-4b7a-48cb-9abb-db277b387842/volumes/kubernetes.io~projected/kube-api-access-pmxc2 major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e697746f-fb9e-4d10-ab61-33c68e62cc0d/volumes/kubernetes.io~projected/kube-api-access-vct98:{mountpoint:/var/lib/kubelet/pods/e697746f-fb9e-4d10-ab61-33c68e62cc0d/volumes/kubernetes.io~projected/kube-api-access-vct98 major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e697746f-fb9e-4d10-ab61-33c68e62cc0d/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/e697746f-fb9e-4d10-ab61-33c68e62cc0d/volumes/kubernetes.io~secret/etcd-client major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e697746f-fb9e-4d10-ab61-33c68e62cc0d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e697746f-fb9e-4d10-ab61-33c68e62cc0d/volumes/kubernetes.io~secret/serving-cert major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e720e1d0-5a6d-4b76-8b25-5963e24950f5/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/e720e1d0-5a6d-4b76-8b25-5963e24950f5/volumes/kubernetes.io~projected/kube-api-access major:0 minor:237 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e720e1d0-5a6d-4b76-8b25-5963e24950f5/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e720e1d0-5a6d-4b76-8b25-5963e24950f5/volumes/kubernetes.io~secret/serving-cert major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e94d098b-fbcc-4e85-b8ad-42f3a21c822c/volumes/kubernetes.io~projected/kube-api-access-bttzm:{mountpoint:/var/lib/kubelet/pods/e94d098b-fbcc-4e85-b8ad-42f3a21c822c/volumes/kubernetes.io~projected/kube-api-access-bttzm major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f3a2cda2-b70f-4128-a1be-48503f5aad6d/volumes/kubernetes.io~projected/kube-api-access-tcvfv:{mountpoint:/var/lib/kubelet/pods/f3a2cda2-b70f-4128-a1be-48503f5aad6d/volumes/kubernetes.io~projected/kube-api-access-tcvfv major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f3a2cda2-b70f-4128-a1be-48503f5aad6d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/f3a2cda2-b70f-4128-a1be-48503f5aad6d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/a8504c483985228e2d897d5ece91eddd9fe04544f33ed9f4b9ac4b0460f0572a/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/6b0935ec0fc04041b960e0bc7609d11eab20682ecbdf8ca09e5d6903c92a6411/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/82ba9c4fd49a59d0cdd8eba05d118f1d1403ef2b808d2d424c75960a19da6d1f/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/dfc1e70ca13e8da6d7c2b972d46299d953bc4ff10b781bd2b729a96cd65af0ed/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-126:{mountpoint:/var/lib/containers/storage/overlay/ddf1ba66b634ea9fc324503b8f9ab91efddfa13e156b03f5c42e50a647deca92/merged major:0 minor:126 
fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/b9b8b4f08169d5b3e74369964977fbcea297f27f297d1bdf787d9fc3146ced77/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/39bae134f9616b4a4f81b1af4a5119919edc7e715d523ae1f61fc329467783f9/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/d1d38b2b45dd2db2ac3e33cd0eef40fb393159cf5dc601701fe8c0635c01a573/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/0bb0cd2e98961ae08c6588cdb473e9d54bd91b8459125a5c0f6c588ea7219762/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/22d3ac8e990991d11f31184e3294feab8367c6ed93e97de2b64ee7eb2d8aba64/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/097b11f1a18300c10666973051c3f9e5ee4bccd132ccdff7dd8d9bf0d6cab5ae/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-157:{mountpoint:/var/lib/containers/storage/overlay/9bd132699f02291c19806591c148e6e9f1f0e3bd2810e5856b1d9312daaa9ca6/merged major:0 minor:157 fsType:overlay blockSize:0} overlay_0-161:{mountpoint:/var/lib/containers/storage/overlay/e9e321af552b6e609f75f6138a0f054a4f4070f673fecf67ec1af4a7791c8a97/merged major:0 minor:161 fsType:overlay blockSize:0} overlay_0-163:{mountpoint:/var/lib/containers/storage/overlay/1522914cd1e2dd357a5f5ce3092cd08ead429ab4652bd609ddd2f79909308bd7/merged major:0 minor:163 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/0900f0696f41ffe2827a2fa162692f814c6b1b229f0817b2765b2e2a2f4612e1/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/c253a661aaf79ea0fdd6321517c52ae3bd2133a1c8982d681597541ef4aa6d11/merged major:0 minor:179 fsType:overlay 
blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/090160045baa9c9ab628db4398cc9a9d9b79b81201df27d2d1b4d755428b1bcd/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/25f6090a7ec28bbb6d6587e8fed46798c9b9f27cf65595ce2170b3290e27f323/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/726805a19ef7e927132b5a62815aac2f7bc824232ed0ec6dcb120ca010c52ad3/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-199:{mountpoint:/var/lib/containers/storage/overlay/484e02bc1b42d53fe3dcd90c7f0bf8f16ca57e67db2395357ad7b36bfc144a57/merged major:0 minor:199 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/2ce43df53d040b0aa42fa1fd2348ee54c05aed1a36b755db3145a6a57009b1f3/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-275:{mountpoint:/var/lib/containers/storage/overlay/87c976a446109b642a55333ab6a4a46c2552db7c164023ca94467c07edb4cbd2/merged major:0 minor:275 fsType:overlay blockSize:0} overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/0e9cb6e5334b6a2fd45c68e74d83ddc6a91fd792768955088a2fc65e191228c4/merged major:0 minor:277 fsType:overlay blockSize:0} overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/98d3f89d4db84dbfd4968faf1ef60db906bc92832526a46fb426ca559eef0926/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-281:{mountpoint:/var/lib/containers/storage/overlay/a1055561ff2c9b975b1b4fd1470252ea957ef5e3b10629f33cc43ce1c5bddd27/merged major:0 minor:281 fsType:overlay blockSize:0} overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/ac3c93217cba6c7f28ba74ed339343b92a756e831ea07e619c1b00e4666385b9/merged major:0 minor:283 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/1687b04533d7dc73dd19ac3dde58b4a79320c7ed4e6f89cd6c99f2279eefe795/merged major:0 minor:285 fsType:overlay blockSize:0} 
overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/73f33ff0b86328d126ba6882afab65e4840169f07d24a9e88a6862e58049a2db/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/ad71dd1a19f03919642815b1d29ae232d1c1ce8d0cca0c9a5e1a8ae7722119a7/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-291:{mountpoint:/var/lib/containers/storage/overlay/f0d884186b58a64b8103be46174399cafa61a06673ca35fd83c2b8cd39dd141e/merged major:0 minor:291 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/608d8bf93339fa7cdab3c40878d8a317c58d41650130cf63f956eeda6435d6e8/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/9141e82f378b50bcf266ddb61b58593898d3282ae9e4a34bb49d6c32424b253d/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/4247a682b6f1fcffc1ef63b27ed59f9dad47559afd4aafb068bc59a3aeefdd9f/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/c5056014724f1870a8faf1d1aebe97f63adac83f4d6011411616718c822a3bd5/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/d847d208cd4d8e5460a84271728b8d03769a77bc0f3b4cbd9b12991e87cd4aed/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/6e8510fe9530d4ec0303881614e0df8bcc781199f607c3846a4117c7286b5195/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-49:{mountpoint:/var/lib/containers/storage/overlay/602ef2bfe644ec882c64b6c8ac4b623a0ba493f9c0178e05419ef8854d81ec19/merged major:0 minor:49 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/8c773cec56df11684d32c945fff551ec6e9c46d41d18a9d253bd1059e3ff9e95/merged major:0 minor:52 fsType:overlay blockSize:0} 
overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/41f5b0865ac7e30a85faf188414b4bf195c8e02f06f7a6ebe9bfe31ee5f074bb/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/3afb3fc4e4e93a3bc92fe51ca92e593c5b65f0427c568ea5e87995e439b25acb/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/103af20093bc1d23eaaff4c14d5c92eb9abdfbc9965730fc3dc641cd05af778b/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/7c43ba194085d31187adcc304fc9f319767dcd1b59d39e491bae0860ecabdb26/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/7f7104d7f69248c33198daa4b57f13bf8a65eaaba4358cc6377007fa28bdd6da/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-79:{mountpoint:/var/lib/containers/storage/overlay/7b2fceab1e7e90679b2cc9d1c4b3fbd32d64280d5daee5cb8fd6962dd9b3344a/merged major:0 minor:79 fsType:overlay blockSize:0} overlay_0-81:{mountpoint:/var/lib/containers/storage/overlay/330c95c7f995f1246d85b6bf01654df3e7075067eb0d369ec0396b876f339337/merged major:0 minor:81 fsType:overlay blockSize:0} overlay_0-84:{mountpoint:/var/lib/containers/storage/overlay/e6bc0645f5dfe66beffcc4bdd82c9a0da5dff9e06b593f439230d6d5d85c669f/merged major:0 minor:84 fsType:overlay blockSize:0} overlay_0-86:{mountpoint:/var/lib/containers/storage/overlay/4dfd6d18ce4ee05b49c2cd7d14751e016beec8ffdf588ebc50f0fac298520587/merged major:0 minor:86 fsType:overlay blockSize:0} overlay_0-98:{mountpoint:/var/lib/containers/storage/overlay/d507999c2255e488350ca5cc9bcdaee817df76fa314b2593d404fef7317ebc73/merged major:0 minor:98 fsType:overlay blockSize:0}] Mar 12 18:13:19.652572 master-0 kubenswrapper[7337]: I0312 18:13:19.651702 7337 manager.go:217] Machine: {Timestamp:2026-03-12 18:13:19.649709776 +0000 UTC m=+0.118310753 CPUVendorID:AuthenticAMD NumCores:12 
NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654116352 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:14bcec6218994562885f2bb31137a053 SystemUUID:14bcec62-1899-4562-885f-2bb31137a053 BootID:8ea9dfaa-21ba-4398-883d-eae43b35536d Filesystems:[{Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e720e1d0-5a6d-4b76-8b25-5963e24950f5/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:220 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e7b7f3534352e488adef6510c4ad914236d57c0ac52f6e0d4e107e52563cb840/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/37cd9c0a-697e-4e67-932b-b331ff77c8c0/volumes/kubernetes.io~projected/kube-api-access-pfpb9 DeviceMajor:0 DeviceMinor:233 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/a1e2340b-ebca-40de-b1e0-8133999cd860/volumes/kubernetes.io~projected/kube-api-access-6vpbp DeviceMajor:0 DeviceMinor:245 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ecf7670cd0c657ac23db39730253e7de6d6d1e9634e025fe447cc9e07fe1d91a/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e54bd9f4ed4a4d50ffb03de0e433f3897b33df6a46c77fdb71d0900fa9a91e17/userdata/shm DeviceMajor:0 DeviceMinor:261 Capacity:67108864 Type:vfs 
Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1d311f9e8cf6ad3cdfb6335b00f9729ed813d3bacc476060ca21b806ee856231/userdata/shm DeviceMajor:0 DeviceMinor:267 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e94d098b-fbcc-4e85-b8ad-42f3a21c822c/volumes/kubernetes.io~projected/kube-api-access-bttzm DeviceMajor:0 DeviceMinor:247 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-49 DeviceMajor:0 DeviceMinor:49 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-98 DeviceMajor:0 DeviceMinor:98 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a1e2340b-ebca-40de-b1e0-8133999cd860/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/ab926874-9722-4e65-9084-27b2f9915450/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:242 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/47850839-bb4b-41e9-ac31-f1cabbb4926d/volumes/kubernetes.io~projected/kube-api-access-krrkl DeviceMajor:0 DeviceMinor:246 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e3cce5ce786ddb4af71a8112135cad1426f074e7b12c67ae740271017aa946b3/userdata/shm DeviceMajor:0 DeviceMinor:254 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-199 DeviceMajor:0 DeviceMinor:199 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/74eb1407-de29-42e5-9e6c-ce1bec3a9d80/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/f3a2cda2-b70f-4128-a1be-48503f5aad6d/volumes/kubernetes.io~projected/kube-api-access-tcvfv DeviceMajor:0 DeviceMinor:241 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/062f1b21-2ffc-47da-8334-427c3b2a1a90/volumes/kubernetes.io~projected/kube-api-access-jn9nf DeviceMajor:0 DeviceMinor:236 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-126 DeviceMajor:0 DeviceMinor:126 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e22c7035-4b7a-48cb-9abb-db277b387842/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:231 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827056128 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe/volumes/kubernetes.io~projected/kube-api-access-tdlcw DeviceMajor:0 DeviceMinor:142 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-161 
DeviceMajor:0 DeviceMinor:161 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b8dd13a7-10e5-431b-8d30-405dcfea02f5/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/419bbcc10e95d196cd0f08dbf057bbc2aa7a617fdfb7f0d1b356baa7bbabca04/userdata/shm DeviceMajor:0 DeviceMinor:273 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c5d8b743e37e43da0e4ff17c103781e7e406b75fffac74b32a3f5490d58a4481/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/455f0aad-add2-49d0-995c-f92467bce2d6/volumes/kubernetes.io~projected/kube-api-access-pxsgv DeviceMajor:0 DeviceMinor:118 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/74bbf11cd33cced50ba626f06b188adf24ce7f72b1161eb2c06db1ce6ae46dd5/userdata/shm DeviceMajor:0 DeviceMinor:139 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/062f1b21-2ffc-47da-8334-427c3b2a1a90/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:223 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-84 DeviceMajor:0 DeviceMinor:84 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/e697746f-fb9e-4d10-ab61-33c68e62cc0d/volumes/kubernetes.io~projected/kube-api-access-vct98 DeviceMajor:0 DeviceMinor:240 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/00755a4e-124c-4a51-b1c5-7c505b3637a8/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:99 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/37cd9c0a-697e-4e67-932b-b331ff77c8c0/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/91d19dd0041e348f5ab95fd10ff19be4195ac501d593d4346b94e73b4b7bfba3/userdata/shm DeviceMajor:0 DeviceMinor:252 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/38a4bf73-479e-4bbf-9aa3-639fc288c8bc/volumes/kubernetes.io~projected/kube-api-access-2pn9h DeviceMajor:0 DeviceMinor:105 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f/volumes/kubernetes.io~projected/kube-api-access-9jgbv DeviceMajor:0 DeviceMinor:123 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/b8dd13a7-10e5-431b-8d30-405dcfea02f5/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:128 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/236f2886-bb69-49a7-9471-36454fd1cbd3/volumes/kubernetes.io~projected/kube-api-access-b6ggg DeviceMajor:0 DeviceMinor:224 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/d94dc349-c5cb-4f12-8e48-867030af4981/volumes/kubernetes.io~projected/kube-api-access-zjmcv DeviceMajor:0 DeviceMinor:228 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/bafb4a547df5e8f39b94a88d98e85d34e5a0230468f2013bc4da6ee9fbc59ee3/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/d92dddc8-a810-43f5-8beb-32d1c8ad8381/volumes/kubernetes.io~projected/kube-api-access-l22gw DeviceMajor:0 DeviceMinor:259 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-291 DeviceMajor:0 DeviceMinor:291 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e697746f-fb9e-4d10-ab61-33c68e62cc0d/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:222 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/74eb1407-de29-42e5-9e6c-ce1bec3a9d80/volumes/kubernetes.io~projected/kube-api-access-b6ggc DeviceMajor:0 DeviceMinor:125 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:138 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} 
{Device:/var/lib/kubelet/pods/d4ae1240-e04e-48e9-88df-9f1a53508da7/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:239 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1c062db0efa15fd6679d2718a5857a9b8db81f25fe5e3a47bd35e6f192db0dd6/userdata/shm DeviceMajor:0 DeviceMinor:250 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e54cdbac1902d131e189d00f45d878f018504994bb1378cf2e83bf1f9b2a651b/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa/volumes/kubernetes.io~projected/kube-api-access-s55hv DeviceMajor:0 DeviceMinor:230 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64/volumes/kubernetes.io~projected/kube-api-access-fdlxn DeviceMajor:0 DeviceMinor:238 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/055f5c67-f512-4510-99c5-e194944b0599/volumes/kubernetes.io~projected/kube-api-access-tkt7d DeviceMajor:0 DeviceMinor:243 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/e22c7035-4b7a-48cb-9abb-db277b387842/volumes/kubernetes.io~projected/kube-api-access-pmxc2 DeviceMajor:0 DeviceMinor:244 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/38c299b0655225599ee9b03928de90a7927480cc6329508c816e9e361bbcfa16/userdata/shm DeviceMajor:0 DeviceMinor:256 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/containers/storage/overlay-containers/64955db5addf9b64f24ce95166a3106b2564db66c223ef67752e78909dc304ef/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs 
Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed/volumes/kubernetes.io~projected/kube-api-access-wlf77 DeviceMajor:0 DeviceMinor:232 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27/volumes/kubernetes.io~projected/kube-api-access-ggsdx DeviceMajor:0 DeviceMinor:235 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/33feec78-4592-4343-965b-aa1b7044fcf3/volumes/kubernetes.io~projected/kube-api-access-ptrtx DeviceMajor:0 DeviceMinor:303 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6dd6381115d9cbf9ba7c1a108737553c31041609750cc0e631e36ed92f66311d/userdata/shm DeviceMajor:0 DeviceMinor:100 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d4ae1240-e04e-48e9-88df-9f1a53508da7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/f3a2cda2-b70f-4128-a1be-48503f5aad6d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-81 DeviceMajor:0 DeviceMinor:81 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/236f2886-bb69-49a7-9471-36454fd1cbd3/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:215 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/91bbe32b85272f0f9f735ba2a67b1085ea37b3016231eb6d6938a08eed1a3b9d/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/b6d288e3-8e73-44d2-874d-64c6c98dd991/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-157 DeviceMajor:0 DeviceMinor:157 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-163 DeviceMajor:0 DeviceMinor:163 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/51eb717b-d11f-4bc3-8df6-deb51d5889f3/volumes/kubernetes.io~projected/kube-api-access-gbnx8 DeviceMajor:0 DeviceMinor:225 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ab67b82c7d40212f9275c2d813cf0ddfb1de4b0eb68ab01e5d797fad81a0d351/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-275 DeviceMajor:0 DeviceMinor:275 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/427d9fb4da53770ccc67c12bc3aaae3e507cd75207365e40c1611eeaaf274b84/userdata/shm DeviceMajor:0 DeviceMinor:47 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/b6d288e3-8e73-44d2-874d-64c6c98dd991/volumes/kubernetes.io~projected/kube-api-access-vdb9w DeviceMajor:0 DeviceMinor:94 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c14a32fc9f0111ac97c8d0756c820cfe5f40ed691d6de42ac60400f58318b138/userdata/shm DeviceMajor:0 DeviceMinor:114 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/16d696d609e4d9275dce2cfecd0a4d1078c8e60ea7f137e92635d2bfd874a46b/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/055f5c67-f512-4510-99c5-e194944b0599/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-79 DeviceMajor:0 DeviceMinor:79 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e697746f-fb9e-4d10-ab61-33c68e62cc0d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:221 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/875bdfaa-b0a4-4412-a477-c962844e7057/volumes/kubernetes.io~projected/kube-api-access-l2skd DeviceMajor:0 DeviceMinor:229 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/64939e6ed4a35637d0a2c2bc028af5d4314b96efb512849766779eb1c4382a35/userdata/shm DeviceMajor:0 DeviceMinor:270 Capacity:67108864 
Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6d3dbd3e29a6d7e3e111f4c45f534b1d831fee55b19f442dc477ede7e14f8ccb/userdata/shm DeviceMajor:0 DeviceMinor:155 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/ab926874-9722-4e65-9084-27b2f9915450/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/d94dc349-c5cb-4f12-8e48-867030af4981/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:226 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/8ad05507-e242-4ff8-ae80-c16ff9ee68e2/volumes/kubernetes.io~projected/kube-api-access-th8tc DeviceMajor:0 DeviceMinor:234 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b2474f5d479286c70d652654d0e6946155d42c6b7dd11abc4f60fd4bf3123854/userdata/shm DeviceMajor:0 DeviceMinor:248 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/b8dd13a7-10e5-431b-8d30-405dcfea02f5/volumes/kubernetes.io~projected/kube-api-access-7rhmv DeviceMajor:0 DeviceMinor:154 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/45aa4887-c913-4ece-ae34-fcde33832621/volumes/kubernetes.io~projected/kube-api-access-4vr66 DeviceMajor:0 DeviceMinor:227 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/e720e1d0-5a6d-4b76-8b25-5963e24950f5/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:237 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-281 DeviceMajor:0 DeviceMinor:281 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-86 DeviceMajor:0 
DeviceMinor:86 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/60ce66e2a62ed17df4cad9067d0bb6d4940a38dc4b5a5337ba95a9117aca3c70/userdata/shm DeviceMajor:0 DeviceMinor:265 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:1c062db0efa15fd MacAddress:e2:61:e0:b0:ad:06 Speed:10000 Mtu:8900} {Name:1d311f9e8cf6ad3 MacAddress:e2:21:fb:1d:46:fd Speed:10000 Mtu:8900} {Name:38c299b06552255 MacAddress:36:f7:78:59:28:d4 Speed:10000 Mtu:8900} {Name:60ce66e2a62ed17 MacAddress:f6:c8:94:ad:11:f9 Speed:10000 Mtu:8900} {Name:64939e6ed4a3563 MacAddress:2e:ef:be:ec:43:7b Speed:10000 Mtu:8900} {Name:91d19dd0041e348 MacAddress:ba:26:c2:c0:47:23 Speed:10000 Mtu:8900} {Name:ab67b82c7d40212 MacAddress:9e:8a:9a:15:68:3e Speed:10000 Mtu:8900} {Name:b2474f5d479286c MacAddress:6e:fb:ed:80:6c:ec Speed:10000 Mtu:8900} {Name:bafb4a547df5e8f MacAddress:da:8f:77:d0:de:ff Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:4a:86:78:1f:90:b3 Speed:0 Mtu:8900} {Name:e3cce5ce786ddb4 MacAddress:32:69:3f:87:54:50 Speed:10000 Mtu:8900} {Name:e54bd9f4ed4a4d5 MacAddress:9e:7a:3c:3e:9c:b3 Speed:10000 Mtu:8900} {Name:ecf7670cd0c657a MacAddress:12:0a:1a:01:46:d0 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:b1:d2:12 
Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:2e:3d:5d Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:6a:b6:1d:75:96:b7 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654116352 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 
Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 12 18:13:19.652572 master-0 kubenswrapper[7337]: I0312 18:13:19.652399 7337 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 12 18:13:19.652572 master-0 kubenswrapper[7337]: I0312 18:13:19.652578 7337 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 12 18:13:19.653037 master-0 kubenswrapper[7337]: I0312 18:13:19.652950 7337 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 12 18:13:19.653381 master-0 kubenswrapper[7337]: I0312 18:13:19.653125 7337 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 18:13:19.653381 master-0 kubenswrapper[7337]: I0312 18:13:19.653158 7337 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentag
e":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 18:13:19.653381 master-0 kubenswrapper[7337]: I0312 18:13:19.653384 7337 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 18:13:19.653539 master-0 kubenswrapper[7337]: I0312 18:13:19.653396 7337 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 18:13:19.653539 master-0 kubenswrapper[7337]: I0312 18:13:19.653406 7337 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 18:13:19.653539 master-0 kubenswrapper[7337]: I0312 18:13:19.653430 7337 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 18:13:19.654119 master-0 kubenswrapper[7337]: I0312 18:13:19.653648 7337 state_mem.go:36] "Initialized new in-memory state store" Mar 12 18:13:19.654119 master-0 kubenswrapper[7337]: I0312 18:13:19.653745 7337 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 12 18:13:19.654119 master-0 kubenswrapper[7337]: I0312 18:13:19.653817 7337 kubelet.go:418] "Attempting to sync node with API server" Mar 12 18:13:19.654119 master-0 kubenswrapper[7337]: I0312 18:13:19.653852 7337 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 18:13:19.654119 master-0 kubenswrapper[7337]: I0312 18:13:19.653908 7337 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 12 18:13:19.654119 master-0 kubenswrapper[7337]: I0312 18:13:19.653923 7337 kubelet.go:324] "Adding apiserver pod source" Mar 12 18:13:19.655654 master-0 
kubenswrapper[7337]: I0312 18:13:19.654836 7337 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 18:13:19.656424 master-0 kubenswrapper[7337]: I0312 18:13:19.656394 7337 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 12 18:13:19.656657 master-0 kubenswrapper[7337]: I0312 18:13:19.656636 7337 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 12 18:13:19.656963 master-0 kubenswrapper[7337]: I0312 18:13:19.656933 7337 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 12 18:13:19.657149 master-0 kubenswrapper[7337]: I0312 18:13:19.657120 7337 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 12 18:13:19.657192 master-0 kubenswrapper[7337]: I0312 18:13:19.657151 7337 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 12 18:13:19.657192 master-0 kubenswrapper[7337]: I0312 18:13:19.657164 7337 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 12 18:13:19.657192 master-0 kubenswrapper[7337]: I0312 18:13:19.657176 7337 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 12 18:13:19.657192 master-0 kubenswrapper[7337]: I0312 18:13:19.657188 7337 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 12 18:13:19.657317 master-0 kubenswrapper[7337]: I0312 18:13:19.657197 7337 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 12 18:13:19.657317 master-0 kubenswrapper[7337]: I0312 18:13:19.657212 7337 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 12 18:13:19.657317 master-0 kubenswrapper[7337]: I0312 18:13:19.657225 7337 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 12 18:13:19.657317 master-0 
kubenswrapper[7337]: I0312 18:13:19.657245 7337 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 12 18:13:19.657317 master-0 kubenswrapper[7337]: I0312 18:13:19.657256 7337 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 12 18:13:19.657317 master-0 kubenswrapper[7337]: I0312 18:13:19.657271 7337 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 12 18:13:19.657317 master-0 kubenswrapper[7337]: I0312 18:13:19.657288 7337 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 12 18:13:19.657539 master-0 kubenswrapper[7337]: I0312 18:13:19.657335 7337 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 12 18:13:19.657865 master-0 kubenswrapper[7337]: I0312 18:13:19.657837 7337 server.go:1280] "Started kubelet"
Mar 12 18:13:19.659568 master-0 kubenswrapper[7337]: I0312 18:13:19.658045 7337 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 12 18:13:19.659568 master-0 kubenswrapper[7337]: I0312 18:13:19.658259 7337 server_v1.go:47] "podresources" method="list" useActivePods=true
Mar 12 18:13:19.659568 master-0 kubenswrapper[7337]: I0312 18:13:19.658064 7337 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 12 18:13:19.658664 master-0 systemd[1]: Started Kubernetes Kubelet. 
Mar 12 18:13:19.662809 master-0 kubenswrapper[7337]: I0312 18:13:19.662771 7337 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 18:13:19.665979 master-0 kubenswrapper[7337]: I0312 18:13:19.665540 7337 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 12 18:13:19.665979 master-0 kubenswrapper[7337]: I0312 18:13:19.665582 7337 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 18:13:19.667525 master-0 kubenswrapper[7337]: I0312 18:13:19.666010 7337 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-13 18:02:20 +0000 UTC, rotation deadline is 2026-03-13 15:22:34.270544317 +0000 UTC Mar 12 18:13:19.667525 master-0 kubenswrapper[7337]: I0312 18:13:19.666043 7337 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 21h9m14.604504385s for next certificate rotation Mar 12 18:13:19.667525 master-0 kubenswrapper[7337]: I0312 18:13:19.666841 7337 server.go:449] "Adding debug handlers to kubelet server" Mar 12 18:13:19.668544 master-0 kubenswrapper[7337]: I0312 18:13:19.667822 7337 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 12 18:13:19.668544 master-0 kubenswrapper[7337]: I0312 18:13:19.667840 7337 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 12 18:13:19.668544 master-0 kubenswrapper[7337]: E0312 18:13:19.667843 7337 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 18:13:19.668544 master-0 kubenswrapper[7337]: I0312 18:13:19.667885 7337 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 12 18:13:19.668544 master-0 kubenswrapper[7337]: I0312 18:13:19.668023 7337 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 12 18:13:19.668883 master-0 kubenswrapper[7337]: I0312 18:13:19.668798 7337 factory.go:55] 
Registering systemd factory Mar 12 18:13:19.668883 master-0 kubenswrapper[7337]: I0312 18:13:19.668822 7337 factory.go:221] Registration of the systemd container factory successfully Mar 12 18:13:19.669555 master-0 kubenswrapper[7337]: I0312 18:13:19.669129 7337 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 12 18:13:19.672560 master-0 kubenswrapper[7337]: I0312 18:13:19.670665 7337 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 12 18:13:19.672560 master-0 kubenswrapper[7337]: I0312 18:13:19.670862 7337 factory.go:153] Registering CRI-O factory Mar 12 18:13:19.672560 master-0 kubenswrapper[7337]: I0312 18:13:19.670877 7337 factory.go:221] Registration of the crio container factory successfully Mar 12 18:13:19.672560 master-0 kubenswrapper[7337]: I0312 18:13:19.671557 7337 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 12 18:13:19.672560 master-0 kubenswrapper[7337]: I0312 18:13:19.671604 7337 factory.go:103] Registering Raw factory Mar 12 18:13:19.672560 master-0 kubenswrapper[7337]: I0312 18:13:19.671629 7337 manager.go:1196] Started watching for new ooms in manager Mar 12 18:13:19.672861 master-0 kubenswrapper[7337]: I0312 18:13:19.672852 7337 manager.go:319] Starting recovery of all containers Mar 12 18:13:19.676471 master-0 kubenswrapper[7337]: I0312 18:13:19.676417 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed" volumeName="kubernetes.io/configmap/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-trusted-ca" seLinuxMountContext="" Mar 12 18:13:19.676471 master-0 kubenswrapper[7337]: I0312 18:13:19.676474 7337 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="74eb1407-de29-42e5-9e6c-ce1bec3a9d80" volumeName="kubernetes.io/configmap/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-env-overrides" seLinuxMountContext="" Mar 12 18:13:19.676710 master-0 kubenswrapper[7337]: I0312 18:13:19.676489 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e22c7035-4b7a-48cb-9abb-db277b387842" volumeName="kubernetes.io/projected/e22c7035-4b7a-48cb-9abb-db277b387842-kube-api-access-pmxc2" seLinuxMountContext="" Mar 12 18:13:19.676710 master-0 kubenswrapper[7337]: I0312 18:13:19.676504 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e697746f-fb9e-4d10-ab61-33c68e62cc0d" volumeName="kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-service-ca" seLinuxMountContext="" Mar 12 18:13:19.676710 master-0 kubenswrapper[7337]: I0312 18:13:19.676605 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e720e1d0-5a6d-4b76-8b25-5963e24950f5" volumeName="kubernetes.io/secret/e720e1d0-5a6d-4b76-8b25-5963e24950f5-serving-cert" seLinuxMountContext="" Mar 12 18:13:19.676710 master-0 kubenswrapper[7337]: I0312 18:13:19.676622 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="055f5c67-f512-4510-99c5-e194944b0599" volumeName="kubernetes.io/secret/055f5c67-f512-4510-99c5-e194944b0599-serving-cert" seLinuxMountContext="" Mar 12 18:13:19.676710 master-0 kubenswrapper[7337]: I0312 18:13:19.676640 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="455f0aad-add2-49d0-995c-f92467bce2d6" volumeName="kubernetes.io/projected/455f0aad-add2-49d0-995c-f92467bce2d6-kube-api-access-pxsgv" seLinuxMountContext="" Mar 12 18:13:19.676710 master-0 kubenswrapper[7337]: I0312 18:13:19.676656 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="47850839-bb4b-41e9-ac31-f1cabbb4926d" volumeName="kubernetes.io/projected/47850839-bb4b-41e9-ac31-f1cabbb4926d-kube-api-access-krrkl" seLinuxMountContext=""
Mar 12 18:13:19.676710 master-0 kubenswrapper[7337]: I0312 18:13:19.676676 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3a2cda2-b70f-4128-a1be-48503f5aad6d" volumeName="kubernetes.io/secret/f3a2cda2-b70f-4128-a1be-48503f5aad6d-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Mar 12 18:13:19.676710 master-0 kubenswrapper[7337]: I0312 18:13:19.676689 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="38a4bf73-479e-4bbf-9aa3-639fc288c8bc" volumeName="kubernetes.io/configmap/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-cni-binary-copy" seLinuxMountContext=""
Mar 12 18:13:19.676710 master-0 kubenswrapper[7337]: I0312 18:13:19.676704 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="455f0aad-add2-49d0-995c-f92467bce2d6" volumeName="kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-cni-binary-copy" seLinuxMountContext=""
Mar 12 18:13:19.677013 master-0 kubenswrapper[7337]: I0312 18:13:19.676722 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="455f0aad-add2-49d0-995c-f92467bce2d6" volumeName="kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-cni-sysctl-allowlist" seLinuxMountContext=""
Mar 12 18:13:19.677013 master-0 kubenswrapper[7337]: I0312 18:13:19.676739 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe" volumeName="kubernetes.io/configmap/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-ovnkube-identity-cm" seLinuxMountContext=""
Mar 12 18:13:19.677013 master-0 kubenswrapper[7337]: I0312 18:13:19.676761 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27" volumeName="kubernetes.io/projected/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-kube-api-access-ggsdx" seLinuxMountContext=""
Mar 12 18:13:19.677013 master-0 kubenswrapper[7337]: I0312 18:13:19.676776 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e94d098b-fbcc-4e85-b8ad-42f3a21c822c" volumeName="kubernetes.io/projected/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-kube-api-access-bttzm" seLinuxMountContext=""
Mar 12 18:13:19.677013 master-0 kubenswrapper[7337]: I0312 18:13:19.676807 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="062f1b21-2ffc-47da-8334-427c3b2a1a90" volumeName="kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-config" seLinuxMountContext=""
Mar 12 18:13:19.677013 master-0 kubenswrapper[7337]: I0312 18:13:19.676821 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="236f2886-bb69-49a7-9471-36454fd1cbd3" volumeName="kubernetes.io/secret/236f2886-bb69-49a7-9471-36454fd1cbd3-serving-cert" seLinuxMountContext=""
Mar 12 18:13:19.677013 master-0 kubenswrapper[7337]: I0312 18:13:19.676839 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8dd13a7-10e5-431b-8d30-405dcfea02f5" volumeName="kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-env-overrides" seLinuxMountContext=""
Mar 12 18:13:19.677013 master-0 kubenswrapper[7337]: I0312 18:13:19.676852 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e94d098b-fbcc-4e85-b8ad-42f3a21c822c" volumeName="kubernetes.io/configmap/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-telemetry-config" seLinuxMountContext=""
Mar 12 18:13:19.677013 master-0 kubenswrapper[7337]: I0312 18:13:19.676864 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="062f1b21-2ffc-47da-8334-427c3b2a1a90" volumeName="kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-trusted-ca-bundle" seLinuxMountContext=""
Mar 12 18:13:19.677013 master-0 kubenswrapper[7337]: I0312 18:13:19.676880 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37cd9c0a-697e-4e67-932b-b331ff77c8c0" volumeName="kubernetes.io/projected/37cd9c0a-697e-4e67-932b-b331ff77c8c0-kube-api-access-pfpb9" seLinuxMountContext=""
Mar 12 18:13:19.677013 master-0 kubenswrapper[7337]: I0312 18:13:19.676893 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64" volumeName="kubernetes.io/configmap/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-trusted-ca" seLinuxMountContext=""
Mar 12 18:13:19.677013 master-0 kubenswrapper[7337]: I0312 18:13:19.676914 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="51eb717b-d11f-4bc3-8df6-deb51d5889f3" volumeName="kubernetes.io/projected/51eb717b-d11f-4bc3-8df6-deb51d5889f3-kube-api-access-gbnx8" seLinuxMountContext=""
Mar 12 18:13:19.677013 master-0 kubenswrapper[7337]: I0312 18:13:19.676934 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74eb1407-de29-42e5-9e6c-ce1bec3a9d80" volumeName="kubernetes.io/projected/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-kube-api-access-b6ggc" seLinuxMountContext=""
Mar 12 18:13:19.677013 master-0 kubenswrapper[7337]: I0312 18:13:19.676948 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8dd13a7-10e5-431b-8d30-405dcfea02f5" volumeName="kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovnkube-config" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.676967 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e697746f-fb9e-4d10-ab61-33c68e62cc0d" volumeName="kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-config" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677108 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="223a548b-a3ad-40dd-82de-e3dbb7f3e4fa" volumeName="kubernetes.io/secret/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-serving-cert" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677133 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="236f2886-bb69-49a7-9471-36454fd1cbd3" volumeName="kubernetes.io/projected/236f2886-bb69-49a7-9471-36454fd1cbd3-kube-api-access-b6ggg" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677168 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64" volumeName="kubernetes.io/projected/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-kube-api-access-fdlxn" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677182 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="875bdfaa-b0a4-4412-a477-c962844e7057" volumeName="kubernetes.io/projected/875bdfaa-b0a4-4412-a477-c962844e7057-kube-api-access-l2skd" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677195 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1e2340b-ebca-40de-b1e0-8133999cd860" volumeName="kubernetes.io/secret/a1e2340b-ebca-40de-b1e0-8133999cd860-serving-cert" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677208 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab926874-9722-4e65-9084-27b2f9915450" volumeName="kubernetes.io/projected/ab926874-9722-4e65-9084-27b2f9915450-kube-api-access" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677217 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="236f2886-bb69-49a7-9471-36454fd1cbd3" volumeName="kubernetes.io/configmap/236f2886-bb69-49a7-9471-36454fd1cbd3-config" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677228 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37cd9c0a-697e-4e67-932b-b331ff77c8c0" volumeName="kubernetes.io/empty-dir/37cd9c0a-697e-4e67-932b-b331ff77c8c0-available-featuregates" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677237 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe" volumeName="kubernetes.io/projected/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-kube-api-access-tdlcw" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677246 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d4ae1240-e04e-48e9-88df-9f1a53508da7" volumeName="kubernetes.io/configmap/d4ae1240-e04e-48e9-88df-9f1a53508da7-config" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677258 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d4ae1240-e04e-48e9-88df-9f1a53508da7" volumeName="kubernetes.io/projected/d4ae1240-e04e-48e9-88df-9f1a53508da7-kube-api-access" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677269 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d92dddc8-a810-43f5-8beb-32d1c8ad8381" volumeName="kubernetes.io/projected/d92dddc8-a810-43f5-8beb-32d1c8ad8381-kube-api-access-l22gw" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677284 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74eb1407-de29-42e5-9e6c-ce1bec3a9d80" volumeName="kubernetes.io/secret/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677293 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1e2340b-ebca-40de-b1e0-8133999cd860" volumeName="kubernetes.io/projected/a1e2340b-ebca-40de-b1e0-8133999cd860-kube-api-access-6vpbp" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677302 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8dd13a7-10e5-431b-8d30-405dcfea02f5" volumeName="kubernetes.io/projected/b8dd13a7-10e5-431b-8d30-405dcfea02f5-kube-api-access-7rhmv" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677325 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e697746f-fb9e-4d10-ab61-33c68e62cc0d" volumeName="kubernetes.io/projected/e697746f-fb9e-4d10-ab61-33c68e62cc0d-kube-api-access-vct98" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677334 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33feec78-4592-4343-965b-aa1b7044fcf3" volumeName="kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677347 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f" volumeName="kubernetes.io/projected/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-kube-api-access-9jgbv" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677357 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8dd13a7-10e5-431b-8d30-405dcfea02f5" volumeName="kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovnkube-script-lib" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677366 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8dd13a7-10e5-431b-8d30-405dcfea02f5" volumeName="kubernetes.io/secret/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovn-node-metrics-cert" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677379 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e697746f-fb9e-4d10-ab61-33c68e62cc0d" volumeName="kubernetes.io/secret/e697746f-fb9e-4d10-ab61-33c68e62cc0d-serving-cert" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677391 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="062f1b21-2ffc-47da-8334-427c3b2a1a90" volumeName="kubernetes.io/projected/062f1b21-2ffc-47da-8334-427c3b2a1a90-kube-api-access-jn9nf" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677409 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d94dc349-c5cb-4f12-8e48-867030af4981" volumeName="kubernetes.io/projected/d94dc349-c5cb-4f12-8e48-867030af4981-bound-sa-token" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677422 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00755a4e-124c-4a51-b1c5-7c505b3637a8" volumeName="kubernetes.io/configmap/00755a4e-124c-4a51-b1c5-7c505b3637a8-service-ca" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677433 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="055f5c67-f512-4510-99c5-e194944b0599" volumeName="kubernetes.io/projected/055f5c67-f512-4510-99c5-e194944b0599-kube-api-access-tkt7d" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677455 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74eb1407-de29-42e5-9e6c-ce1bec3a9d80" volumeName="kubernetes.io/configmap/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-ovnkube-config" seLinuxMountContext=""
Mar 12 18:13:19.677484 master-0 kubenswrapper[7337]: I0312 18:13:19.677491 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ad05507-e242-4ff8-ae80-c16ff9ee68e2" volumeName="kubernetes.io/projected/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-kube-api-access-th8tc" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677506 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6d288e3-8e73-44d2-874d-64c6c98dd991" volumeName="kubernetes.io/secret/b6d288e3-8e73-44d2-874d-64c6c98dd991-metrics-tls" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677567 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e697746f-fb9e-4d10-ab61-33c68e62cc0d" volumeName="kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-ca" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677585 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="062f1b21-2ffc-47da-8334-427c3b2a1a90" volumeName="kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-service-ca-bundle" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677598 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed" volumeName="kubernetes.io/projected/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-kube-api-access-wlf77" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677613 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="38a4bf73-479e-4bbf-9aa3-639fc288c8bc" volumeName="kubernetes.io/projected/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-kube-api-access-2pn9h" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677656 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab926874-9722-4e65-9084-27b2f9915450" volumeName="kubernetes.io/secret/ab926874-9722-4e65-9084-27b2f9915450-serving-cert" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677668 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6d288e3-8e73-44d2-874d-64c6c98dd991" volumeName="kubernetes.io/projected/b6d288e3-8e73-44d2-874d-64c6c98dd991-kube-api-access-vdb9w" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677685 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1e2340b-ebca-40de-b1e0-8133999cd860" volumeName="kubernetes.io/configmap/a1e2340b-ebca-40de-b1e0-8133999cd860-config" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677698 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e720e1d0-5a6d-4b76-8b25-5963e24950f5" volumeName="kubernetes.io/configmap/e720e1d0-5a6d-4b76-8b25-5963e24950f5-config" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677716 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3a2cda2-b70f-4128-a1be-48503f5aad6d" volumeName="kubernetes.io/empty-dir/f3a2cda2-b70f-4128-a1be-48503f5aad6d-operand-assets" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677727 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3a2cda2-b70f-4128-a1be-48503f5aad6d" volumeName="kubernetes.io/projected/f3a2cda2-b70f-4128-a1be-48503f5aad6d-kube-api-access-tcvfv" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677739 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e22c7035-4b7a-48cb-9abb-db277b387842" volumeName="kubernetes.io/projected/e22c7035-4b7a-48cb-9abb-db277b387842-bound-sa-token" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677757 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="223a548b-a3ad-40dd-82de-e3dbb7f3e4fa" volumeName="kubernetes.io/projected/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-kube-api-access-s55hv" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677769 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="455f0aad-add2-49d0-995c-f92467bce2d6" volumeName="kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-whereabouts-configmap" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677803 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45aa4887-c913-4ece-ae34-fcde33832621" volumeName="kubernetes.io/projected/45aa4887-c913-4ece-ae34-fcde33832621-kube-api-access-4vr66" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677827 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d92dddc8-a810-43f5-8beb-32d1c8ad8381" volumeName="kubernetes.io/configmap/d92dddc8-a810-43f5-8beb-32d1c8ad8381-iptables-alerter-script" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677838 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d94dc349-c5cb-4f12-8e48-867030af4981" volumeName="kubernetes.io/projected/d94dc349-c5cb-4f12-8e48-867030af4981-kube-api-access-zjmcv" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677855 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e22c7035-4b7a-48cb-9abb-db277b387842" volumeName="kubernetes.io/configmap/e22c7035-4b7a-48cb-9abb-db277b387842-trusted-ca" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677868 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37cd9c0a-697e-4e67-932b-b331ff77c8c0" volumeName="kubernetes.io/secret/37cd9c0a-697e-4e67-932b-b331ff77c8c0-serving-cert" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677885 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe" volumeName="kubernetes.io/configmap/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-env-overrides" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677897 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab926874-9722-4e65-9084-27b2f9915450" volumeName="kubernetes.io/configmap/ab926874-9722-4e65-9084-27b2f9915450-config" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677909 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d4ae1240-e04e-48e9-88df-9f1a53508da7" volumeName="kubernetes.io/secret/d4ae1240-e04e-48e9-88df-9f1a53508da7-serving-cert" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677927 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e697746f-fb9e-4d10-ab61-33c68e62cc0d" volumeName="kubernetes.io/secret/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-client" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677943 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d94dc349-c5cb-4f12-8e48-867030af4981" volumeName="kubernetes.io/configmap/d94dc349-c5cb-4f12-8e48-867030af4981-trusted-ca" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677958 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e720e1d0-5a6d-4b76-8b25-5963e24950f5" volumeName="kubernetes.io/projected/e720e1d0-5a6d-4b76-8b25-5963e24950f5-kube-api-access" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677971 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00755a4e-124c-4a51-b1c5-7c505b3637a8" volumeName="kubernetes.io/projected/00755a4e-124c-4a51-b1c5-7c505b3637a8-kube-api-access" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.677994 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="055f5c67-f512-4510-99c5-e194944b0599" volumeName="kubernetes.io/configmap/055f5c67-f512-4510-99c5-e194944b0599-config" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.678012 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="062f1b21-2ffc-47da-8334-427c3b2a1a90" volumeName="kubernetes.io/secret/062f1b21-2ffc-47da-8334-427c3b2a1a90-serving-cert" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.678024 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="223a548b-a3ad-40dd-82de-e3dbb7f3e4fa" volumeName="kubernetes.io/configmap/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-config" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.678040 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="38a4bf73-479e-4bbf-9aa3-639fc288c8bc" volumeName="kubernetes.io/configmap/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-daemon-config" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.678052 7337 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe" volumeName="kubernetes.io/secret/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-webhook-cert" seLinuxMountContext=""
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.678064 7337 reconstruct.go:97] "Volume reconstruction finished"
Mar 12 18:13:19.678431 master-0 kubenswrapper[7337]: I0312 18:13:19.678072 7337 reconciler.go:26] "Reconciler: start to sync state"
Mar 12 18:13:19.683365 master-0 kubenswrapper[7337]: I0312 18:13:19.683324 7337 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Mar 12 18:13:19.719425 master-0 kubenswrapper[7337]: I0312 18:13:19.719362 7337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 12 18:13:19.721241 master-0 kubenswrapper[7337]: I0312 18:13:19.721188 7337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 12 18:13:19.721241 master-0 kubenswrapper[7337]: I0312 18:13:19.721245 7337 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 12 18:13:19.721362 master-0 kubenswrapper[7337]: I0312 18:13:19.721269 7337 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 12 18:13:19.721362 master-0 kubenswrapper[7337]: E0312 18:13:19.721318 7337 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 12 18:13:19.725877 master-0 kubenswrapper[7337]: I0312 18:13:19.725141 7337 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 12 18:13:19.731419 master-0 kubenswrapper[7337]: I0312 18:13:19.729878 7337 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="424ff1cd728e6aca964e1aafeb2eb3f61c869370919f825ab26a7330b62524f0" exitCode=0
Mar 12 18:13:19.733858 master-0 kubenswrapper[7337]: I0312 18:13:19.733816 7337 generic.go:334] "Generic (PLEG): container finished" podID="2a4a981c-9454-4e1f-951e-1a62737659cc" containerID="95a463de33fcdba00f135dbdd2f42b2c5b30584ee4c54c59c7552f930a4442bf" exitCode=0
Mar 12 18:13:19.742591 master-0 kubenswrapper[7337]: I0312 18:13:19.741691 7337 generic.go:334] "Generic (PLEG): container finished" podID="b8dd13a7-10e5-431b-8d30-405dcfea02f5" containerID="69a2563b13bb321b549ca470bba68e3784ff4506218240cbeb3734f424459804" exitCode=0
Mar 12 18:13:19.745798 master-0 kubenswrapper[7337]: I0312 18:13:19.745723 7337 generic.go:334] "Generic (PLEG): container finished" podID="455f0aad-add2-49d0-995c-f92467bce2d6" containerID="c629217e0646c42efab7b6831a82c134d4897e205bc3cb7b99ec2b82209a7725" exitCode=0
Mar 12 18:13:19.745798 master-0 kubenswrapper[7337]: I0312 18:13:19.745762 7337 generic.go:334] "Generic (PLEG): container finished" podID="455f0aad-add2-49d0-995c-f92467bce2d6" containerID="0c7342da7ff90812cfb607510698f6f5025811001aa1d822318142b6a574472a" exitCode=0
Mar 12 18:13:19.745798 master-0 kubenswrapper[7337]: I0312 18:13:19.745773 7337 generic.go:334] "Generic (PLEG): container finished" podID="455f0aad-add2-49d0-995c-f92467bce2d6" containerID="47d87208497022a24111ccaca14cfa76489b3e3c8d2e4baeec44eed1ec3639c0" exitCode=0
Mar 12 18:13:19.745798 master-0 kubenswrapper[7337]: I0312 18:13:19.745784 7337 generic.go:334] "Generic (PLEG): container finished" podID="455f0aad-add2-49d0-995c-f92467bce2d6" containerID="6f8ff60199929b0a4e5f12c0833311ee92d8752831cac14e7f6e3610c7c482cd" exitCode=0
Mar 12 18:13:19.745798 master-0 kubenswrapper[7337]: I0312 18:13:19.745793 7337 generic.go:334] "Generic (PLEG): container finished" podID="455f0aad-add2-49d0-995c-f92467bce2d6" containerID="a06bfc83f83320e9affd2425dbf28da14fdf99e08ecffd8df981975c0ab701b1" exitCode=0
Mar 12 18:13:19.745798 master-0 kubenswrapper[7337]: I0312 18:13:19.745801 7337 generic.go:334] "Generic (PLEG): container finished" podID="455f0aad-add2-49d0-995c-f92467bce2d6" containerID="f1a76c40be6adf4508f866e6663729add45233d4cc201334c0c921cf2c117caa" exitCode=0
Mar 12 18:13:19.751003 master-0 kubenswrapper[7337]: I0312 18:13:19.750958 7337 generic.go:334] "Generic (PLEG): container finished" podID="43fdaf13-ffc1-4787-8dd2-90d0685b3124" containerID="87d66e1cc29893f39e111a5a2a21953d603c0527dd13bddf2486860762147978" exitCode=0
Mar 12 18:13:19.768275 master-0 kubenswrapper[7337]: I0312 18:13:19.768214 7337 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="d26315a3bde904b11ba5d9d409301a02b1633a540a8d5ec716c4b45d6b097f49" exitCode=1
Mar 12 18:13:19.771329 master-0 kubenswrapper[7337]: I0312 18:13:19.771275 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/3.log"
Mar 12 18:13:19.771800 master-0 kubenswrapper[7337]: I0312 18:13:19.771764 7337 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="f4576abbe7a90b1b20fca15403fa958348685c536308da122d58a74078454e59" exitCode=1
Mar 12 18:13:19.771800 master-0 kubenswrapper[7337]: I0312 18:13:19.771794 7337 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="cfb72c7fed7776f25cec78c2b1f068a79f1aca1d681c7f12e196e29d22a04486" exitCode=0
Mar 12 18:13:19.792940 master-0 kubenswrapper[7337]: I0312 18:13:19.792916 7337 manager.go:324] Recovery completed
Mar 12 18:13:19.821491 master-0 kubenswrapper[7337]: E0312 18:13:19.821438 7337 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 12 18:13:19.824862 master-0 kubenswrapper[7337]: I0312 18:13:19.824839 7337 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 12 18:13:19.824862 master-0 kubenswrapper[7337]: I0312 18:13:19.824859 7337 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 12 18:13:19.824960 master-0 kubenswrapper[7337]: I0312 18:13:19.824877 7337 state_mem.go:36] "Initialized new in-memory state store"
Mar 12 18:13:19.825063 master-0 kubenswrapper[7337]: I0312 18:13:19.825039 7337 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 12 18:13:19.825101 master-0 kubenswrapper[7337]: I0312 18:13:19.825073 7337 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 12 18:13:19.825101 master-0 kubenswrapper[7337]: I0312 18:13:19.825095 7337 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Mar 12 18:13:19.825101 master-0 kubenswrapper[7337]: I0312 18:13:19.825101 7337 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Mar 12 18:13:19.825179 master-0 kubenswrapper[7337]: I0312 18:13:19.825108 7337 policy_none.go:49] "None policy: Start"
Mar 12 18:13:19.826459 master-0 kubenswrapper[7337]: I0312 18:13:19.826434 7337 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 12 18:13:19.826525 master-0 kubenswrapper[7337]: I0312 18:13:19.826475 7337 state_mem.go:35] "Initializing new in-memory state store"
Mar 12 18:13:19.826701 master-0 kubenswrapper[7337]: I0312 18:13:19.826685 7337 state_mem.go:75] "Updated machine memory state"
Mar 12 18:13:19.826701 master-0 kubenswrapper[7337]: I0312 18:13:19.826699 7337 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Mar 12 18:13:19.835219 master-0 kubenswrapper[7337]: I0312 18:13:19.835199 7337 manager.go:334] "Starting Device Plugin manager"
Mar 12 18:13:19.835293 master-0 kubenswrapper[7337]: I0312 18:13:19.835239 7337 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 12 18:13:19.835293 master-0 kubenswrapper[7337]: I0312 18:13:19.835252 7337 server.go:79] "Starting device plugin registration server"
Mar 12 18:13:19.835714 master-0 kubenswrapper[7337]: I0312 18:13:19.835672 7337 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 12 18:13:19.835714 master-0 kubenswrapper[7337]: I0312 18:13:19.835689 7337 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 12 18:13:19.835893 master-0 kubenswrapper[7337]: I0312 18:13:19.835858 7337 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 12 18:13:19.835980 master-0 kubenswrapper[7337]: I0312 18:13:19.835932 7337 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 12 18:13:19.835980 master-0 kubenswrapper[7337]: I0312 18:13:19.835944 7337 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 12 18:13:19.936696 master-0 kubenswrapper[7337]: I0312 18:13:19.936496 7337 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:13:19.937779 master-0 kubenswrapper[7337]: I0312 18:13:19.937752 7337 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:13:19.937823 master-0 kubenswrapper[7337]: I0312 18:13:19.937790 7337 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:13:19.937823 master-0 kubenswrapper[7337]: I0312 18:13:19.937801 7337 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:13:19.937974 master-0 kubenswrapper[7337]: I0312 18:13:19.937859 7337 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 18:13:19.952944 master-0 kubenswrapper[7337]: I0312 18:13:19.952909 7337 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Mar 12 18:13:19.953030 master-0 kubenswrapper[7337]: I0312 18:13:19.952998 7337 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 12 18:13:20.022585 master-0 kubenswrapper[7337]: I0312 18:13:20.022483 7337 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"]
Mar 12 18:13:20.022935 master-0 kubenswrapper[7337]: I0312 18:13:20.022882 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"1b41120246139f832c6fce447150fed26bcd9a47dc2f49808aa8f04449aadbb6"}
Mar 12 18:13:20.022994 master-0 kubenswrapper[7337]: I0312 18:13:20.022933 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"c0a8d4431acf000c36d5a8e20b8fbea835bbdf1fd7c8e5eab3ca1097edb9bbb4"}
Mar 12 18:13:20.022994 master-0 kubenswrapper[7337]: I0312 18:13:20.022943 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerDied","Data":"424ff1cd728e6aca964e1aafeb2eb3f61c869370919f825ab26a7330b62524f0"}
Mar 12 18:13:20.022994 master-0 kubenswrapper[7337]: I0312 18:13:20.022954 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"c5d8b743e37e43da0e4ff17c103781e7e406b75fffac74b32a3f5490d58a4481"}
Mar 12 18:13:20.022994 master-0 kubenswrapper[7337]: I0312 18:13:20.022964 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"5528dd5a4e01f059b975dba420b1e42f7d38d29b15865763037c06d90dade8e7"}
Mar 12 18:13:20.022994 master-0 kubenswrapper[7337]: I0312 18:13:20.022972 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"b928346cccd3250ecd2b93ee414bcede71a1e719c241a242b6ebcdc2c000ed85"}
Mar 12 18:13:20.022994 master-0 kubenswrapper[7337]: I0312 18:13:20.022980 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"64955db5addf9b64f24ce95166a3106b2564db66c223ef67752e78909dc304ef"}
Mar 12 18:13:20.022994 master-0 kubenswrapper[7337]: I0312 18:13:20.022992 7337 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ac2e44511feb89f0eb8641549dc02db57c6e394fd3b40bd40da8f07b13abdb2"
Mar 12 18:13:20.023172 master-0 kubenswrapper[7337]: I0312 18:13:20.023002 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"7905195025f5bcb8de89213d12f95430c8990ba81078908548bc95e6c97e2325"}
Mar 12 18:13:20.023172 master-0 kubenswrapper[7337]: I0312 18:13:20.023010 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"91bbe32b85272f0f9f735ba2a67b1085ea37b3016231eb6d6938a08eed1a3b9d"}
Mar 12 18:13:20.023172 master-0 kubenswrapper[7337]: I0312 18:13:20.023050 7337 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0eba8d5c0b0f8386f8e7b0fcfb1805cff55a3c43911a4889778f4fec45d35584"
Mar 12 18:13:20.023172 master-0 kubenswrapper[7337]: I0312 18:13:20.023059 7337 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b6e13f8214d025f7992cbcd205cec35aba4e97b520d7e8f9e3e7a6bca8ac41d"
Mar 12 18:13:20.023172 master-0 kubenswrapper[7337]: I0312 18:13:20.023075 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"95d38f1431066968104bb51ab17f5c680fc28063e6ba5f01ad252c4fc619c1e1"}
Mar 12 18:13:20.023172 master-0 kubenswrapper[7337]: I0312 18:13:20.023083 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"46264a9af82e695b6d4dff6d1f55cd807b8535bd3965ef4182b223f7b1e2e6f6"}
Mar 12 18:13:20.023172 master-0 kubenswrapper[7337]: I0312 18:13:20.023091 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"d26315a3bde904b11ba5d9d409301a02b1633a540a8d5ec716c4b45d6b097f49"}
Mar 12 18:13:20.023172 master-0 kubenswrapper[7337]: I0312 18:13:20.023099 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"e54cdbac1902d131e189d00f45d878f018504994bb1378cf2e83bf1f9b2a651b"}
Mar 12 18:13:20.023172 master-0 kubenswrapper[7337]: I0312 18:13:20.023107 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"93aef700d51857dffd379b4fa5c63e9358523baf23deed5a0f436de9a4c7c7b1"}
Mar 12 18:13:20.023172 master-0 kubenswrapper[7337]: I0312 18:13:20.023117 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"f4576abbe7a90b1b20fca15403fa958348685c536308da122d58a74078454e59"}
Mar 12 18:13:20.023172 master-0 kubenswrapper[7337]: I0312 18:13:20.023127 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"cfb72c7fed7776f25cec78c2b1f068a79f1aca1d681c7f12e196e29d22a04486"}
Mar 12 18:13:20.023172 master-0 kubenswrapper[7337]: I0312 18:13:20.023136 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"427d9fb4da53770ccc67c12bc3aaae3e507cd75207365e40c1611eeaaf274b84"} Mar 12 18:13:20.038268 master-0 kubenswrapper[7337]: E0312 18:13:20.038224 7337 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 12 18:13:20.038412 master-0 kubenswrapper[7337]: W0312 18:13:20.038287 7337 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 12 18:13:20.038412 master-0 kubenswrapper[7337]: E0312 18:13:20.038381 7337 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Mar 12 18:13:20.038641 master-0 kubenswrapper[7337]: E0312 18:13:20.038477 7337 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:13:20.038694 master-0 kubenswrapper[7337]: E0312 18:13:20.038660 7337 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:20.038828 master-0 kubenswrapper[7337]: E0312 18:13:20.038807 7337 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 18:13:20.085181 master-0 kubenswrapper[7337]: I0312 18:13:20.085123 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:20.085181 master-0 kubenswrapper[7337]: I0312 18:13:20.085173 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:13:20.085385 master-0 kubenswrapper[7337]: I0312 18:13:20.085218 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 12 18:13:20.085385 master-0 kubenswrapper[7337]: I0312 18:13:20.085239 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 12 18:13:20.085385 
master-0 kubenswrapper[7337]: I0312 18:13:20.085261 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:20.085385 master-0 kubenswrapper[7337]: I0312 18:13:20.085277 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:20.085385 master-0 kubenswrapper[7337]: I0312 18:13:20.085290 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:20.085385 master-0 kubenswrapper[7337]: I0312 18:13:20.085306 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:13:20.085385 master-0 kubenswrapper[7337]: I0312 18:13:20.085327 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod 
\"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:20.085385 master-0 kubenswrapper[7337]: I0312 18:13:20.085341 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:20.085385 master-0 kubenswrapper[7337]: I0312 18:13:20.085357 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:13:20.085385 master-0 kubenswrapper[7337]: I0312 18:13:20.085371 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:13:20.085385 master-0 kubenswrapper[7337]: I0312 18:13:20.085385 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:13:20.085811 master-0 kubenswrapper[7337]: I0312 18:13:20.085402 7337 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 12 18:13:20.085811 master-0 kubenswrapper[7337]: I0312 18:13:20.085418 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 18:13:20.085811 master-0 kubenswrapper[7337]: I0312 18:13:20.085433 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 18:13:20.085811 master-0 kubenswrapper[7337]: I0312 18:13:20.085449 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 12 18:13:20.185932 master-0 kubenswrapper[7337]: I0312 18:13:20.185813 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:13:20.185932 master-0 
kubenswrapper[7337]: I0312 18:13:20.185857 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:13:20.185932 master-0 kubenswrapper[7337]: I0312 18:13:20.185883 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 12 18:13:20.185932 master-0 kubenswrapper[7337]: I0312 18:13:20.185899 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 18:13:20.186196 master-0 kubenswrapper[7337]: I0312 18:13:20.185987 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:13:20.186196 master-0 kubenswrapper[7337]: I0312 18:13:20.186076 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 18:13:20.186196 master-0 kubenswrapper[7337]: I0312 18:13:20.186088 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 18:13:20.186196 master-0 kubenswrapper[7337]: I0312 18:13:20.186121 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:13:20.186196 master-0 kubenswrapper[7337]: I0312 18:13:20.186104 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 12 18:13:20.186196 master-0 kubenswrapper[7337]: I0312 18:13:20.186159 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 18:13:20.186348 master-0 kubenswrapper[7337]: I0312 18:13:20.186206 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: 
\"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 12 18:13:20.186348 master-0 kubenswrapper[7337]: I0312 18:13:20.186227 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:20.186348 master-0 kubenswrapper[7337]: I0312 18:13:20.186241 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:13:20.186348 master-0 kubenswrapper[7337]: I0312 18:13:20.186255 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 12 18:13:20.186348 master-0 kubenswrapper[7337]: I0312 18:13:20.186268 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 12 18:13:20.186348 master-0 kubenswrapper[7337]: I0312 18:13:20.186297 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " 
pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 12 18:13:20.186348 master-0 kubenswrapper[7337]: I0312 18:13:20.186311 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 12 18:13:20.186348 master-0 kubenswrapper[7337]: I0312 18:13:20.186345 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:20.186569 master-0 kubenswrapper[7337]: I0312 18:13:20.186363 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:20.186569 master-0 kubenswrapper[7337]: I0312 18:13:20.186378 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:20.186569 master-0 kubenswrapper[7337]: I0312 18:13:20.186394 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:20.186569 master-0 kubenswrapper[7337]: I0312 18:13:20.186395 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:13:20.186569 master-0 kubenswrapper[7337]: I0312 18:13:20.186412 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:13:20.186569 master-0 kubenswrapper[7337]: I0312 18:13:20.186418 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:20.186569 master-0 kubenswrapper[7337]: I0312 18:13:20.186442 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:20.186569 master-0 kubenswrapper[7337]: I0312 18:13:20.186446 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod 
\"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:13:20.186569 master-0 kubenswrapper[7337]: I0312 18:13:20.186464 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:20.186569 master-0 kubenswrapper[7337]: I0312 18:13:20.186468 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 12 18:13:20.186569 master-0 kubenswrapper[7337]: I0312 18:13:20.186485 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:13:20.186569 master-0 kubenswrapper[7337]: I0312 18:13:20.186488 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:20.186569 master-0 kubenswrapper[7337]: I0312 18:13:20.186504 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod 
\"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:20.186569 master-0 kubenswrapper[7337]: I0312 18:13:20.186551 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:20.186569 master-0 kubenswrapper[7337]: I0312 18:13:20.186565 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:20.186944 master-0 kubenswrapper[7337]: I0312 18:13:20.186580 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:13:20.656134 master-0 kubenswrapper[7337]: I0312 18:13:20.656053 7337 apiserver.go:52] "Watching apiserver" Mar 12 18:13:20.665101 master-0 kubenswrapper[7337]: I0312 18:13:20.665028 7337 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 12 18:13:20.666409 master-0 kubenswrapper[7337]: I0312 18:13:20.666340 7337 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-dns-operator/dns-operator-589895fbb7-jqj5k","openshift-marketplace/marketplace-operator-64bf9778cb-clkx5","openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn","openshift-multus/network-metrics-daemon-z4sc9","openshift-network-node-identity/network-node-identity-hqrqt","openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7","assisted-installer/assisted-installer-controller-g257x","openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-649db","openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx","openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j","openshift-multus/multus-additional-cni-plugins-lv8hk","openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl","openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp","openshift-network-diagnostics/network-check-target-cpthp","openshift-network-operator/network-operator-7c649bf6d4-vksss","openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s","openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9","kube-system/bootstrap-kube-controller-manager-master-0","openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4","openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp","openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c","openshift-multus/multus-admission-controller-8d675b596-kcpg5","openshift-network-operator/iptables-alerter-4k8wm","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r","kube-system/bootstrap-kube-scheduler-master-0","openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz","openshift-etcd/etcd-master-0-master-0","openshift-image-reg
istry/cluster-image-registry-operator-86d6d77c7c-l4krq","openshift-ingress-operator/ingress-operator-677db989d6-4527l","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-ovn-kubernetes/ovnkube-node-hx8q8","openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98","openshift-multus/multus-656l8"]
Mar 12 18:13:20.666934 master-0 kubenswrapper[7337]: I0312 18:13:20.666607 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-g257x"
Mar 12 18:13:20.666934 master-0 kubenswrapper[7337]: I0312 18:13:20.666772 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r"
Mar 12 18:13:20.666934 master-0 kubenswrapper[7337]: I0312 18:13:20.666837 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l"
Mar 12 18:13:20.666934 master-0 kubenswrapper[7337]: I0312 18:13:20.666839 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5"
Mar 12 18:13:20.670615 master-0 kubenswrapper[7337]: I0312 18:13:20.667023 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c"
Mar 12 18:13:20.670615 master-0 kubenswrapper[7337]: I0312 18:13:20.668417 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 12 18:13:20.672693 master-0 kubenswrapper[7337]: I0312 18:13:20.672320 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 12 18:13:20.672693 master-0 kubenswrapper[7337]: I0312 18:13:20.672422 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 12 18:13:20.672693 master-0 kubenswrapper[7337]: I0312 18:13:20.672510 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 12 18:13:20.673075 master-0 kubenswrapper[7337]: I0312 18:13:20.672771 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 12 18:13:20.673075 master-0 kubenswrapper[7337]: I0312 18:13:20.672770 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 12 18:13:20.673075 master-0 kubenswrapper[7337]: I0312 18:13:20.672968 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 12 18:13:20.676162 master-0 kubenswrapper[7337]: I0312 18:13:20.673366 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7"
Mar 12 18:13:20.676162 master-0 kubenswrapper[7337]: I0312 18:13:20.673427 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s"
Mar 12 18:13:20.676162 master-0 kubenswrapper[7337]: I0312 18:13:20.673436 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 12 18:13:20.676162 master-0 kubenswrapper[7337]: I0312 18:13:20.673730 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 12 18:13:20.676162 master-0 kubenswrapper[7337]: I0312 18:13:20.674741 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp"
Mar 12 18:13:20.676162 master-0 kubenswrapper[7337]: I0312 18:13:20.675758 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 12 18:13:20.678661 master-0 kubenswrapper[7337]: I0312 18:13:20.677008 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 12 18:13:20.678661 master-0 kubenswrapper[7337]: I0312 18:13:20.677090 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 12 18:13:20.678661 master-0 kubenswrapper[7337]: I0312 18:13:20.677936 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 12 18:13:20.678661 master-0 kubenswrapper[7337]: I0312 18:13:20.678388 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 12 18:13:20.678661 master-0 kubenswrapper[7337]: I0312 18:13:20.678766 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 12 18:13:20.678661 master-0 kubenswrapper[7337]: I0312 18:13:20.678791 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 12 18:13:20.678661 master-0 kubenswrapper[7337]: I0312 18:13:20.679190 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq"
Mar 12 18:13:20.682001 master-0 kubenswrapper[7337]: I0312 18:13:20.679824 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k"
Mar 12 18:13:20.682001 master-0 kubenswrapper[7337]: I0312 18:13:20.679919 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx"
Mar 12 18:13:20.682001 master-0 kubenswrapper[7337]: I0312 18:13:20.680559 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 12 18:13:20.682001 master-0 kubenswrapper[7337]: I0312 18:13:20.680890 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 12 18:13:20.682001 master-0 kubenswrapper[7337]: I0312 18:13:20.680912 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:13:20.682001 master-0 kubenswrapper[7337]: I0312 18:13:20.681677 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 12 18:13:20.682001 master-0 kubenswrapper[7337]: I0312 18:13:20.681722 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 12 18:13:20.682001 master-0 kubenswrapper[7337]: I0312 18:13:20.681743 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5"
Mar 12 18:13:20.682001 master-0 kubenswrapper[7337]: I0312 18:13:20.681771 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 12 18:13:20.682001 master-0 kubenswrapper[7337]: I0312 18:13:20.681930 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 12 18:13:20.682367 master-0 kubenswrapper[7337]: I0312 18:13:20.682093 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 12 18:13:20.682367 master-0 kubenswrapper[7337]: I0312 18:13:20.682136 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 12 18:13:20.682367 master-0 kubenswrapper[7337]: I0312 18:13:20.682095 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 12 18:13:20.682367 master-0 kubenswrapper[7337]: I0312 18:13:20.682167 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp"
Mar 12 18:13:20.684889 master-0 kubenswrapper[7337]: I0312 18:13:20.682478 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 12 18:13:20.684889 master-0 kubenswrapper[7337]: I0312 18:13:20.683342 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 12 18:13:20.686767 master-0 kubenswrapper[7337]: I0312 18:13:20.686735 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 12 18:13:20.688209 master-0 kubenswrapper[7337]: I0312 18:13:20.687016 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 12 18:13:20.688209 master-0 kubenswrapper[7337]: I0312 18:13:20.687403 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 12 18:13:20.688209 master-0 kubenswrapper[7337]: I0312 18:13:20.687640 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 12 18:13:20.688209 master-0 kubenswrapper[7337]: I0312 18:13:20.688128 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 12 18:13:20.689977 master-0 kubenswrapper[7337]: I0312 18:13:20.689948 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7"
Mar 12 18:13:20.690046 master-0 kubenswrapper[7337]: I0312 18:13:20.689982 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c"
Mar 12 18:13:20.690046 master-0 kubenswrapper[7337]: I0312 18:13:20.690011 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d94dc349-c5cb-4f12-8e48-867030af4981-trusted-ca\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l"
Mar 12 18:13:20.690046 master-0 kubenswrapper[7337]: I0312 18:13:20.690033 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjmcv\" (UniqueName: \"kubernetes.io/projected/d94dc349-c5cb-4f12-8e48-867030af4981-kube-api-access-zjmcv\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l"
Mar 12 18:13:20.690135 master-0 kubenswrapper[7337]: I0312 18:13:20.690056 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggsdx\" (UniqueName: \"kubernetes.io/projected/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-kube-api-access-ggsdx\") pod \"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r"
Mar 12 18:13:20.690135 master-0 kubenswrapper[7337]: I0312 18:13:20.690075 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l"
Mar 12 18:13:20.690135 master-0 kubenswrapper[7337]: I0312 18:13:20.690093 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d94dc349-c5cb-4f12-8e48-867030af4981-bound-sa-token\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l"
Mar 12 18:13:20.690135 master-0 kubenswrapper[7337]: I0312 18:13:20.690109 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4ae1240-e04e-48e9-88df-9f1a53508da7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpb6k\" (UID: \"d4ae1240-e04e-48e9-88df-9f1a53508da7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k"
Mar 12 18:13:20.690135 master-0 kubenswrapper[7337]: I0312 18:13:20.690124 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert\") pod \"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r"
Mar 12 18:13:20.690274 master-0 kubenswrapper[7337]: I0312 18:13:20.690154 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krrkl\" (UniqueName: \"kubernetes.io/projected/47850839-bb4b-41e9-ac31-f1cabbb4926d-kube-api-access-krrkl\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7"
Mar 12 18:13:20.690274 master-0 kubenswrapper[7337]: I0312 18:13:20.690172 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5"
Mar 12 18:13:20.690274 master-0 kubenswrapper[7337]: I0312 18:13:20.690191 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ae1240-e04e-48e9-88df-9f1a53508da7-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpb6k\" (UID: \"d4ae1240-e04e-48e9-88df-9f1a53508da7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k"
Mar 12 18:13:20.690274 master-0 kubenswrapper[7337]: I0312 18:13:20.690209 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkt7d\" (UniqueName: \"kubernetes.io/projected/055f5c67-f512-4510-99c5-e194944b0599-kube-api-access-tkt7d\") pod \"service-ca-operator-69b6fc6b88-9xlgl\" (UID: \"055f5c67-f512-4510-99c5-e194944b0599\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl"
Mar 12 18:13:20.690274 master-0 kubenswrapper[7337]: I0312 18:13:20.690226 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdlxn\" (UniqueName: \"kubernetes.io/projected/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-kube-api-access-fdlxn\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5"
Mar 12 18:13:20.690274 master-0 kubenswrapper[7337]: I0312 18:13:20.690244 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/055f5c67-f512-4510-99c5-e194944b0599-config\") pod \"service-ca-operator-69b6fc6b88-9xlgl\" (UID: \"055f5c67-f512-4510-99c5-e194944b0599\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl"
Mar 12 18:13:20.690274 master-0 kubenswrapper[7337]: I0312 18:13:20.690262 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s"
Mar 12 18:13:20.690445 master-0 kubenswrapper[7337]: I0312 18:13:20.690287 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbnx8\" (UniqueName: \"kubernetes.io/projected/51eb717b-d11f-4bc3-8df6-deb51d5889f3-kube-api-access-gbnx8\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s"
Mar 12 18:13:20.690445 master-0 kubenswrapper[7337]: I0312 18:13:20.690314 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c"
Mar 12 18:13:20.690445 master-0 kubenswrapper[7337]: I0312 18:13:20.690333 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bttzm\" (UniqueName: \"kubernetes.io/projected/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-kube-api-access-bttzm\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c"
Mar 12 18:13:20.690445 master-0 kubenswrapper[7337]: I0312 18:13:20.690350 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ae1240-e04e-48e9-88df-9f1a53508da7-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpb6k\" (UID: \"d4ae1240-e04e-48e9-88df-9f1a53508da7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k"
Mar 12 18:13:20.690445 master-0 kubenswrapper[7337]: I0312 18:13:20.690366 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/055f5c67-f512-4510-99c5-e194944b0599-serving-cert\") pod \"service-ca-operator-69b6fc6b88-9xlgl\" (UID: \"055f5c67-f512-4510-99c5-e194944b0599\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl"
Mar 12 18:13:20.690445 master-0 kubenswrapper[7337]: I0312 18:13:20.690383 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5"
Mar 12 18:13:20.691277 master-0 kubenswrapper[7337]: I0312 18:13:20.690861 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d94dc349-c5cb-4f12-8e48-867030af4981-trusted-ca\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l"
Mar 12 18:13:20.691277 master-0 kubenswrapper[7337]: I0312 18:13:20.690885 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ae1240-e04e-48e9-88df-9f1a53508da7-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpb6k\" (UID: \"d4ae1240-e04e-48e9-88df-9f1a53508da7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k"
Mar 12 18:13:20.691277 master-0 kubenswrapper[7337]: I0312 18:13:20.691205 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/055f5c67-f512-4510-99c5-e194944b0599-config\") pod \"service-ca-operator-69b6fc6b88-9xlgl\" (UID: \"055f5c67-f512-4510-99c5-e194944b0599\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl"
Mar 12 18:13:20.698012 master-0 kubenswrapper[7337]: I0312 18:13:20.697960 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5"
Mar 12 18:13:20.698012 master-0 kubenswrapper[7337]: I0312 18:13:20.698000 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c"
Mar 12 18:13:20.698235 master-0 kubenswrapper[7337]: I0312 18:13:20.698205 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ae1240-e04e-48e9-88df-9f1a53508da7-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpb6k\" (UID: \"d4ae1240-e04e-48e9-88df-9f1a53508da7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k"
Mar 12 18:13:20.698287 master-0 kubenswrapper[7337]: I0312 18:13:20.698263 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/055f5c67-f512-4510-99c5-e194944b0599-serving-cert\") pod \"service-ca-operator-69b6fc6b88-9xlgl\" (UID: \"055f5c67-f512-4510-99c5-e194944b0599\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl"
Mar 12 18:13:20.698472 master-0 kubenswrapper[7337]: I0312 18:13:20.698450 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 12 18:13:20.698565 master-0 kubenswrapper[7337]: I0312 18:13:20.698539 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 12 18:13:20.698868 master-0 kubenswrapper[7337]: I0312 18:13:20.698834 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 12 18:13:20.698953 master-0 kubenswrapper[7337]: I0312 18:13:20.698929 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 12 18:13:20.700741 master-0 kubenswrapper[7337]: I0312 18:13:20.700717 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 12 18:13:20.700841 master-0 kubenswrapper[7337]: I0312 18:13:20.700820 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 12 18:13:20.701486 master-0 kubenswrapper[7337]: I0312 18:13:20.701465 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 12 18:13:20.701750 master-0 kubenswrapper[7337]: I0312 18:13:20.701723 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 12 18:13:20.702004 master-0 kubenswrapper[7337]: I0312 18:13:20.701984 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 12 18:13:20.702233 master-0 kubenswrapper[7337]: I0312 18:13:20.702210 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 12 18:13:20.702449 master-0 kubenswrapper[7337]: I0312 18:13:20.702426 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 12 18:13:20.702506 master-0 kubenswrapper[7337]: I0312 18:13:20.702495 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 12 18:13:20.702604 master-0 kubenswrapper[7337]: I0312 18:13:20.702581 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 12 18:13:20.702691 master-0 kubenswrapper[7337]: I0312 18:13:20.702432 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 12 18:13:20.702903 master-0 kubenswrapper[7337]: I0312 18:13:20.702859 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 12 18:13:20.702994 master-0 kubenswrapper[7337]: I0312 18:13:20.702971 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 12 18:13:20.703126 master-0 kubenswrapper[7337]: I0312 18:13:20.703104 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 12 18:13:20.703174 master-0 kubenswrapper[7337]: I0312 18:13:20.703136 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 12 18:13:20.703309 master-0 kubenswrapper[7337]: I0312 18:13:20.703291 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 12 18:13:20.703696 master-0 kubenswrapper[7337]: I0312 18:13:20.703669 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 12 18:13:20.703696 master-0 kubenswrapper[7337]: I0312 18:13:20.703689 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 12 18:13:20.703842 master-0 kubenswrapper[7337]: I0312 18:13:20.703761 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 12 18:13:20.703842 master-0 kubenswrapper[7337]: I0312 18:13:20.703835 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 12 18:13:20.703898 master-0 kubenswrapper[7337]: I0312 18:13:20.703884 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 12 18:13:20.704219 master-0 kubenswrapper[7337]: I0312 18:13:20.704205 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 12 18:13:20.705203 master-0 kubenswrapper[7337]: I0312 18:13:20.704333 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 12 18:13:20.705273 master-0 kubenswrapper[7337]: I0312 18:13:20.704362 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 12 18:13:20.705317 master-0 kubenswrapper[7337]: I0312 18:13:20.704372 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 12 18:13:20.705317 master-0 kubenswrapper[7337]: I0312 18:13:20.704408 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 12 18:13:20.705388 master-0 kubenswrapper[7337]: I0312 18:13:20.704435 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 12 18:13:20.705388 master-0 kubenswrapper[7337]: I0312 18:13:20.704466 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 12 18:13:20.705388 master-0 kubenswrapper[7337]: I0312 18:13:20.704493 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 12 18:13:20.705492 master-0 kubenswrapper[7337]: I0312 18:13:20.704546 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 12 18:13:20.705492 master-0 kubenswrapper[7337]: I0312 18:13:20.704593 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 12 18:13:20.705492 master-0 kubenswrapper[7337]: I0312 18:13:20.704623 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 12 18:13:20.705581 master-0 kubenswrapper[7337]: I0312 18:13:20.704637 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 12 18:13:20.705581 master-0 kubenswrapper[7337]: I0312 18:13:20.704636 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 12 18:13:20.705628 master-0 kubenswrapper[7337]: I0312 18:13:20.704664 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 12 18:13:20.705661 master-0 kubenswrapper[7337]: I0312 18:13:20.704673 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 12 18:13:20.705718 master-0 kubenswrapper[7337]: I0312 18:13:20.704689 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 12 18:13:20.705756 master-0 kubenswrapper[7337]: I0312 18:13:20.704703 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 12 18:13:20.705782 master-0 kubenswrapper[7337]: I0312 18:13:20.704715 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 12 18:13:20.705811 master-0 kubenswrapper[7337]: I0312 18:13:20.704750 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 12 18:13:20.705853 master-0 kubenswrapper[7337]: I0312 18:13:20.704755 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 12 18:13:20.705908 master-0 kubenswrapper[7337]: I0312 18:13:20.704758 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 12 18:13:20.705944 master-0 kubenswrapper[7337]: I0312 18:13:20.704786 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 12 18:13:20.705970 master-0 kubenswrapper[7337]: I0312 18:13:20.704788 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 12 18:13:20.706006 master-0 kubenswrapper[7337]: I0312 18:13:20.704852 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 12 18:13:20.706038 master-0 kubenswrapper[7337]: I0312 18:13:20.704890 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 12 18:13:20.706081 master-0 kubenswrapper[7337]: I0312 18:13:20.704896 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 12 18:13:20.706158 master-0 kubenswrapper[7337]: I0312 18:13:20.704917 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 12 18:13:20.706249 master-0 kubenswrapper[7337]: I0312 18:13:20.704938 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 12 18:13:20.706298 master-0 kubenswrapper[7337]: I0312 18:13:20.704961 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 12 18:13:20.706484 master-0 kubenswrapper[7337]: I0312 18:13:20.704990 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 12 18:13:20.706484 master-0 kubenswrapper[7337]: I0312 18:13:20.705006 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 12 18:13:20.706484 master-0 kubenswrapper[7337]: I0312 18:13:20.705036 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 12 18:13:20.706484 master-0 kubenswrapper[7337]: I0312 18:13:20.705068 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Mar 12 18:13:20.706484 master-0 kubenswrapper[7337]: I0312 18:13:20.705067 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 12 18:13:20.707314 master-0 kubenswrapper[7337]: I0312 18:13:20.707294 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 12 18:13:20.707692 master-0 kubenswrapper[7337]: I0312 18:13:20.707373 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 12 18:13:20.707692 master-0 kubenswrapper[7337]: I0312 18:13:20.707426 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 12 18:13:20.707692 master-0 kubenswrapper[7337]: I0312 18:13:20.707467 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 12 18:13:20.707692 master-0 kubenswrapper[7337]: I0312 18:13:20.707554 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 12 18:13:20.707692 master-0 kubenswrapper[7337]: I0312 18:13:20.707560 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 12 18:13:20.711951 master-0 kubenswrapper[7337]: I0312 18:13:20.711885 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 12 18:13:20.713628 master-0 kubenswrapper[7337]: I0312 18:13:20.713594 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 12 18:13:20.714072 master-0 kubenswrapper[7337]: I0312 18:13:20.714043 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 12 18:13:20.717814 master-0 kubenswrapper[7337]: I0312 18:13:20.717797 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 12 18:13:20.737582 master-0 kubenswrapper[7337]: I0312 18:13:20.737568 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 12 18:13:20.746497 master-0 kubenswrapper[7337]: I0312 18:13:20.746462 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:13:20.757607 master-0 kubenswrapper[7337]: I0312 18:13:20.757592 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 12 18:13:20.769906 master-0 kubenswrapper[7337]: I0312 18:13:20.769883 7337 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 12 18:13:20.778690 master-0 kubenswrapper[7337]: I0312 18:13:20.778655 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 12 18:13:20.791793 master-0 kubenswrapper[7337]: I0312 18:13:20.791740 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-ovn\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:13:20.791851 master-0 kubenswrapper[7337]: I0312 18:13:20.791797 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-k8s-cni-cncf-io\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8"
Mar 12 18:13:20.791851 master-0 kubenswrapper[7337]: I0312 18:13:20.791833 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-config\") pod \"openshift-controller-manager-operator-8565d84698-m6hsp\" (UID: \"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp"
Mar 12 18:13:20.791926 master-0 kubenswrapper[7337]: I0312 18:13:20.791885 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/062f1b21-2ffc-47da-8334-427c3b2a1a90-serving-cert\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b"
Mar 12 18:13:20.791926 master-0 kubenswrapper[7337]: I0312 18:13:20.791919 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e22c7035-4b7a-48cb-9abb-db277b387842-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq"
Mar 12 18:13:20.791978 master-0 kubenswrapper[7337]: I0312 18:13:20.791942 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e720e1d0-5a6d-4b76-8b25-5963e24950f5-config\") pod 
\"kube-controller-manager-operator-86d7cdfdfb-w7xvp\" (UID: \"e720e1d0-5a6d-4b76-8b25-5963e24950f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp" Mar 12 18:13:20.791978 master-0 kubenswrapper[7337]: I0312 18:13:20.791969 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-os-release\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:13:20.792030 master-0 kubenswrapper[7337]: I0312 18:13:20.791993 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-multus-certs\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.792030 master-0 kubenswrapper[7337]: I0312 18:13:20.792018 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pn9h\" (UniqueName: \"kubernetes.io/projected/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-kube-api-access-2pn9h\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.792089 master-0 kubenswrapper[7337]: I0312 18:13:20.792042 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e697746f-fb9e-4d10-ab61-33c68e62cc0d-serving-cert\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:20.792089 master-0 kubenswrapper[7337]: I0312 18:13:20.792064 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:13:20.792089 master-0 kubenswrapper[7337]: I0312 18:13:20.792086 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-cni-bin\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.792166 master-0 kubenswrapper[7337]: I0312 18:13:20.792109 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00755a4e-124c-4a51-b1c5-7c505b3637a8-service-ca\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:13:20.792166 master-0 kubenswrapper[7337]: I0312 18:13:20.792131 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-ca\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:20.792166 master-0 kubenswrapper[7337]: I0312 18:13:20.792153 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s55hv\" (UniqueName: \"kubernetes.io/projected/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-kube-api-access-s55hv\") pod \"openshift-controller-manager-operator-8565d84698-m6hsp\" (UID: \"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp" Mar 12 18:13:20.792249 master-0 kubenswrapper[7337]: I0312 18:13:20.792176 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab926874-9722-4e65-9084-27b2f9915450-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-ddsbn\" (UID: \"ab926874-9722-4e65-9084-27b2f9915450\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn" Mar 12 18:13:20.792249 master-0 kubenswrapper[7337]: I0312 18:13:20.792201 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1e2340b-ebca-40de-b1e0-8133999cd860-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-2xr98\" (UID: \"a1e2340b-ebca-40de-b1e0-8133999cd860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" Mar 12 18:13:20.792249 master-0 kubenswrapper[7337]: I0312 18:13:20.792232 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-daemon-config\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.792328 master-0 kubenswrapper[7337]: I0312 18:13:20.792256 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:13:20.792328 master-0 kubenswrapper[7337]: I0312 18:13:20.792280 7337 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn9nf\" (UniqueName: \"kubernetes.io/projected/062f1b21-2ffc-47da-8334-427c3b2a1a90-kube-api-access-jn9nf\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:13:20.792540 master-0 kubenswrapper[7337]: I0312 18:13:20.792497 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.792576 master-0 kubenswrapper[7337]: I0312 18:13:20.792553 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-cni-binary-copy\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.792630 master-0 kubenswrapper[7337]: I0312 18:13:20.792598 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:13:20.792671 master-0 kubenswrapper[7337]: I0312 18:13:20.792634 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls\") pod \"ingress-operator-677db989d6-4527l\" (UID: 
\"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l"
Mar 12 18:13:20.792671 master-0 kubenswrapper[7337]: I0312 18:13:20.792661 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vct98\" (UniqueName: \"kubernetes.io/projected/e697746f-fb9e-4d10-ab61-33c68e62cc0d-kube-api-access-vct98\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b"
Mar 12 18:13:20.792993 master-0 kubenswrapper[7337]: E0312 18:13:20.792956 7337 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 12 18:13:20.793051 master-0 kubenswrapper[7337]: E0312 18:13:20.793038 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls podName:e94d098b-fbcc-4e85-b8ad-42f3a21c822c nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.293019667 +0000 UTC m=+1.761620614 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-fz79c" (UID: "e94d098b-fbcc-4e85-b8ad-42f3a21c822c") : secret "cluster-monitoring-operator-tls" not found
Mar 12 18:13:20.793148 master-0 kubenswrapper[7337]: E0312 18:13:20.793091 7337 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 12 18:13:20.793194 master-0 kubenswrapper[7337]: I0312 18:13:20.793164 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-ca\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b"
Mar 12 18:13:20.793224 master-0 kubenswrapper[7337]: I0312 18:13:20.793186 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e22c7035-4b7a-48cb-9abb-db277b387842-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq"
Mar 12 18:13:20.793224 master-0 kubenswrapper[7337]: E0312 18:13:20.793214 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls podName:d94dc349-c5cb-4f12-8e48-867030af4981 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.29316644 +0000 UTC m=+1.761767407 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls") pod "ingress-operator-677db989d6-4527l" (UID: "d94dc349-c5cb-4f12-8e48-867030af4981") : secret "metrics-tls" not found
Mar 12 18:13:20.793295 master-0 kubenswrapper[7337]: I0312 18:13:20.793268 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b"
Mar 12 18:13:20.793348 master-0 kubenswrapper[7337]: I0312 18:13:20.793318 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b"
Mar 12 18:13:20.793421 master-0 kubenswrapper[7337]: I0312 18:13:20.793134 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1e2340b-ebca-40de-b1e0-8133999cd860-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-2xr98\" (UID: \"a1e2340b-ebca-40de-b1e0-8133999cd860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98"
Mar 12 18:13:20.793479 master-0 kubenswrapper[7337]: I0312 18:13:20.793446 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d92dddc8-a810-43f5-8beb-32d1c8ad8381-host-slash\") pod \"iptables-alerter-4k8wm\" (UID: 
\"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm" Mar 12 18:13:20.793537 master-0 kubenswrapper[7337]: I0312 18:13:20.793443 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-daemon-config\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.793537 master-0 kubenswrapper[7337]: I0312 18:13:20.793409 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/062f1b21-2ffc-47da-8334-427c3b2a1a90-serving-cert\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:13:20.793537 master-0 kubenswrapper[7337]: I0312 18:13:20.793532 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00755a4e-124c-4a51-b1c5-7c505b3637a8-service-ca\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:13:20.793677 master-0 kubenswrapper[7337]: I0312 18:13:20.793574 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-cni-binary-copy\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.793677 master-0 kubenswrapper[7337]: I0312 18:13:20.793587 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e720e1d0-5a6d-4b76-8b25-5963e24950f5-config\") pod 
\"kube-controller-manager-operator-86d7cdfdfb-w7xvp\" (UID: \"e720e1d0-5a6d-4b76-8b25-5963e24950f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp" Mar 12 18:13:20.793677 master-0 kubenswrapper[7337]: I0312 18:13:20.793663 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-config\") pod \"openshift-controller-manager-operator-8565d84698-m6hsp\" (UID: \"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp" Mar 12 18:13:20.793753 master-0 kubenswrapper[7337]: I0312 18:13:20.793723 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e697746f-fb9e-4d10-ab61-33c68e62cc0d-serving-cert\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:20.793855 master-0 kubenswrapper[7337]: I0312 18:13:20.793827 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:13:20.793930 master-0 kubenswrapper[7337]: I0312 18:13:20.793916 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovnkube-config\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.793993 master-0 
kubenswrapper[7337]: I0312 18:13:20.793982 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovnkube-script-lib\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.794947 master-0 kubenswrapper[7337]: I0312 18:13:20.794830 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfpb9\" (UniqueName: \"kubernetes.io/projected/37cd9c0a-697e-4e67-932b-b331ff77c8c0-kube-api-access-pfpb9\") pod \"openshift-config-operator-64488f9d78-tjp2j\" (UID: \"37cd9c0a-697e-4e67-932b-b331ff77c8c0\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:13:20.795313 master-0 kubenswrapper[7337]: I0312 18:13:20.795291 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:13:20.795388 master-0 kubenswrapper[7337]: I0312 18:13:20.795364 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovnkube-config\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.795449 master-0 kubenswrapper[7337]: I0312 18:13:20.795345 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmxc2\" (UniqueName: \"kubernetes.io/projected/e22c7035-4b7a-48cb-9abb-db277b387842-kube-api-access-pmxc2\") pod 
\"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:13:20.795580 master-0 kubenswrapper[7337]: I0312 18:13:20.795565 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-cnibin\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:13:20.795690 master-0 kubenswrapper[7337]: I0312 18:13:20.795675 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-config\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:13:20.795775 master-0 kubenswrapper[7337]: I0312 18:13:20.795764 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rhmv\" (UniqueName: \"kubernetes.io/projected/b8dd13a7-10e5-431b-8d30-405dcfea02f5-kube-api-access-7rhmv\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.795844 master-0 kubenswrapper[7337]: I0312 18:13:20.795832 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-os-release\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.795923 master-0 kubenswrapper[7337]: I0312 18:13:20.795911 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: \"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k" Mar 12 18:13:20.796005 master-0 kubenswrapper[7337]: I0312 18:13:20.795990 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-cni-bin\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.796120 master-0 kubenswrapper[7337]: I0312 18:13:20.796105 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" Mar 12 18:13:20.796203 master-0 kubenswrapper[7337]: I0312 18:13:20.796189 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vpbp\" (UniqueName: \"kubernetes.io/projected/a1e2340b-ebca-40de-b1e0-8133999cd860-kube-api-access-6vpbp\") pod \"kube-storage-version-migrator-operator-7f65c457f5-2xr98\" (UID: \"a1e2340b-ebca-40de-b1e0-8133999cd860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" Mar 12 18:13:20.796313 master-0 kubenswrapper[7337]: I0312 18:13:20.796299 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: 
\"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" Mar 12 18:13:20.796391 master-0 kubenswrapper[7337]: I0312 18:13:20.796379 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6ggc\" (UniqueName: \"kubernetes.io/projected/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-kube-api-access-b6ggc\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" Mar 12 18:13:20.796464 master-0 kubenswrapper[7337]: I0312 18:13:20.796453 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-systemd\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.797235 master-0 kubenswrapper[7337]: I0312 18:13:20.796566 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-log-socket\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.797486 master-0 kubenswrapper[7337]: I0312 18:13:20.797444 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlf77\" (UniqueName: \"kubernetes.io/projected/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-kube-api-access-wlf77\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:20.797541 master-0 kubenswrapper[7337]: I0312 18:13:20.797502 7337 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-cni-binary-copy\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:13:20.797579 master-0 kubenswrapper[7337]: I0312 18:13:20.797547 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/00755a4e-124c-4a51-b1c5-7c505b3637a8-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:13:20.797637 master-0 kubenswrapper[7337]: I0312 18:13:20.797584 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-systemd-units\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.797637 master-0 kubenswrapper[7337]: I0312 18:13:20.797251 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-config\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:13:20.797729 master-0 kubenswrapper[7337]: I0312 18:13:20.797707 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1e2340b-ebca-40de-b1e0-8133999cd860-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-2xr98\" (UID: \"a1e2340b-ebca-40de-b1e0-8133999cd860\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" Mar 12 18:13:20.797944 master-0 kubenswrapper[7337]: I0312 18:13:20.797918 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" Mar 12 18:13:20.798015 master-0 kubenswrapper[7337]: I0312 18:13:20.797990 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-cnibin\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.798076 master-0 kubenswrapper[7337]: I0312 18:13:20.798056 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-client\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:20.798076 master-0 kubenswrapper[7337]: I0312 18:13:20.798057 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" Mar 12 18:13:20.798252 master-0 kubenswrapper[7337]: I0312 18:13:20.798228 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-m6hsp\" (UID: \"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp" Mar 12 18:13:20.798316 master-0 kubenswrapper[7337]: I0312 18:13:20.798286 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcvfv\" (UniqueName: \"kubernetes.io/projected/f3a2cda2-b70f-4128-a1be-48503f5aad6d-kube-api-access-tcvfv\") pod \"cluster-olm-operator-77899cf6d-6nvn4\" (UID: \"f3a2cda2-b70f-4128-a1be-48503f5aad6d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" Mar 12 18:13:20.798350 master-0 kubenswrapper[7337]: I0312 18:13:20.798319 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-cni-binary-copy\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:13:20.798405 master-0 kubenswrapper[7337]: I0312 18:13:20.798360 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptrtx\" (UniqueName: \"kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx\") pod \"network-check-target-cpthp\" (UID: \"33feec78-4592-4343-965b-aa1b7044fcf3\") " pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:13:20.798405 master-0 kubenswrapper[7337]: I0312 18:13:20.798372 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1e2340b-ebca-40de-b1e0-8133999cd860-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-2xr98\" (UID: \"a1e2340b-ebca-40de-b1e0-8133999cd860\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" Mar 12 18:13:20.798405 master-0 kubenswrapper[7337]: I0312 18:13:20.798391 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs\") pod \"multus-admission-controller-8d675b596-kcpg5\" (UID: \"875bdfaa-b0a4-4412-a477-c962844e7057\") " pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" Mar 12 18:13:20.798489 master-0 kubenswrapper[7337]: I0312 18:13:20.798422 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:20.798489 master-0 kubenswrapper[7337]: I0312 18:13:20.798459 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 12 18:13:20.798662 master-0 kubenswrapper[7337]: I0312 18:13:20.798535 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e720e1d0-5a6d-4b76-8b25-5963e24950f5-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-w7xvp\" (UID: \"e720e1d0-5a6d-4b76-8b25-5963e24950f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp" Mar 12 18:13:20.798662 master-0 kubenswrapper[7337]: I0312 18:13:20.798578 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e720e1d0-5a6d-4b76-8b25-5963e24950f5-kube-api-access\") pod 
\"kube-controller-manager-operator-86d7cdfdfb-w7xvp\" (UID: \"e720e1d0-5a6d-4b76-8b25-5963e24950f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp" Mar 12 18:13:20.798662 master-0 kubenswrapper[7337]: I0312 18:13:20.798585 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-client\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:20.798894 master-0 kubenswrapper[7337]: I0312 18:13:20.798720 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e720e1d0-5a6d-4b76-8b25-5963e24950f5-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-w7xvp\" (UID: \"e720e1d0-5a6d-4b76-8b25-5963e24950f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp" Mar 12 18:13:20.799051 master-0 kubenswrapper[7337]: I0312 18:13:20.799022 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-m6hsp\" (UID: \"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp" Mar 12 18:13:20.799138 master-0 kubenswrapper[7337]: I0312 18:13:20.798917 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l22gw\" (UniqueName: \"kubernetes.io/projected/d92dddc8-a810-43f5-8beb-32d1c8ad8381-kube-api-access-l22gw\") pod \"iptables-alerter-4k8wm\" (UID: \"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm" Mar 12 18:13:20.799270 master-0 
kubenswrapper[7337]: I0312 18:13:20.799245 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.799354 master-0 kubenswrapper[7337]: I0312 18:13:20.799328 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-socket-dir-parent\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.799573 master-0 kubenswrapper[7337]: I0312 18:13:20.799393 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37cd9c0a-697e-4e67-932b-b331ff77c8c0-serving-cert\") pod \"openshift-config-operator-64488f9d78-tjp2j\" (UID: \"37cd9c0a-697e-4e67-932b-b331ff77c8c0\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:13:20.799745 master-0 kubenswrapper[7337]: I0312 18:13:20.799608 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2skd\" (UniqueName: \"kubernetes.io/projected/875bdfaa-b0a4-4412-a477-c962844e7057-kube-api-access-l2skd\") pod \"multus-admission-controller-8d675b596-kcpg5\" (UID: \"875bdfaa-b0a4-4412-a477-c962844e7057\") " pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" Mar 12 18:13:20.799745 master-0 kubenswrapper[7337]: I0312 18:13:20.799687 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-run-netns\") pod 
\"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.799745 master-0 kubenswrapper[7337]: I0312 18:13:20.799696 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37cd9c0a-697e-4e67-932b-b331ff77c8c0-serving-cert\") pod \"openshift-config-operator-64488f9d78-tjp2j\" (UID: \"37cd9c0a-697e-4e67-932b-b331ff77c8c0\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:13:20.799745 master-0 kubenswrapper[7337]: I0312 18:13:20.799719 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-run-ovn-kubernetes\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.799852 master-0 kubenswrapper[7337]: I0312 18:13:20.799773 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab926874-9722-4e65-9084-27b2f9915450-serving-cert\") pod \"kube-apiserver-operator-68bd585b-ddsbn\" (UID: \"ab926874-9722-4e65-9084-27b2f9915450\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn" Mar 12 18:13:20.799916 master-0 kubenswrapper[7337]: I0312 18:13:20.799891 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-system-cni-dir\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:13:20.800010 master-0 kubenswrapper[7337]: I0312 18:13:20.799986 7337 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/b6d288e3-8e73-44d2-874d-64c6c98dd991-host-etc-kube\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" Mar 12 18:13:20.800079 master-0 kubenswrapper[7337]: I0312 18:13:20.800060 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:13:20.800175 master-0 kubenswrapper[7337]: I0312 18:13:20.800150 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:20.800239 master-0 kubenswrapper[7337]: E0312 18:13:20.800179 7337 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 18:13:20.800239 master-0 kubenswrapper[7337]: I0312 18:13:20.800194 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:20.800239 master-0 kubenswrapper[7337]: I0312 18:13:20.800233 7337 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/37cd9c0a-697e-4e67-932b-b331ff77c8c0-available-featuregates\") pod \"openshift-config-operator-64488f9d78-tjp2j\" (UID: \"37cd9c0a-697e-4e67-932b-b331ff77c8c0\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:13:20.800321 master-0 kubenswrapper[7337]: I0312 18:13:20.800246 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab926874-9722-4e65-9084-27b2f9915450-serving-cert\") pod \"kube-apiserver-operator-68bd585b-ddsbn\" (UID: \"ab926874-9722-4e65-9084-27b2f9915450\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn" Mar 12 18:13:20.800321 master-0 kubenswrapper[7337]: E0312 18:13:20.800242 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics podName:4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.30022512 +0000 UTC m=+1.768826077 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-clkx5" (UID: "4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64") : secret "marketplace-operator-metrics" not found Mar 12 18:13:20.800321 master-0 kubenswrapper[7337]: I0312 18:13:20.800280 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:13:20.800408 master-0 kubenswrapper[7337]: E0312 18:13:20.800354 7337 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 18:13:20.800408 master-0 kubenswrapper[7337]: I0312 18:13:20.800390 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/37cd9c0a-697e-4e67-932b-b331ff77c8c0-available-featuregates\") pod \"openshift-config-operator-64488f9d78-tjp2j\" (UID: \"37cd9c0a-697e-4e67-932b-b331ff77c8c0\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:13:20.800462 master-0 kubenswrapper[7337]: E0312 18:13:20.800398 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert podName:47850839-bb4b-41e9-ac31-f1cabbb4926d nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.300386764 +0000 UTC m=+1.768987701 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert") pod "catalog-operator-7d9c49f57b-pslh7" (UID: "47850839-bb4b-41e9-ac31-f1cabbb4926d") : secret "catalog-operator-serving-cert" not found Mar 12 18:13:20.800462 master-0 kubenswrapper[7337]: I0312 18:13:20.800345 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d92dddc8-a810-43f5-8beb-32d1c8ad8381-iptables-alerter-script\") pod \"iptables-alerter-4k8wm\" (UID: \"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm" Mar 12 18:13:20.800535 master-0 kubenswrapper[7337]: I0312 18:13:20.800456 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:20.800535 master-0 kubenswrapper[7337]: I0312 18:13:20.800473 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-kubelet\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.800587 master-0 kubenswrapper[7337]: I0312 18:13:20.800538 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-env-overrides\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.800651 master-0 kubenswrapper[7337]: I0312 18:13:20.800632 7337 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:13:20.800687 master-0 kubenswrapper[7337]: I0312 18:13:20.800670 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d92dddc8-a810-43f5-8beb-32d1c8ad8381-iptables-alerter-script\") pod \"iptables-alerter-4k8wm\" (UID: \"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm" Mar 12 18:13:20.801336 master-0 kubenswrapper[7337]: I0312 18:13:20.801312 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdlcw\" (UniqueName: \"kubernetes.io/projected/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-kube-api-access-tdlcw\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 18:13:20.801382 master-0 kubenswrapper[7337]: I0312 18:13:20.801347 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-kubelet\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.801425 master-0 kubenswrapper[7337]: I0312 18:13:20.801404 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-env-overrides\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 
18:13:20.801425 master-0 kubenswrapper[7337]: I0312 18:13:20.801414 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-conf-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.801481 master-0 kubenswrapper[7337]: I0312 18:13:20.801454 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert\") pod \"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" Mar 12 18:13:20.801481 master-0 kubenswrapper[7337]: I0312 18:13:20.801474 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" Mar 12 18:13:20.801612 master-0 kubenswrapper[7337]: E0312 18:13:20.801578 7337 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 18:13:20.801662 master-0 kubenswrapper[7337]: E0312 18:13:20.801645 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert podName:d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.301624596 +0000 UTC m=+1.770225643 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert") pod "olm-operator-d64cfc9db-npt4r" (UID: "d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27") : secret "olm-operator-serving-cert" not found Mar 12 18:13:20.801696 master-0 kubenswrapper[7337]: I0312 18:13:20.801675 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-env-overrides\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 18:13:20.801728 master-0 kubenswrapper[7337]: I0312 18:13:20.801702 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-ovnkube-identity-cm\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 18:13:20.801728 master-0 kubenswrapper[7337]: I0312 18:13:20.801721 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-system-cni-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.801786 master-0 kubenswrapper[7337]: I0312 18:13:20.801741 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/236f2886-bb69-49a7-9471-36454fd1cbd3-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-6qlzz\" (UID: \"236f2886-bb69-49a7-9471-36454fd1cbd3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" Mar 12 18:13:20.801786 master-0 
kubenswrapper[7337]: I0312 18:13:20.801740 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" Mar 12 18:13:20.801886 master-0 kubenswrapper[7337]: I0312 18:13:20.801853 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:20.801939 master-0 kubenswrapper[7337]: I0312 18:13:20.801916 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-node-log\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.801970 master-0 kubenswrapper[7337]: I0312 18:13:20.801926 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/236f2886-bb69-49a7-9471-36454fd1cbd3-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-6qlzz\" (UID: \"236f2886-bb69-49a7-9471-36454fd1cbd3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" Mar 12 18:13:20.802000 master-0 kubenswrapper[7337]: I0312 18:13:20.801956 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vr66\" (UniqueName: \"kubernetes.io/projected/45aa4887-c913-4ece-ae34-fcde33832621-kube-api-access-4vr66\") pod 
\"csi-snapshot-controller-operator-5685fbc7d-649db\" (UID: \"45aa4887-c913-4ece-ae34-fcde33832621\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-649db" Mar 12 18:13:20.802028 master-0 kubenswrapper[7337]: I0312 18:13:20.802008 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/236f2886-bb69-49a7-9471-36454fd1cbd3-config\") pod \"openshift-apiserver-operator-799b6db4d7-6qlzz\" (UID: \"236f2886-bb69-49a7-9471-36454fd1cbd3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" Mar 12 18:13:20.802074 master-0 kubenswrapper[7337]: I0312 18:13:20.802038 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/f3a2cda2-b70f-4128-a1be-48503f5aad6d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-6nvn4\" (UID: \"f3a2cda2-b70f-4128-a1be-48503f5aad6d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" Mar 12 18:13:20.802106 master-0 kubenswrapper[7337]: I0312 18:13:20.802087 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e22c7035-4b7a-48cb-9abb-db277b387842-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:13:20.802147 master-0 kubenswrapper[7337]: I0312 18:13:20.802113 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th8tc\" (UniqueName: \"kubernetes.io/projected/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-kube-api-access-th8tc\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: \"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k" Mar 12 18:13:20.802180 master-0 
kubenswrapper[7337]: I0312 18:13:20.802166 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdb9w\" (UniqueName: \"kubernetes.io/projected/b6d288e3-8e73-44d2-874d-64c6c98dd991-kube-api-access-vdb9w\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" Mar 12 18:13:20.802211 master-0 kubenswrapper[7337]: I0312 18:13:20.802110 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-env-overrides\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 18:13:20.802242 master-0 kubenswrapper[7337]: I0312 18:13:20.802195 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:13:20.802271 master-0 kubenswrapper[7337]: I0312 18:13:20.802241 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3a2cda2-b70f-4128-a1be-48503f5aad6d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-6nvn4\" (UID: \"f3a2cda2-b70f-4128-a1be-48503f5aad6d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" Mar 12 18:13:20.802271 master-0 kubenswrapper[7337]: I0312 18:13:20.802243 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/236f2886-bb69-49a7-9471-36454fd1cbd3-config\") pod \"openshift-apiserver-operator-799b6db4d7-6qlzz\" (UID: \"236f2886-bb69-49a7-9471-36454fd1cbd3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" Mar 12 18:13:20.802271 master-0 kubenswrapper[7337]: I0312 18:13:20.802261 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:20.802271 master-0 kubenswrapper[7337]: E0312 18:13:20.802266 7337 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 18:13:20.802425 master-0 kubenswrapper[7337]: I0312 18:13:20.802299 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6ggg\" (UniqueName: \"kubernetes.io/projected/236f2886-bb69-49a7-9471-36454fd1cbd3-kube-api-access-b6ggg\") pod \"openshift-apiserver-operator-799b6db4d7-6qlzz\" (UID: \"236f2886-bb69-49a7-9471-36454fd1cbd3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" Mar 12 18:13:20.802425 master-0 kubenswrapper[7337]: E0312 18:13:20.802324 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert podName:51eb717b-d11f-4bc3-8df6-deb51d5889f3 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.302314503 +0000 UTC m=+1.770915450 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-kwv7s" (UID: "51eb717b-d11f-4bc3-8df6-deb51d5889f3") : secret "package-server-manager-serving-cert" not found Mar 12 18:13:20.802425 master-0 kubenswrapper[7337]: I0312 18:13:20.802387 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-ovnkube-identity-cm\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 18:13:20.802611 master-0 kubenswrapper[7337]: I0312 18:13:20.802465 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3a2cda2-b70f-4128-a1be-48503f5aad6d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-6nvn4\" (UID: \"f3a2cda2-b70f-4128-a1be-48503f5aad6d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" Mar 12 18:13:20.802611 master-0 kubenswrapper[7337]: I0312 18:13:20.802552 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/00755a4e-124c-4a51-b1c5-7c505b3637a8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:13:20.802611 master-0 kubenswrapper[7337]: I0312 18:13:20.802566 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/f3a2cda2-b70f-4128-a1be-48503f5aad6d-operand-assets\") pod 
\"cluster-olm-operator-77899cf6d-6nvn4\" (UID: \"f3a2cda2-b70f-4128-a1be-48503f5aad6d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" Mar 12 18:13:20.802611 master-0 kubenswrapper[7337]: I0312 18:13:20.802593 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovn-node-metrics-cert\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.802723 master-0 kubenswrapper[7337]: I0312 18:13:20.802624 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab926874-9722-4e65-9084-27b2f9915450-config\") pod \"kube-apiserver-operator-68bd585b-ddsbn\" (UID: \"ab926874-9722-4e65-9084-27b2f9915450\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn" Mar 12 18:13:20.802723 master-0 kubenswrapper[7337]: I0312 18:13:20.802651 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:13:20.802723 master-0 kubenswrapper[7337]: I0312 18:13:20.802678 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-cni-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.802723 master-0 kubenswrapper[7337]: I0312 18:13:20.802707 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-etc-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.802829 master-0 kubenswrapper[7337]: I0312 18:13:20.802732 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b6d288e3-8e73-44d2-874d-64c6c98dd991-metrics-tls\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" Mar 12 18:13:20.802829 master-0 kubenswrapper[7337]: I0312 18:13:20.802753 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jgbv\" (UniqueName: \"kubernetes.io/projected/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-kube-api-access-9jgbv\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:13:20.802829 master-0 kubenswrapper[7337]: I0312 18:13:20.802771 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:13:20.802829 master-0 kubenswrapper[7337]: I0312 18:13:20.802784 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovn-node-metrics-cert\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.802829 master-0 kubenswrapper[7337]: I0312 
18:13:20.802794 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-whereabouts-configmap\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:13:20.802978 master-0 kubenswrapper[7337]: I0312 18:13:20.802861 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-var-lib-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.802978 master-0 kubenswrapper[7337]: I0312 18:13:20.802894 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-cni-netd\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.803032 master-0 kubenswrapper[7337]: I0312 18:13:20.802981 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:13:20.803032 master-0 kubenswrapper[7337]: I0312 18:13:20.803008 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-hostroot\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " 
pod="openshift-multus/multus-656l8" Mar 12 18:13:20.803032 master-0 kubenswrapper[7337]: I0312 18:13:20.803028 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-config\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:20.803303 master-0 kubenswrapper[7337]: I0312 18:13:20.803056 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab926874-9722-4e65-9084-27b2f9915450-config\") pod \"kube-apiserver-operator-68bd585b-ddsbn\" (UID: \"ab926874-9722-4e65-9084-27b2f9915450\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn" Mar 12 18:13:20.803303 master-0 kubenswrapper[7337]: I0312 18:13:20.803085 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-slash\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.803303 master-0 kubenswrapper[7337]: I0312 18:13:20.803124 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-webhook-cert\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 18:13:20.803303 master-0 kubenswrapper[7337]: I0312 18:13:20.803156 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxsgv\" (UniqueName: \"kubernetes.io/projected/455f0aad-add2-49d0-995c-f92467bce2d6-kube-api-access-pxsgv\") pod 
\"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:13:20.803303 master-0 kubenswrapper[7337]: I0312 18:13:20.803167 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-whereabouts-configmap\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:13:20.803303 master-0 kubenswrapper[7337]: I0312 18:13:20.803185 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-netns\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.803303 master-0 kubenswrapper[7337]: I0312 18:13:20.803215 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-cni-multus\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.803303 master-0 kubenswrapper[7337]: I0312 18:13:20.803243 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-etc-kubernetes\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.803303 master-0 kubenswrapper[7337]: I0312 18:13:20.803261 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/b6d288e3-8e73-44d2-874d-64c6c98dd991-metrics-tls\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" Mar 12 18:13:20.803303 master-0 kubenswrapper[7337]: I0312 18:13:20.803272 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00755a4e-124c-4a51-b1c5-7c505b3637a8-kube-api-access\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:13:20.803606 master-0 kubenswrapper[7337]: I0312 18:13:20.803374 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-webhook-cert\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 18:13:20.804447 master-0 kubenswrapper[7337]: I0312 18:13:20.804395 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-config\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:20.806117 master-0 kubenswrapper[7337]: I0312 18:13:20.806088 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovnkube-script-lib\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.840324 master-0 kubenswrapper[7337]: I0312 18:13:20.840269 7337 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zjmcv\" (UniqueName: \"kubernetes.io/projected/d94dc349-c5cb-4f12-8e48-867030af4981-kube-api-access-zjmcv\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" Mar 12 18:13:20.848555 master-0 kubenswrapper[7337]: I0312 18:13:20.848532 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkt7d\" (UniqueName: \"kubernetes.io/projected/055f5c67-f512-4510-99c5-e194944b0599-kube-api-access-tkt7d\") pod \"service-ca-operator-69b6fc6b88-9xlgl\" (UID: \"055f5c67-f512-4510-99c5-e194944b0599\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl" Mar 12 18:13:20.870958 master-0 kubenswrapper[7337]: I0312 18:13:20.870919 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdlxn\" (UniqueName: \"kubernetes.io/projected/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-kube-api-access-fdlxn\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:13:20.889922 master-0 kubenswrapper[7337]: I0312 18:13:20.889873 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggsdx\" (UniqueName: \"kubernetes.io/projected/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-kube-api-access-ggsdx\") pod \"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" Mar 12 18:13:20.904630 master-0 kubenswrapper[7337]: I0312 18:13:20.903999 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/b6d288e3-8e73-44d2-874d-64c6c98dd991-host-etc-kube\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " 
pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" Mar 12 18:13:20.904630 master-0 kubenswrapper[7337]: I0312 18:13:20.904047 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-kubelet\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.904630 master-0 kubenswrapper[7337]: I0312 18:13:20.904064 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:13:20.904630 master-0 kubenswrapper[7337]: I0312 18:13:20.904085 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-kubelet\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.904630 master-0 kubenswrapper[7337]: I0312 18:13:20.904101 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-conf-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.904630 master-0 kubenswrapper[7337]: I0312 18:13:20.904180 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/b6d288e3-8e73-44d2-874d-64c6c98dd991-host-etc-kube\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " 
pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" Mar 12 18:13:20.904630 master-0 kubenswrapper[7337]: I0312 18:13:20.904191 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-kubelet\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.904630 master-0 kubenswrapper[7337]: I0312 18:13:20.904240 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-system-cni-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.904630 master-0 kubenswrapper[7337]: I0312 18:13:20.904268 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-node-log\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.904630 master-0 kubenswrapper[7337]: I0312 18:13:20.904271 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-conf-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.904630 master-0 kubenswrapper[7337]: I0312 18:13:20.904370 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-system-cni-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.904630 master-0 
kubenswrapper[7337]: E0312 18:13:20.904433 7337 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 12 18:13:20.904630 master-0 kubenswrapper[7337]: E0312 18:13:20.904468 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs podName:5b48f8fd-2efe-44e3-a6d7-c71358b83a2f nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.404457054 +0000 UTC m=+1.873058001 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs") pod "network-metrics-daemon-z4sc9" (UID: "5b48f8fd-2efe-44e3-a6d7-c71358b83a2f") : secret "metrics-daemon-secret" not found Mar 12 18:13:20.904630 master-0 kubenswrapper[7337]: I0312 18:13:20.904480 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-node-log\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.904630 master-0 kubenswrapper[7337]: I0312 18:13:20.904524 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:20.904630 master-0 kubenswrapper[7337]: I0312 18:13:20.904544 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-kubelet\") pod \"multus-656l8\" (UID: 
\"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.904630 master-0 kubenswrapper[7337]: E0312 18:13:20.904588 7337 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 12 18:13:20.904630 master-0 kubenswrapper[7337]: E0312 18:13:20.904625 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert podName:306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.404615618 +0000 UTC m=+1.873216565 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-lqpbp" (UID: "306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed") : secret "performance-addon-operator-webhook-cert" not found Mar 12 18:13:20.904630 master-0 kubenswrapper[7337]: I0312 18:13:20.904649 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/00755a4e-124c-4a51-b1c5-7c505b3637a8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:13:20.904630 master-0 kubenswrapper[7337]: I0312 18:13:20.904684 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-etc-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.904709 7337 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.904727 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-cni-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.904758 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-etc-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.904788 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.904819 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-var-lib-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.904851 7337 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-cni-netd\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.904860 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-cni-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.904892 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-var-lib-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.904917 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-cni-netd\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.904955 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-hostroot\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.904987 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" 
(UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-hostroot\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.905014 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-slash\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.905042 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-slash\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.905077 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-netns\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.905116 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-cni-multus\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.905154 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-etc-kubernetes\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.905241 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-etc-kubernetes\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.905284 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-ovn\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.905302 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-netns\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.905311 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-k8s-cni-cncf-io\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.905336 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-ovn\") pod \"ovnkube-node-hx8q8\" (UID: 
\"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.905340 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-os-release\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.905377 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-multus-certs\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.905394 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-k8s-cni-cncf-io\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.905384 master-0 kubenswrapper[7337]: I0312 18:13:20.905401 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-cni-multus\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905428 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-multus-certs\") pod \"multus-656l8\" (UID: 
\"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905448 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-cni-bin\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905506 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905549 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905555 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: E0312 18:13:20.905595 7337 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret 
"cluster-version-operator-serving-cert" not found Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905600 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d92dddc8-a810-43f5-8beb-32d1c8ad8381-host-slash\") pod \"iptables-alerter-4k8wm\" (UID: \"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905595 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-os-release\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: E0312 18:13:20.905621 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert podName:00755a4e-124c-4a51-b1c5-7c505b3637a8 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.405612024 +0000 UTC m=+1.874212971 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert") pod "cluster-version-operator-745944c6b7-cxwmx" (UID: "00755a4e-124c-4a51-b1c5-7c505b3637a8") : secret "cluster-version-operator-serving-cert" not found Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905633 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905651 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d92dddc8-a810-43f5-8beb-32d1c8ad8381-host-slash\") pod \"iptables-alerter-4k8wm\" (UID: \"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905533 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-cni-bin\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905675 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-cnibin\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:13:20.906075 master-0 
kubenswrapper[7337]: I0312 18:13:20.905685 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-cnibin\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905703 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-os-release\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: E0312 18:13:20.905724 7337 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905730 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: \"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905752 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-cni-bin\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: E0312 18:13:20.905758 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls 
podName:e22c7035-4b7a-48cb-9abb-db277b387842 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.405749047 +0000 UTC m=+1.874349994 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-l4krq" (UID: "e22c7035-4b7a-48cb-9abb-db277b387842") : secret "image-registry-operator-tls" not found Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905794 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-systemd\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905831 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-cni-bin\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: E0312 18:13:20.905840 7337 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905856 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-log-socket\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: E0312 18:13:20.905860 7337 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls podName:8ad05507-e242-4ff8-ae80-c16ff9ee68e2 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.4058543 +0000 UTC m=+1.874455247 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls") pod "dns-operator-589895fbb7-jqj5k" (UID: "8ad05507-e242-4ff8-ae80-c16ff9ee68e2") : secret "metrics-tls" not found Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905879 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-systemd\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905897 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-log-socket\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905904 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/00755a4e-124c-4a51-b1c5-7c505b3637a8-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905940 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-os-release\") pod \"multus-656l8\" (UID: 
\"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905934 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-systemd-units\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905964 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/00755a4e-124c-4a51-b1c5-7c505b3637a8-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905972 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-cnibin\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.905984 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-systemd-units\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.906017 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs\") pod \"multus-admission-controller-8d675b596-kcpg5\" (UID: 
\"875bdfaa-b0a4-4412-a477-c962844e7057\") " pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.906029 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-cnibin\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.906041 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: E0312 18:13:20.906064 7337 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: E0312 18:13:20.906082 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs podName:875bdfaa-b0a4-4412-a477-c962844e7057 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.406076495 +0000 UTC m=+1.874677442 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs") pod "multus-admission-controller-8d675b596-kcpg5" (UID: "875bdfaa-b0a4-4412-a477-c962844e7057") : secret "multus-admission-controller-secret" not found Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: I0312 18:13:20.906079 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.906075 master-0 kubenswrapper[7337]: E0312 18:13:20.906112 7337 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 12 18:13:20.907506 master-0 kubenswrapper[7337]: E0312 18:13:20.906129 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls podName:306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed nodeName:}" failed. No retries permitted until 2026-03-12 18:13:21.406124227 +0000 UTC m=+1.874725174 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-lqpbp" (UID: "306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed") : secret "node-tuning-operator-tls" not found Mar 12 18:13:20.907506 master-0 kubenswrapper[7337]: I0312 18:13:20.906110 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-socket-dir-parent\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.907506 master-0 kubenswrapper[7337]: I0312 18:13:20.906147 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-socket-dir-parent\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:20.907506 master-0 kubenswrapper[7337]: I0312 18:13:20.906159 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-run-netns\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.907506 master-0 kubenswrapper[7337]: I0312 18:13:20.906179 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.907506 master-0 kubenswrapper[7337]: I0312 18:13:20.906183 7337 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-run-netns\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.907506 master-0 kubenswrapper[7337]: I0312 18:13:20.906193 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-run-ovn-kubernetes\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.907506 master-0 kubenswrapper[7337]: I0312 18:13:20.906207 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-run-ovn-kubernetes\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:20.907506 master-0 kubenswrapper[7337]: I0312 18:13:20.906224 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-system-cni-dir\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:13:20.907506 master-0 kubenswrapper[7337]: I0312 18:13:20.906229 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-system-cni-dir\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:13:20.907506 master-0 kubenswrapper[7337]: I0312 
18:13:20.906456 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/00755a4e-124c-4a51-b1c5-7c505b3637a8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:13:20.909599 master-0 kubenswrapper[7337]: I0312 18:13:20.909566 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d94dc349-c5cb-4f12-8e48-867030af4981-bound-sa-token\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" Mar 12 18:13:20.934133 master-0 kubenswrapper[7337]: I0312 18:13:20.934081 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4ae1240-e04e-48e9-88df-9f1a53508da7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpb6k\" (UID: \"d4ae1240-e04e-48e9-88df-9f1a53508da7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k" Mar 12 18:13:20.947764 master-0 kubenswrapper[7337]: I0312 18:13:20.947722 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krrkl\" (UniqueName: \"kubernetes.io/projected/47850839-bb4b-41e9-ac31-f1cabbb4926d-kube-api-access-krrkl\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:13:20.975744 master-0 kubenswrapper[7337]: I0312 18:13:20.975591 7337 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 18:13:20.977186 master-0 kubenswrapper[7337]: I0312 18:13:20.977052 7337 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbnx8\" (UniqueName: \"kubernetes.io/projected/51eb717b-d11f-4bc3-8df6-deb51d5889f3-kube-api-access-gbnx8\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:13:20.994903 master-0 kubenswrapper[7337]: I0312 18:13:20.994828 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bttzm\" (UniqueName: \"kubernetes.io/projected/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-kube-api-access-bttzm\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:13:21.025633 master-0 kubenswrapper[7337]: E0312 18:13:21.025592 7337 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 18:13:21.052029 master-0 kubenswrapper[7337]: I0312 18:13:21.051995 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vct98\" (UniqueName: \"kubernetes.io/projected/e697746f-fb9e-4d10-ab61-33c68e62cc0d-kube-api-access-vct98\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:13:21.077956 master-0 kubenswrapper[7337]: I0312 18:13:21.077927 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pn9h\" (UniqueName: \"kubernetes.io/projected/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-kube-api-access-2pn9h\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:13:21.083849 master-0 kubenswrapper[7337]: I0312 18:13:21.083801 7337 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:13:21.099670 master-0 kubenswrapper[7337]: I0312 18:13:21.099636 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s55hv\" (UniqueName: \"kubernetes.io/projected/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-kube-api-access-s55hv\") pod \"openshift-controller-manager-operator-8565d84698-m6hsp\" (UID: \"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp" Mar 12 18:13:21.109500 master-0 kubenswrapper[7337]: I0312 18:13:21.109467 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab926874-9722-4e65-9084-27b2f9915450-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-ddsbn\" (UID: \"ab926874-9722-4e65-9084-27b2f9915450\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn" Mar 12 18:13:21.129965 master-0 kubenswrapper[7337]: I0312 18:13:21.129900 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn9nf\" (UniqueName: \"kubernetes.io/projected/062f1b21-2ffc-47da-8334-427c3b2a1a90-kube-api-access-jn9nf\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:13:21.132892 master-0 kubenswrapper[7337]: I0312 18:13:21.132845 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:21.139155 master-0 kubenswrapper[7337]: I0312 18:13:21.139110 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:21.148854 master-0 kubenswrapper[7337]: I0312 18:13:21.148818 7337 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfpb9\" (UniqueName: \"kubernetes.io/projected/37cd9c0a-697e-4e67-932b-b331ff77c8c0-kube-api-access-pfpb9\") pod \"openshift-config-operator-64488f9d78-tjp2j\" (UID: \"37cd9c0a-697e-4e67-932b-b331ff77c8c0\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:13:21.178450 master-0 kubenswrapper[7337]: I0312 18:13:21.178366 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmxc2\" (UniqueName: \"kubernetes.io/projected/e22c7035-4b7a-48cb-9abb-db277b387842-kube-api-access-pmxc2\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:13:21.199190 master-0 kubenswrapper[7337]: I0312 18:13:21.199115 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rhmv\" (UniqueName: \"kubernetes.io/projected/b8dd13a7-10e5-431b-8d30-405dcfea02f5-kube-api-access-7rhmv\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:13:21.211657 master-0 kubenswrapper[7337]: I0312 18:13:21.211596 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlf77\" (UniqueName: \"kubernetes.io/projected/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-kube-api-access-wlf77\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:21.233816 master-0 kubenswrapper[7337]: I0312 18:13:21.233701 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vpbp\" (UniqueName: \"kubernetes.io/projected/a1e2340b-ebca-40de-b1e0-8133999cd860-kube-api-access-6vpbp\") pod 
\"kube-storage-version-migrator-operator-7f65c457f5-2xr98\" (UID: \"a1e2340b-ebca-40de-b1e0-8133999cd860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" Mar 12 18:13:21.252729 master-0 kubenswrapper[7337]: I0312 18:13:21.252683 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6ggc\" (UniqueName: \"kubernetes.io/projected/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-kube-api-access-b6ggc\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" Mar 12 18:13:21.273959 master-0 kubenswrapper[7337]: I0312 18:13:21.273916 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcvfv\" (UniqueName: \"kubernetes.io/projected/f3a2cda2-b70f-4128-a1be-48503f5aad6d-kube-api-access-tcvfv\") pod \"cluster-olm-operator-77899cf6d-6nvn4\" (UID: \"f3a2cda2-b70f-4128-a1be-48503f5aad6d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" Mar 12 18:13:21.290923 master-0 kubenswrapper[7337]: I0312 18:13:21.290893 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptrtx\" (UniqueName: \"kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx\") pod \"network-check-target-cpthp\" (UID: \"33feec78-4592-4343-965b-aa1b7044fcf3\") " pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:13:21.308157 master-0 kubenswrapper[7337]: I0312 18:13:21.308115 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e720e1d0-5a6d-4b76-8b25-5963e24950f5-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-w7xvp\" (UID: \"e720e1d0-5a6d-4b76-8b25-5963e24950f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp" 
Mar 12 18:13:21.311162 master-0 kubenswrapper[7337]: I0312 18:13:21.311120 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:13:21.311248 master-0 kubenswrapper[7337]: I0312 18:13:21.311167 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:13:21.311315 master-0 kubenswrapper[7337]: E0312 18:13:21.311288 7337 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 18:13:21.311356 master-0 kubenswrapper[7337]: I0312 18:13:21.311324 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert\") pod \"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" Mar 12 18:13:21.311356 master-0 kubenswrapper[7337]: E0312 18:13:21.311352 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics podName:4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:22.311334714 +0000 UTC m=+2.779935661 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-clkx5" (UID: "4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64") : secret "marketplace-operator-metrics" not found Mar 12 18:13:21.311442 master-0 kubenswrapper[7337]: E0312 18:13:21.311420 7337 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 18:13:21.311480 master-0 kubenswrapper[7337]: E0312 18:13:21.311464 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert podName:47850839-bb4b-41e9-ac31-f1cabbb4926d nodeName:}" failed. No retries permitted until 2026-03-12 18:13:22.311451837 +0000 UTC m=+2.780052864 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert") pod "catalog-operator-7d9c49f57b-pslh7" (UID: "47850839-bb4b-41e9-ac31-f1cabbb4926d") : secret "catalog-operator-serving-cert" not found Mar 12 18:13:21.311535 master-0 kubenswrapper[7337]: E0312 18:13:21.311497 7337 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 18:13:21.311576 master-0 kubenswrapper[7337]: I0312 18:13:21.311532 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:13:21.311576 master-0 kubenswrapper[7337]: E0312 18:13:21.311557 7337 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert podName:d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:22.311546109 +0000 UTC m=+2.780147146 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert") pod "olm-operator-d64cfc9db-npt4r" (UID: "d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27") : secret "olm-operator-serving-cert" not found Mar 12 18:13:21.311635 master-0 kubenswrapper[7337]: E0312 18:13:21.311601 7337 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 18:13:21.311635 master-0 kubenswrapper[7337]: E0312 18:13:21.311632 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert podName:51eb717b-d11f-4bc3-8df6-deb51d5889f3 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:22.311622861 +0000 UTC m=+2.780223808 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-kwv7s" (UID: "51eb717b-d11f-4bc3-8df6-deb51d5889f3") : secret "package-server-manager-serving-cert" not found Mar 12 18:13:21.311687 master-0 kubenswrapper[7337]: I0312 18:13:21.311653 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:13:21.311721 master-0 kubenswrapper[7337]: I0312 18:13:21.311685 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" Mar 12 18:13:21.311894 master-0 kubenswrapper[7337]: E0312 18:13:21.311843 7337 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 12 18:13:21.311932 master-0 kubenswrapper[7337]: E0312 18:13:21.311902 7337 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 12 18:13:21.311959 master-0 kubenswrapper[7337]: E0312 18:13:21.311935 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls podName:d94dc349-c5cb-4f12-8e48-867030af4981 nodeName:}" failed. 
No retries permitted until 2026-03-12 18:13:22.311926499 +0000 UTC m=+2.780527446 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls") pod "ingress-operator-677db989d6-4527l" (UID: "d94dc349-c5cb-4f12-8e48-867030af4981") : secret "metrics-tls" not found Mar 12 18:13:21.311959 master-0 kubenswrapper[7337]: E0312 18:13:21.311955 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls podName:e94d098b-fbcc-4e85-b8ad-42f3a21c822c nodeName:}" failed. No retries permitted until 2026-03-12 18:13:22.311946269 +0000 UTC m=+2.780547336 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-fz79c" (UID: "e94d098b-fbcc-4e85-b8ad-42f3a21c822c") : secret "cluster-monitoring-operator-tls" not found Mar 12 18:13:21.329365 master-0 kubenswrapper[7337]: I0312 18:13:21.329314 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l22gw\" (UniqueName: \"kubernetes.io/projected/d92dddc8-a810-43f5-8beb-32d1c8ad8381-kube-api-access-l22gw\") pod \"iptables-alerter-4k8wm\" (UID: \"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm" Mar 12 18:13:21.353167 master-0 kubenswrapper[7337]: I0312 18:13:21.353118 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2skd\" (UniqueName: \"kubernetes.io/projected/875bdfaa-b0a4-4412-a477-c962844e7057-kube-api-access-l2skd\") pod \"multus-admission-controller-8d675b596-kcpg5\" (UID: \"875bdfaa-b0a4-4412-a477-c962844e7057\") " pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" Mar 12 18:13:21.372131 master-0 
kubenswrapper[7337]: I0312 18:13:21.372061 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdlcw\" (UniqueName: \"kubernetes.io/projected/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-kube-api-access-tdlcw\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 18:13:21.389401 master-0 kubenswrapper[7337]: I0312 18:13:21.389348 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vr66\" (UniqueName: \"kubernetes.io/projected/45aa4887-c913-4ece-ae34-fcde33832621-kube-api-access-4vr66\") pod \"csi-snapshot-controller-operator-5685fbc7d-649db\" (UID: \"45aa4887-c913-4ece-ae34-fcde33832621\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-649db" Mar 12 18:13:21.409423 master-0 kubenswrapper[7337]: I0312 18:13:21.409381 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdb9w\" (UniqueName: \"kubernetes.io/projected/b6d288e3-8e73-44d2-874d-64c6c98dd991-kube-api-access-vdb9w\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" Mar 12 18:13:21.412462 master-0 kubenswrapper[7337]: I0312 18:13:21.412415 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs\") pod \"multus-admission-controller-8d675b596-kcpg5\" (UID: \"875bdfaa-b0a4-4412-a477-c962844e7057\") " pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" Mar 12 18:13:21.412562 master-0 kubenswrapper[7337]: I0312 18:13:21.412480 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:21.412691 master-0 kubenswrapper[7337]: E0312 18:13:21.412651 7337 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 12 18:13:21.412733 master-0 kubenswrapper[7337]: I0312 18:13:21.412710 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:13:21.412762 master-0 kubenswrapper[7337]: E0312 18:13:21.412751 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs podName:875bdfaa-b0a4-4412-a477-c962844e7057 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:22.412723955 +0000 UTC m=+2.881324922 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs") pod "multus-admission-controller-8d675b596-kcpg5" (UID: "875bdfaa-b0a4-4412-a477-c962844e7057") : secret "multus-admission-controller-secret" not found Mar 12 18:13:21.412877 master-0 kubenswrapper[7337]: E0312 18:13:21.412854 7337 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 12 18:13:21.412877 master-0 kubenswrapper[7337]: I0312 18:13:21.412863 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:21.412940 master-0 kubenswrapper[7337]: E0312 18:13:21.412922 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls podName:306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed nodeName:}" failed. No retries permitted until 2026-03-12 18:13:22.41290101 +0000 UTC m=+2.881502017 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-lqpbp" (UID: "306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed") : secret "node-tuning-operator-tls" not found Mar 12 18:13:21.412940 master-0 kubenswrapper[7337]: E0312 18:13:21.412932 7337 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 12 18:13:21.412996 master-0 kubenswrapper[7337]: E0312 18:13:21.412965 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert podName:306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed nodeName:}" failed. No retries permitted until 2026-03-12 18:13:22.412952381 +0000 UTC m=+2.881553338 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-lqpbp" (UID: "306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed") : secret "performance-addon-operator-webhook-cert" not found Mar 12 18:13:21.412996 master-0 kubenswrapper[7337]: E0312 18:13:21.412969 7337 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 12 18:13:21.413074 master-0 kubenswrapper[7337]: E0312 18:13:21.413059 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs podName:5b48f8fd-2efe-44e3-a6d7-c71358b83a2f nodeName:}" failed. No retries permitted until 2026-03-12 18:13:22.413040743 +0000 UTC m=+2.881641690 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs") pod "network-metrics-daemon-z4sc9" (UID: "5b48f8fd-2efe-44e3-a6d7-c71358b83a2f") : secret "metrics-daemon-secret" not found Mar 12 18:13:21.413242 master-0 kubenswrapper[7337]: I0312 18:13:21.413195 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:13:21.413310 master-0 kubenswrapper[7337]: I0312 18:13:21.413287 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:13:21.413341 master-0 kubenswrapper[7337]: E0312 18:13:21.413328 7337 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 18:13:21.413374 master-0 kubenswrapper[7337]: I0312 18:13:21.413335 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: \"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k" Mar 12 18:13:21.413374 master-0 kubenswrapper[7337]: E0312 18:13:21.413366 7337 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert podName:00755a4e-124c-4a51-b1c5-7c505b3637a8 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:22.413356051 +0000 UTC m=+2.881957188 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert") pod "cluster-version-operator-745944c6b7-cxwmx" (UID: "00755a4e-124c-4a51-b1c5-7c505b3637a8") : secret "cluster-version-operator-serving-cert" not found Mar 12 18:13:21.413430 master-0 kubenswrapper[7337]: E0312 18:13:21.413410 7337 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 12 18:13:21.413476 master-0 kubenswrapper[7337]: E0312 18:13:21.413462 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls podName:e22c7035-4b7a-48cb-9abb-db277b387842 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:22.413435653 +0000 UTC m=+2.882036600 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-l4krq" (UID: "e22c7035-4b7a-48cb-9abb-db277b387842") : secret "image-registry-operator-tls" not found Mar 12 18:13:21.413577 master-0 kubenswrapper[7337]: E0312 18:13:21.413553 7337 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 12 18:13:21.413824 master-0 kubenswrapper[7337]: E0312 18:13:21.413637 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls podName:8ad05507-e242-4ff8-ae80-c16ff9ee68e2 nodeName:}" failed. 
No retries permitted until 2026-03-12 18:13:22.413617368 +0000 UTC m=+2.882218305 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls") pod "dns-operator-589895fbb7-jqj5k" (UID: "8ad05507-e242-4ff8-ae80-c16ff9ee68e2") : secret "metrics-tls" not found Mar 12 18:13:21.431335 master-0 kubenswrapper[7337]: I0312 18:13:21.431228 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th8tc\" (UniqueName: \"kubernetes.io/projected/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-kube-api-access-th8tc\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: \"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k" Mar 12 18:13:21.449928 master-0 kubenswrapper[7337]: I0312 18:13:21.449873 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e22c7035-4b7a-48cb-9abb-db277b387842-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:13:21.475312 master-0 kubenswrapper[7337]: I0312 18:13:21.475261 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jgbv\" (UniqueName: \"kubernetes.io/projected/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-kube-api-access-9jgbv\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:13:21.493039 master-0 kubenswrapper[7337]: I0312 18:13:21.492992 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxsgv\" (UniqueName: \"kubernetes.io/projected/455f0aad-add2-49d0-995c-f92467bce2d6-kube-api-access-pxsgv\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: 
\"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:13:21.510171 master-0 kubenswrapper[7337]: I0312 18:13:21.510120 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00755a4e-124c-4a51-b1c5-7c505b3637a8-kube-api-access\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:13:21.530289 master-0 kubenswrapper[7337]: I0312 18:13:21.530218 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6ggg\" (UniqueName: \"kubernetes.io/projected/236f2886-bb69-49a7-9471-36454fd1cbd3-kube-api-access-b6ggg\") pod \"openshift-apiserver-operator-799b6db4d7-6qlzz\" (UID: \"236f2886-bb69-49a7-9471-36454fd1cbd3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" Mar 12 18:13:21.583478 master-0 kubenswrapper[7337]: I0312 18:13:21.583435 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:13:22.142038 master-0 kubenswrapper[7337]: E0312 18:13:22.141965 7337 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460" Mar 12 18:13:22.142626 master-0 kubenswrapper[7337]: E0312 18:13:22.142190 7337 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l22gw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4k8wm_openshift-network-operator(d92dddc8-a810-43f5-8beb-32d1c8ad8381): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 18:13:22.144034 master-0 kubenswrapper[7337]: E0312 18:13:22.143974 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-network-operator/iptables-alerter-4k8wm" podUID="d92dddc8-a810-43f5-8beb-32d1c8ad8381" Mar 12 18:13:22.323592 master-0 kubenswrapper[7337]: I0312 18:13:22.323543 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:13:22.323592 master-0 kubenswrapper[7337]: I0312 18:13:22.323591 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:13:22.323823 master-0 kubenswrapper[7337]: E0312 18:13:22.323718 7337 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 18:13:22.323823 master-0 kubenswrapper[7337]: I0312 18:13:22.323765 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert\") pod \"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" Mar 12 18:13:22.323823 master-0 kubenswrapper[7337]: E0312 18:13:22.323792 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics podName:4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:24.323772091 +0000 UTC m=+4.792373138 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-clkx5" (UID: "4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64") : secret "marketplace-operator-metrics" not found Mar 12 18:13:22.323955 master-0 kubenswrapper[7337]: I0312 18:13:22.323827 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:13:22.323955 master-0 kubenswrapper[7337]: E0312 18:13:22.323897 7337 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 18:13:22.323955 master-0 kubenswrapper[7337]: E0312 18:13:22.323910 7337 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 18:13:22.323955 master-0 kubenswrapper[7337]: E0312 18:13:22.323953 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert podName:47850839-bb4b-41e9-ac31-f1cabbb4926d nodeName:}" failed. No retries permitted until 2026-03-12 18:13:24.323935875 +0000 UTC m=+4.792536822 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert") pod "catalog-operator-7d9c49f57b-pslh7" (UID: "47850839-bb4b-41e9-ac31-f1cabbb4926d") : secret "catalog-operator-serving-cert" not found Mar 12 18:13:22.324098 master-0 kubenswrapper[7337]: E0312 18:13:22.323956 7337 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 12 18:13:22.324098 master-0 kubenswrapper[7337]: E0312 18:13:22.323972 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert podName:d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:24.323964156 +0000 UTC m=+4.792565233 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert") pod "olm-operator-d64cfc9db-npt4r" (UID: "d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27") : secret "olm-operator-serving-cert" not found Mar 12 18:13:22.324098 master-0 kubenswrapper[7337]: I0312 18:13:22.323907 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:13:22.324098 master-0 kubenswrapper[7337]: E0312 18:13:22.323988 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls podName:e94d098b-fbcc-4e85-b8ad-42f3a21c822c nodeName:}" failed. 
No retries permitted until 2026-03-12 18:13:24.323981306 +0000 UTC m=+4.792582253 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-fz79c" (UID: "e94d098b-fbcc-4e85-b8ad-42f3a21c822c") : secret "cluster-monitoring-operator-tls" not found Mar 12 18:13:22.324098 master-0 kubenswrapper[7337]: E0312 18:13:22.324013 7337 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 18:13:22.324098 master-0 kubenswrapper[7337]: I0312 18:13:22.324019 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" Mar 12 18:13:22.324098 master-0 kubenswrapper[7337]: E0312 18:13:22.324049 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert podName:51eb717b-d11f-4bc3-8df6-deb51d5889f3 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:24.324038968 +0000 UTC m=+4.792639915 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-kwv7s" (UID: "51eb717b-d11f-4bc3-8df6-deb51d5889f3") : secret "package-server-manager-serving-cert" not found Mar 12 18:13:22.324098 master-0 kubenswrapper[7337]: E0312 18:13:22.324080 7337 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 12 18:13:22.324098 master-0 kubenswrapper[7337]: E0312 18:13:22.324106 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls podName:d94dc349-c5cb-4f12-8e48-867030af4981 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:24.324098179 +0000 UTC m=+4.792699236 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls") pod "ingress-operator-677db989d6-4527l" (UID: "d94dc349-c5cb-4f12-8e48-867030af4981") : secret "metrics-tls" not found Mar 12 18:13:22.424779 master-0 kubenswrapper[7337]: I0312 18:13:22.424677 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:13:22.424779 master-0 kubenswrapper[7337]: I0312 18:13:22.424725 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: 
\"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:13:22.424779 master-0 kubenswrapper[7337]: I0312 18:13:22.424747 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: \"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k" Mar 12 18:13:22.424779 master-0 kubenswrapper[7337]: I0312 18:13:22.424774 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs\") pod \"multus-admission-controller-8d675b596-kcpg5\" (UID: \"875bdfaa-b0a4-4412-a477-c962844e7057\") " pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" Mar 12 18:13:22.425377 master-0 kubenswrapper[7337]: I0312 18:13:22.424801 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:22.425377 master-0 kubenswrapper[7337]: I0312 18:13:22.424835 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:13:22.425377 master-0 kubenswrapper[7337]: I0312 18:13:22.424868 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:22.425377 master-0 kubenswrapper[7337]: E0312 18:13:22.424969 7337 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 12 18:13:22.425377 master-0 kubenswrapper[7337]: E0312 18:13:22.425025 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert podName:306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed nodeName:}" failed. No retries permitted until 2026-03-12 18:13:24.425011199 +0000 UTC m=+4.893612136 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-lqpbp" (UID: "306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed") : secret "performance-addon-operator-webhook-cert" not found Mar 12 18:13:22.425377 master-0 kubenswrapper[7337]: E0312 18:13:22.425226 7337 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 18:13:22.425377 master-0 kubenswrapper[7337]: E0312 18:13:22.425248 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert podName:00755a4e-124c-4a51-b1c5-7c505b3637a8 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:24.425240984 +0000 UTC m=+4.893841931 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert") pod "cluster-version-operator-745944c6b7-cxwmx" (UID: "00755a4e-124c-4a51-b1c5-7c505b3637a8") : secret "cluster-version-operator-serving-cert" not found Mar 12 18:13:22.425377 master-0 kubenswrapper[7337]: E0312 18:13:22.425278 7337 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 12 18:13:22.425377 master-0 kubenswrapper[7337]: E0312 18:13:22.425295 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls podName:e22c7035-4b7a-48cb-9abb-db277b387842 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:24.425290246 +0000 UTC m=+4.893891193 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-l4krq" (UID: "e22c7035-4b7a-48cb-9abb-db277b387842") : secret "image-registry-operator-tls" not found Mar 12 18:13:22.425377 master-0 kubenswrapper[7337]: E0312 18:13:22.425322 7337 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 12 18:13:22.425377 master-0 kubenswrapper[7337]: E0312 18:13:22.425338 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls podName:8ad05507-e242-4ff8-ae80-c16ff9ee68e2 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:24.425333687 +0000 UTC m=+4.893934624 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls") pod "dns-operator-589895fbb7-jqj5k" (UID: "8ad05507-e242-4ff8-ae80-c16ff9ee68e2") : secret "metrics-tls" not found Mar 12 18:13:22.425377 master-0 kubenswrapper[7337]: E0312 18:13:22.425365 7337 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 12 18:13:22.425377 master-0 kubenswrapper[7337]: E0312 18:13:22.425382 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs podName:875bdfaa-b0a4-4412-a477-c962844e7057 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:24.425375768 +0000 UTC m=+4.893976705 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs") pod "multus-admission-controller-8d675b596-kcpg5" (UID: "875bdfaa-b0a4-4412-a477-c962844e7057") : secret "multus-admission-controller-secret" not found Mar 12 18:13:22.426133 master-0 kubenswrapper[7337]: E0312 18:13:22.425410 7337 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 12 18:13:22.426133 master-0 kubenswrapper[7337]: E0312 18:13:22.425662 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls podName:306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed nodeName:}" failed. No retries permitted until 2026-03-12 18:13:24.425421579 +0000 UTC m=+4.894022526 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-lqpbp" (UID: "306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed") : secret "node-tuning-operator-tls" not found Mar 12 18:13:22.426133 master-0 kubenswrapper[7337]: E0312 18:13:22.425705 7337 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 12 18:13:22.426133 master-0 kubenswrapper[7337]: E0312 18:13:22.425726 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs podName:5b48f8fd-2efe-44e3-a6d7-c71358b83a2f nodeName:}" failed. No retries permitted until 2026-03-12 18:13:24.425719827 +0000 UTC m=+4.894320774 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs") pod "network-metrics-daemon-z4sc9" (UID: "5b48f8fd-2efe-44e3-a6d7-c71358b83a2f") : secret "metrics-daemon-secret" not found Mar 12 18:13:22.684264 master-0 kubenswrapper[7337]: E0312 18:13:22.684030 7337 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953" Mar 12 18:13:22.684264 master-0 kubenswrapper[7337]: E0312 18:13:22.684219 7337 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 18:13:22.684264 master-0 kubenswrapper[7337]: container &Container{Name:authentication-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953,Command:[/bin/bash -ec],Args:[if [ -s /var/run/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then Mar 12 18:13:22.684264 master-0 
kubenswrapper[7337]: echo "Copying system trust bundle" Mar 12 18:13:22.684264 master-0 kubenswrapper[7337]: cp -f /var/run/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem Mar 12 18:13:22.684264 master-0 kubenswrapper[7337]: fi Mar 12 18:13:22.684264 master-0 kubenswrapper[7337]: exec authentication-operator operator --config=/var/run/configmaps/config/operator-config.yaml --v=2 --terminate-on-files=/var/run/configmaps/trusted-ca-bundle/ca-bundle.crt --terminate-on-files=/tmp/terminate Mar 12 18:13:22.684264 master-0 kubenswrapper[7337]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IMAGE_OAUTH_SERVER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d3571ade02a7c61123d62c53fda6a57031a52c058c0571759dc09f96b23978f,ValueFrom:nil,},EnvVar{Name:IMAGE_OAUTH_APISERVER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:OPERAND_OAUTH_SERVER_IMAGE_VERSION,Value:4.18.34_openshift,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{209715200 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:trusted-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/trusted-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:service-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/service-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jn9nf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod authentication-operator-7c6989d6c4-ljw8b_openshift-authentication-operator(062f1b21-2ffc-47da-8334-427c3b2a1a90): ErrImagePull: rpc error: code = Canceled desc = copying config: 
context canceled Mar 12 18:13:22.684264 master-0 kubenswrapper[7337]: > logger="UnhandledError" Mar 12 18:13:22.687621 master-0 kubenswrapper[7337]: E0312 18:13:22.687085 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" Mar 12 18:13:22.816849 master-0 kubenswrapper[7337]: I0312 18:13:22.814779 7337 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 18:13:22.838285 master-0 kubenswrapper[7337]: I0312 18:13:22.838241 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:13:22.938132 master-0 kubenswrapper[7337]: I0312 18:13:22.937826 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-cpthp"] Mar 12 18:13:23.818949 master-0 kubenswrapper[7337]: I0312 18:13:23.818687 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-649db" event={"ID":"45aa4887-c913-4ece-ae34-fcde33832621","Type":"ContainerStarted","Data":"713977d47dfecb905c7cc3c14de2a72254744fe363e6f7198ff24aaf349daf7b"} Mar 12 18:13:23.819876 master-0 kubenswrapper[7337]: I0312 18:13:23.819460 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp" event={"ID":"e720e1d0-5a6d-4b76-8b25-5963e24950f5","Type":"ContainerStarted","Data":"6cbf8532a0aab6166e00e40dafe24b7c97f2d79bb9206285a901edb45142b490"} Mar 12 18:13:23.820795 master-0 kubenswrapper[7337]: I0312 18:13:23.820766 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" event={"ID":"a1e2340b-ebca-40de-b1e0-8133999cd860","Type":"ContainerStarted","Data":"fff98590531dfb71359f592b09852a158d9cf8cc7fff20e92644173e6e6819dc"} Mar 12 18:13:23.821976 master-0 kubenswrapper[7337]: I0312 18:13:23.821948 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl" event={"ID":"055f5c67-f512-4510-99c5-e194944b0599","Type":"ContainerStarted","Data":"fce4a972222f063110d34772de7116adb2483b3e9c195060fc1414ecf2cd9f6c"} Mar 12 18:13:23.823213 master-0 kubenswrapper[7337]: I0312 18:13:23.823188 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k" event={"ID":"d4ae1240-e04e-48e9-88df-9f1a53508da7","Type":"ContainerStarted","Data":"336f9bff957643e2b1614f5b9ab58d3286fac81af162d3e42ef2ab143bd1a53e"} Mar 12 18:13:23.824309 master-0 kubenswrapper[7337]: I0312 18:13:23.824284 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" event={"ID":"236f2886-bb69-49a7-9471-36454fd1cbd3","Type":"ContainerStarted","Data":"6ae7a934b8aa2f254b8b82bbc367d7391db11d303ac3c55852c1da10c3f95301"} Mar 12 18:13:23.827819 master-0 kubenswrapper[7337]: I0312 18:13:23.827762 7337 generic.go:334] "Generic (PLEG): container finished" podID="37cd9c0a-697e-4e67-932b-b331ff77c8c0" containerID="a644d995b7e4a613ffad672523990100f60140e220dd50976725d9008b099a3d" exitCode=0 Mar 12 18:13:23.827968 master-0 kubenswrapper[7337]: I0312 18:13:23.827836 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" event={"ID":"37cd9c0a-697e-4e67-932b-b331ff77c8c0","Type":"ContainerDied","Data":"a644d995b7e4a613ffad672523990100f60140e220dd50976725d9008b099a3d"} Mar 12 18:13:23.830620 
master-0 kubenswrapper[7337]: I0312 18:13:23.830590 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp" event={"ID":"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa","Type":"ContainerStarted","Data":"05c0afcccf4bf3051eac46ea2747146033d8dbf283902873560ad4999c7825f8"} Mar 12 18:13:23.831698 master-0 kubenswrapper[7337]: I0312 18:13:23.831674 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" event={"ID":"e697746f-fb9e-4d10-ab61-33c68e62cc0d","Type":"ContainerStarted","Data":"2a197e2fe83ed2e384dda0d8770ef6e8d98b56d89ae78066b100f526847a5d4c"} Mar 12 18:13:23.836856 master-0 kubenswrapper[7337]: I0312 18:13:23.836254 7337 generic.go:334] "Generic (PLEG): container finished" podID="f3a2cda2-b70f-4128-a1be-48503f5aad6d" containerID="452ac1e185a248cfac36d370d36916ef0c27910988d43a12329a25e9765f77ac" exitCode=0 Mar 12 18:13:23.836856 master-0 kubenswrapper[7337]: I0312 18:13:23.836333 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" event={"ID":"f3a2cda2-b70f-4128-a1be-48503f5aad6d","Type":"ContainerDied","Data":"452ac1e185a248cfac36d370d36916ef0c27910988d43a12329a25e9765f77ac"} Mar 12 18:13:24.206540 master-0 kubenswrapper[7337]: I0312 18:13:24.206139 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:24.206540 master-0 kubenswrapper[7337]: I0312 18:13:24.206253 7337 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 18:13:24.282448 master-0 kubenswrapper[7337]: I0312 18:13:24.282407 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:13:24.319573 master-0 kubenswrapper[7337]: I0312 18:13:24.317803 7337 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9"] Mar 12 18:13:24.319573 master-0 kubenswrapper[7337]: E0312 18:13:24.317951 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43fdaf13-ffc1-4787-8dd2-90d0685b3124" containerName="prober" Mar 12 18:13:24.319573 master-0 kubenswrapper[7337]: I0312 18:13:24.317963 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="43fdaf13-ffc1-4787-8dd2-90d0685b3124" containerName="prober" Mar 12 18:13:24.319573 master-0 kubenswrapper[7337]: E0312 18:13:24.317970 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4a981c-9454-4e1f-951e-1a62737659cc" containerName="assisted-installer-controller" Mar 12 18:13:24.319573 master-0 kubenswrapper[7337]: I0312 18:13:24.317977 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4a981c-9454-4e1f-951e-1a62737659cc" containerName="assisted-installer-controller" Mar 12 18:13:24.319573 master-0 kubenswrapper[7337]: I0312 18:13:24.318030 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="43fdaf13-ffc1-4787-8dd2-90d0685b3124" containerName="prober" Mar 12 18:13:24.319573 master-0 kubenswrapper[7337]: I0312 18:13:24.318041 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a4a981c-9454-4e1f-951e-1a62737659cc" containerName="assisted-installer-controller" Mar 12 18:13:24.319573 master-0 kubenswrapper[7337]: I0312 18:13:24.318253 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" Mar 12 18:13:24.345111 master-0 kubenswrapper[7337]: I0312 18:13:24.343312 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9"] Mar 12 18:13:24.350533 master-0 kubenswrapper[7337]: I0312 18:13:24.347894 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:13:24.350533 master-0 kubenswrapper[7337]: I0312 18:13:24.347950 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert\") pod \"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" Mar 12 18:13:24.350533 master-0 kubenswrapper[7337]: I0312 18:13:24.347979 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:13:24.350533 master-0 kubenswrapper[7337]: I0312 18:13:24.348013 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfjj6\" (UniqueName: \"kubernetes.io/projected/bce831df-c604-4608-a24e-b14d62c5287a-kube-api-access-wfjj6\") pod \"csi-snapshot-controller-7577d6f48-2ltx9\" (UID: 
\"bce831df-c604-4608-a24e-b14d62c5287a\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" Mar 12 18:13:24.350533 master-0 kubenswrapper[7337]: I0312 18:13:24.348040 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:13:24.350533 master-0 kubenswrapper[7337]: I0312 18:13:24.348060 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" Mar 12 18:13:24.350533 master-0 kubenswrapper[7337]: I0312 18:13:24.348124 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:13:24.350533 master-0 kubenswrapper[7337]: E0312 18:13:24.348262 7337 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 18:13:24.350533 master-0 kubenswrapper[7337]: E0312 18:13:24.348319 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics podName:4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64 nodeName:}" failed. 
No retries permitted until 2026-03-12 18:13:28.348301228 +0000 UTC m=+8.816902175 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-clkx5" (UID: "4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64") : secret "marketplace-operator-metrics" not found Mar 12 18:13:24.350533 master-0 kubenswrapper[7337]: E0312 18:13:24.348744 7337 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 18:13:24.350533 master-0 kubenswrapper[7337]: E0312 18:13:24.348777 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert podName:47850839-bb4b-41e9-ac31-f1cabbb4926d nodeName:}" failed. No retries permitted until 2026-03-12 18:13:28.34876663 +0000 UTC m=+8.817367577 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert") pod "catalog-operator-7d9c49f57b-pslh7" (UID: "47850839-bb4b-41e9-ac31-f1cabbb4926d") : secret "catalog-operator-serving-cert" not found Mar 12 18:13:24.350533 master-0 kubenswrapper[7337]: E0312 18:13:24.348820 7337 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 18:13:24.350533 master-0 kubenswrapper[7337]: E0312 18:13:24.348838 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert podName:d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:28.348832572 +0000 UTC m=+8.817433519 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert") pod "olm-operator-d64cfc9db-npt4r" (UID: "d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27") : secret "olm-operator-serving-cert" not found Mar 12 18:13:24.350533 master-0 kubenswrapper[7337]: E0312 18:13:24.348875 7337 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 18:13:24.350533 master-0 kubenswrapper[7337]: E0312 18:13:24.348898 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert podName:51eb717b-d11f-4bc3-8df6-deb51d5889f3 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:28.348890733 +0000 UTC m=+8.817491680 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-kwv7s" (UID: "51eb717b-d11f-4bc3-8df6-deb51d5889f3") : secret "package-server-manager-serving-cert" not found Mar 12 18:13:24.350533 master-0 kubenswrapper[7337]: E0312 18:13:24.348954 7337 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 12 18:13:24.350533 master-0 kubenswrapper[7337]: E0312 18:13:24.348978 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls podName:e94d098b-fbcc-4e85-b8ad-42f3a21c822c nodeName:}" failed. No retries permitted until 2026-03-12 18:13:28.348968925 +0000 UTC m=+8.817569872 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-fz79c" (UID: "e94d098b-fbcc-4e85-b8ad-42f3a21c822c") : secret "cluster-monitoring-operator-tls" not found Mar 12 18:13:24.350533 master-0 kubenswrapper[7337]: E0312 18:13:24.349017 7337 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 12 18:13:24.350533 master-0 kubenswrapper[7337]: E0312 18:13:24.349038 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls podName:d94dc349-c5cb-4f12-8e48-867030af4981 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:28.349031847 +0000 UTC m=+8.817632794 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls") pod "ingress-operator-677db989d6-4527l" (UID: "d94dc349-c5cb-4f12-8e48-867030af4981") : secret "metrics-tls" not found Mar 12 18:13:24.450676 master-0 kubenswrapper[7337]: I0312 18:13:24.449376 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:13:24.450676 master-0 kubenswrapper[7337]: I0312 18:13:24.449463 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: \"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " 
pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k" Mar 12 18:13:24.450676 master-0 kubenswrapper[7337]: I0312 18:13:24.449497 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs\") pod \"multus-admission-controller-8d675b596-kcpg5\" (UID: \"875bdfaa-b0a4-4412-a477-c962844e7057\") " pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" Mar 12 18:13:24.450676 master-0 kubenswrapper[7337]: I0312 18:13:24.449560 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:24.450676 master-0 kubenswrapper[7337]: I0312 18:13:24.449640 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:13:24.450676 master-0 kubenswrapper[7337]: I0312 18:13:24.449699 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:13:24.450676 master-0 kubenswrapper[7337]: I0312 18:13:24.449745 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wfjj6\" (UniqueName: \"kubernetes.io/projected/bce831df-c604-4608-a24e-b14d62c5287a-kube-api-access-wfjj6\") pod \"csi-snapshot-controller-7577d6f48-2ltx9\" (UID: \"bce831df-c604-4608-a24e-b14d62c5287a\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" Mar 12 18:13:24.450676 master-0 kubenswrapper[7337]: I0312 18:13:24.449791 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:13:24.450676 master-0 kubenswrapper[7337]: E0312 18:13:24.449966 7337 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 18:13:24.450676 master-0 kubenswrapper[7337]: E0312 18:13:24.450042 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert podName:00755a4e-124c-4a51-b1c5-7c505b3637a8 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:28.450024358 +0000 UTC m=+8.918625315 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert") pod "cluster-version-operator-745944c6b7-cxwmx" (UID: "00755a4e-124c-4a51-b1c5-7c505b3637a8") : secret "cluster-version-operator-serving-cert" not found
Mar 12 18:13:24.450676 master-0 kubenswrapper[7337]: E0312 18:13:24.450130 7337 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 12 18:13:24.450676 master-0 kubenswrapper[7337]: E0312 18:13:24.450183 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls podName:e22c7035-4b7a-48cb-9abb-db277b387842 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:28.450152592 +0000 UTC m=+8.918753559 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-l4krq" (UID: "e22c7035-4b7a-48cb-9abb-db277b387842") : secret "image-registry-operator-tls" not found
Mar 12 18:13:24.450676 master-0 kubenswrapper[7337]: E0312 18:13:24.450231 7337 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 12 18:13:24.450676 master-0 kubenswrapper[7337]: E0312 18:13:24.450276 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls podName:8ad05507-e242-4ff8-ae80-c16ff9ee68e2 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:28.450267854 +0000 UTC m=+8.918868811 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls") pod "dns-operator-589895fbb7-jqj5k" (UID: "8ad05507-e242-4ff8-ae80-c16ff9ee68e2") : secret "metrics-tls" not found
Mar 12 18:13:24.450676 master-0 kubenswrapper[7337]: E0312 18:13:24.450348 7337 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 12 18:13:24.450676 master-0 kubenswrapper[7337]: E0312 18:13:24.450377 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs podName:875bdfaa-b0a4-4412-a477-c962844e7057 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:28.450369067 +0000 UTC m=+8.918970034 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs") pod "multus-admission-controller-8d675b596-kcpg5" (UID: "875bdfaa-b0a4-4412-a477-c962844e7057") : secret "multus-admission-controller-secret" not found
Mar 12 18:13:24.450676 master-0 kubenswrapper[7337]: E0312 18:13:24.450450 7337 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 12 18:13:24.450676 master-0 kubenswrapper[7337]: E0312 18:13:24.450476 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls podName:306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed nodeName:}" failed. No retries permitted until 2026-03-12 18:13:28.45046732 +0000 UTC m=+8.919068287 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-lqpbp" (UID: "306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed") : secret "node-tuning-operator-tls" not found
Mar 12 18:13:24.450676 master-0 kubenswrapper[7337]: E0312 18:13:24.450570 7337 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 12 18:13:24.450676 master-0 kubenswrapper[7337]: E0312 18:13:24.450617 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs podName:5b48f8fd-2efe-44e3-a6d7-c71358b83a2f nodeName:}" failed. No retries permitted until 2026-03-12 18:13:28.450586763 +0000 UTC m=+8.919187720 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs") pod "network-metrics-daemon-z4sc9" (UID: "5b48f8fd-2efe-44e3-a6d7-c71358b83a2f") : secret "metrics-daemon-secret" not found
Mar 12 18:13:24.450676 master-0 kubenswrapper[7337]: E0312 18:13:24.450662 7337 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 12 18:13:24.450676 master-0 kubenswrapper[7337]: E0312 18:13:24.450710 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert podName:306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed nodeName:}" failed. No retries permitted until 2026-03-12 18:13:28.450699575 +0000 UTC m=+8.919300542 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-lqpbp" (UID: "306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed") : secret "performance-addon-operator-webhook-cert" not found
Mar 12 18:13:24.488908 master-0 kubenswrapper[7337]: I0312 18:13:24.488147 7337 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 12 18:13:24.503826 master-0 kubenswrapper[7337]: I0312 18:13:24.503574 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfjj6\" (UniqueName: \"kubernetes.io/projected/bce831df-c604-4608-a24e-b14d62c5287a-kube-api-access-wfjj6\") pod \"csi-snapshot-controller-7577d6f48-2ltx9\" (UID: \"bce831df-c604-4608-a24e-b14d62c5287a\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9"
Mar 12 18:13:24.692175 master-0 kubenswrapper[7337]: I0312 18:13:24.692113 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9"
Mar 12 18:13:24.895466 master-0 kubenswrapper[7337]: I0312 18:13:24.894323 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9"]
Mar 12 18:13:24.904731 master-0 kubenswrapper[7337]: I0312 18:13:24.904037 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:13:24.933743 master-0 kubenswrapper[7337]: I0312 18:13:24.933681 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:13:25.496466 master-0 kubenswrapper[7337]: I0312 18:13:25.496426 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"]
Mar 12 18:13:25.497397 master-0 kubenswrapper[7337]: I0312 18:13:25.497376 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"
Mar 12 18:13:25.499458 master-0 kubenswrapper[7337]: I0312 18:13:25.499246 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 12 18:13:25.500166 master-0 kubenswrapper[7337]: I0312 18:13:25.499950 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 12 18:13:25.500366 master-0 kubenswrapper[7337]: I0312 18:13:25.500298 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 12 18:13:25.500366 master-0 kubenswrapper[7337]: I0312 18:13:25.500345 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 12 18:13:25.500456 master-0 kubenswrapper[7337]: I0312 18:13:25.500417 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 12 18:13:25.500731 master-0 kubenswrapper[7337]: I0312 18:13:25.500698 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 12 18:13:25.507062 master-0 kubenswrapper[7337]: I0312 18:13:25.506968 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"]
Mar 12 18:13:25.566458 master-0 kubenswrapper[7337]: I0312 18:13:25.566392 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-client-ca\") pod \"controller-manager-6f7fd6c796-hv2zw\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"
Mar 12 18:13:25.566458 master-0 kubenswrapper[7337]: I0312 18:13:25.566464 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-config\") pod \"controller-manager-6f7fd6c796-hv2zw\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"
Mar 12 18:13:25.566732 master-0 kubenswrapper[7337]: I0312 18:13:25.566489 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-serving-cert\") pod \"controller-manager-6f7fd6c796-hv2zw\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"
Mar 12 18:13:25.566732 master-0 kubenswrapper[7337]: I0312 18:13:25.566509 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-hv2zw\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"
Mar 12 18:13:25.566732 master-0 kubenswrapper[7337]: I0312 18:13:25.566584 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwt6t\" (UniqueName: \"kubernetes.io/projected/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-kube-api-access-cwt6t\") pod \"controller-manager-6f7fd6c796-hv2zw\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"
Mar 12 18:13:25.635573 master-0 kubenswrapper[7337]: I0312 18:13:25.635424 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-w72wh"]
Mar 12 18:13:25.642220 master-0 kubenswrapper[7337]: I0312 18:13:25.635953 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-w72wh"
Mar 12 18:13:25.642220 master-0 kubenswrapper[7337]: I0312 18:13:25.639177 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 12 18:13:25.642220 master-0 kubenswrapper[7337]: I0312 18:13:25.639467 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 12 18:13:25.644062 master-0 kubenswrapper[7337]: I0312 18:13:25.644030 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-w72wh"]
Mar 12 18:13:25.667921 master-0 kubenswrapper[7337]: I0312 18:13:25.667822 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-serving-cert\") pod \"controller-manager-6f7fd6c796-hv2zw\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"
Mar 12 18:13:25.668114 master-0 kubenswrapper[7337]: E0312 18:13:25.667991 7337 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 12 18:13:25.668114 master-0 kubenswrapper[7337]: I0312 18:13:25.668034 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-hv2zw\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"
Mar 12 18:13:25.668114 master-0 kubenswrapper[7337]: E0312 18:13:25.668069 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-serving-cert podName:4d11f3b8-c47f-46a1-9590-5b27f36ae0d5 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:26.168045611 +0000 UTC m=+6.636646558 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-serving-cert") pod "controller-manager-6f7fd6c796-hv2zw" (UID: "4d11f3b8-c47f-46a1-9590-5b27f36ae0d5") : secret "serving-cert" not found
Mar 12 18:13:25.668114 master-0 kubenswrapper[7337]: E0312 18:13:25.668103 7337 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found
Mar 12 18:13:25.668296 master-0 kubenswrapper[7337]: E0312 18:13:25.668144 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-proxy-ca-bundles podName:4d11f3b8-c47f-46a1-9590-5b27f36ae0d5 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:26.168132983 +0000 UTC m=+6.636733920 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-proxy-ca-bundles") pod "controller-manager-6f7fd6c796-hv2zw" (UID: "4d11f3b8-c47f-46a1-9590-5b27f36ae0d5") : configmap "openshift-global-ca" not found
Mar 12 18:13:25.668296 master-0 kubenswrapper[7337]: I0312 18:13:25.668163 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwt6t\" (UniqueName: \"kubernetes.io/projected/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-kube-api-access-cwt6t\") pod \"controller-manager-6f7fd6c796-hv2zw\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"
Mar 12 18:13:25.668296 master-0 kubenswrapper[7337]: I0312 18:13:25.668227 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8qw4\" (UniqueName: \"kubernetes.io/projected/8c241720-7815-40fd-8d4a-1685a43b5893-kube-api-access-l8qw4\") pod \"migrator-57ccdf9b5-w72wh\" (UID: \"8c241720-7815-40fd-8d4a-1685a43b5893\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-w72wh"
Mar 12 18:13:25.668535 master-0 kubenswrapper[7337]: I0312 18:13:25.668479 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-client-ca\") pod \"controller-manager-6f7fd6c796-hv2zw\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"
Mar 12 18:13:25.668610 master-0 kubenswrapper[7337]: E0312 18:13:25.668579 7337 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 12 18:13:25.668656 master-0 kubenswrapper[7337]: I0312 18:13:25.668608 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-config\") pod \"controller-manager-6f7fd6c796-hv2zw\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"
Mar 12 18:13:25.668656 master-0 kubenswrapper[7337]: E0312 18:13:25.668616 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-client-ca podName:4d11f3b8-c47f-46a1-9590-5b27f36ae0d5 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:26.168605575 +0000 UTC m=+6.637206592 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-client-ca") pod "controller-manager-6f7fd6c796-hv2zw" (UID: "4d11f3b8-c47f-46a1-9590-5b27f36ae0d5") : configmap "client-ca" not found
Mar 12 18:13:25.668747 master-0 kubenswrapper[7337]: E0312 18:13:25.668672 7337 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found
Mar 12 18:13:25.668747 master-0 kubenswrapper[7337]: E0312 18:13:25.668696 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-config podName:4d11f3b8-c47f-46a1-9590-5b27f36ae0d5 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:26.168688587 +0000 UTC m=+6.637289624 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-config") pod "controller-manager-6f7fd6c796-hv2zw" (UID: "4d11f3b8-c47f-46a1-9590-5b27f36ae0d5") : configmap "config" not found
Mar 12 18:13:25.686558 master-0 kubenswrapper[7337]: I0312 18:13:25.686258 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwt6t\" (UniqueName: \"kubernetes.io/projected/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-kube-api-access-cwt6t\") pod \"controller-manager-6f7fd6c796-hv2zw\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"
Mar 12 18:13:25.745336 master-0 kubenswrapper[7337]: I0312 18:13:25.745075 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:13:25.770319 master-0 kubenswrapper[7337]: I0312 18:13:25.765954 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:13:25.770319 master-0 kubenswrapper[7337]: I0312 18:13:25.769926 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8qw4\" (UniqueName: \"kubernetes.io/projected/8c241720-7815-40fd-8d4a-1685a43b5893-kube-api-access-l8qw4\") pod \"migrator-57ccdf9b5-w72wh\" (UID: \"8c241720-7815-40fd-8d4a-1685a43b5893\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-w72wh"
Mar 12 18:13:25.793752 master-0 kubenswrapper[7337]: I0312 18:13:25.793709 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8qw4\" (UniqueName: \"kubernetes.io/projected/8c241720-7815-40fd-8d4a-1685a43b5893-kube-api-access-l8qw4\") pod \"migrator-57ccdf9b5-w72wh\" (UID: \"8c241720-7815-40fd-8d4a-1685a43b5893\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-w72wh"
Mar 12 18:13:25.872137 master-0 kubenswrapper[7337]: I0312 18:13:25.872102 7337 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 18:13:25.872990 master-0 kubenswrapper[7337]: I0312 18:13:25.872964 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" event={"ID":"bce831df-c604-4608-a24e-b14d62c5287a","Type":"ContainerStarted","Data":"3e5cfbce195dab841bc3d549f7ec807dce5f9f747be2dff9f428eff5e81f95a6"}
Mar 12 18:13:25.954702 master-0 kubenswrapper[7337]: I0312 18:13:25.954647 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-w72wh"
Mar 12 18:13:26.185365 master-0 kubenswrapper[7337]: I0312 18:13:26.185254 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-serving-cert\") pod \"controller-manager-6f7fd6c796-hv2zw\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"
Mar 12 18:13:26.185365 master-0 kubenswrapper[7337]: I0312 18:13:26.185291 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-hv2zw\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"
Mar 12 18:13:26.185614 master-0 kubenswrapper[7337]: E0312 18:13:26.185463 7337 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 12 18:13:26.185852 master-0 kubenswrapper[7337]: I0312 18:13:26.185618 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-client-ca\") pod \"controller-manager-6f7fd6c796-hv2zw\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"
Mar 12 18:13:26.185852 master-0 kubenswrapper[7337]: E0312 18:13:26.185729 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-serving-cert podName:4d11f3b8-c47f-46a1-9590-5b27f36ae0d5 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:27.18569005 +0000 UTC m=+7.654291057 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-serving-cert") pod "controller-manager-6f7fd6c796-hv2zw" (UID: "4d11f3b8-c47f-46a1-9590-5b27f36ae0d5") : secret "serving-cert" not found
Mar 12 18:13:26.185852 master-0 kubenswrapper[7337]: E0312 18:13:26.185736 7337 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found
Mar 12 18:13:26.185852 master-0 kubenswrapper[7337]: E0312 18:13:26.185772 7337 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 12 18:13:26.185852 master-0 kubenswrapper[7337]: E0312 18:13:26.185825 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-client-ca podName:4d11f3b8-c47f-46a1-9590-5b27f36ae0d5 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:27.185808913 +0000 UTC m=+7.654409860 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-client-ca") pod "controller-manager-6f7fd6c796-hv2zw" (UID: "4d11f3b8-c47f-46a1-9590-5b27f36ae0d5") : configmap "client-ca" not found
Mar 12 18:13:26.185852 master-0 kubenswrapper[7337]: I0312 18:13:26.185821 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-config\") pod \"controller-manager-6f7fd6c796-hv2zw\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"
Mar 12 18:13:26.185852 master-0 kubenswrapper[7337]: E0312 18:13:26.185840 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-proxy-ca-bundles podName:4d11f3b8-c47f-46a1-9590-5b27f36ae0d5 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:27.185833313 +0000 UTC m=+7.654434260 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-proxy-ca-bundles") pod "controller-manager-6f7fd6c796-hv2zw" (UID: "4d11f3b8-c47f-46a1-9590-5b27f36ae0d5") : configmap "openshift-global-ca" not found
Mar 12 18:13:26.186064 master-0 kubenswrapper[7337]: E0312 18:13:26.185870 7337 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found
Mar 12 18:13:26.186064 master-0 kubenswrapper[7337]: E0312 18:13:26.185909 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-config podName:4d11f3b8-c47f-46a1-9590-5b27f36ae0d5 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:27.185895345 +0000 UTC m=+7.654496292 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-config") pod "controller-manager-6f7fd6c796-hv2zw" (UID: "4d11f3b8-c47f-46a1-9590-5b27f36ae0d5") : configmap "config" not found
Mar 12 18:13:26.657315 master-0 kubenswrapper[7337]: I0312 18:13:26.657272 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"]
Mar 12 18:13:26.657723 master-0 kubenswrapper[7337]: E0312 18:13:26.657681 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw" podUID="4d11f3b8-c47f-46a1-9590-5b27f36ae0d5"
Mar 12 18:13:26.668461 master-0 kubenswrapper[7337]: I0312 18:13:26.668150 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"]
Mar 12 18:13:26.669791 master-0 kubenswrapper[7337]: I0312 18:13:26.669277 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"
Mar 12 18:13:26.672160 master-0 kubenswrapper[7337]: I0312 18:13:26.671598 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 12 18:13:26.679065 master-0 kubenswrapper[7337]: I0312 18:13:26.678920 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 12 18:13:26.679910 master-0 kubenswrapper[7337]: I0312 18:13:26.679123 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 12 18:13:26.679910 master-0 kubenswrapper[7337]: I0312 18:13:26.679255 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 12 18:13:26.679910 master-0 kubenswrapper[7337]: I0312 18:13:26.679610 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 12 18:13:26.683384 master-0 kubenswrapper[7337]: I0312 18:13:26.683346 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"]
Mar 12 18:13:26.801540 master-0 kubenswrapper[7337]: I0312 18:13:26.792136 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-config\") pod \"route-controller-manager-5b45cc45cc-kjvgg\" (UID: \"41644344-9d23-4ed1-9044-4fe65cab6159\") " pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"
Mar 12 18:13:26.801540 master-0 kubenswrapper[7337]: I0312 18:13:26.792199 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41644344-9d23-4ed1-9044-4fe65cab6159-serving-cert\") pod \"route-controller-manager-5b45cc45cc-kjvgg\" (UID: \"41644344-9d23-4ed1-9044-4fe65cab6159\") " pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"
Mar 12 18:13:26.801540 master-0 kubenswrapper[7337]: I0312 18:13:26.792285 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvqfn\" (UniqueName: \"kubernetes.io/projected/41644344-9d23-4ed1-9044-4fe65cab6159-kube-api-access-jvqfn\") pod \"route-controller-manager-5b45cc45cc-kjvgg\" (UID: \"41644344-9d23-4ed1-9044-4fe65cab6159\") " pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"
Mar 12 18:13:26.801540 master-0 kubenswrapper[7337]: I0312 18:13:26.792343 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-client-ca\") pod \"route-controller-manager-5b45cc45cc-kjvgg\" (UID: \"41644344-9d23-4ed1-9044-4fe65cab6159\") " pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"
Mar 12 18:13:26.874987 master-0 kubenswrapper[7337]: I0312 18:13:26.874919 7337 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 18:13:26.875369 master-0 kubenswrapper[7337]: I0312 18:13:26.875343 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"
Mar 12 18:13:26.879272 master-0 kubenswrapper[7337]: I0312 18:13:26.879239 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-769nb"]
Mar 12 18:13:26.879690 master-0 kubenswrapper[7337]: I0312 18:13:26.879672 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-84bfdbbb7f-769nb"
Mar 12 18:13:26.883256 master-0 kubenswrapper[7337]: I0312 18:13:26.882552 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 12 18:13:26.883256 master-0 kubenswrapper[7337]: I0312 18:13:26.882657 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 12 18:13:26.883256 master-0 kubenswrapper[7337]: I0312 18:13:26.882716 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 12 18:13:26.883256 master-0 kubenswrapper[7337]: I0312 18:13:26.882905 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 12 18:13:26.885148 master-0 kubenswrapper[7337]: I0312 18:13:26.885129 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"
Mar 12 18:13:26.891398 master-0 kubenswrapper[7337]: I0312 18:13:26.891364 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-769nb"]
Mar 12 18:13:26.892840 master-0 kubenswrapper[7337]: I0312 18:13:26.892815 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-config\") pod \"route-controller-manager-5b45cc45cc-kjvgg\" (UID: \"41644344-9d23-4ed1-9044-4fe65cab6159\") " pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"
Mar 12 18:13:26.892901 master-0 kubenswrapper[7337]: I0312 18:13:26.892867 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41644344-9d23-4ed1-9044-4fe65cab6159-serving-cert\") pod \"route-controller-manager-5b45cc45cc-kjvgg\" (UID: \"41644344-9d23-4ed1-9044-4fe65cab6159\") " pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"
Mar 12 18:13:26.892955 master-0 kubenswrapper[7337]: I0312 18:13:26.892933 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvqfn\" (UniqueName: \"kubernetes.io/projected/41644344-9d23-4ed1-9044-4fe65cab6159-kube-api-access-jvqfn\") pod \"route-controller-manager-5b45cc45cc-kjvgg\" (UID: \"41644344-9d23-4ed1-9044-4fe65cab6159\") " pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"
Mar 12 18:13:26.892998 master-0 kubenswrapper[7337]: I0312 18:13:26.892960 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-client-ca\") pod \"route-controller-manager-5b45cc45cc-kjvgg\" (UID: \"41644344-9d23-4ed1-9044-4fe65cab6159\") " pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"
Mar 12 18:13:26.893088 master-0 kubenswrapper[7337]: E0312 18:13:26.893071 7337 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 12 18:13:26.893134 master-0 kubenswrapper[7337]: E0312 18:13:26.893114 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-client-ca podName:41644344-9d23-4ed1-9044-4fe65cab6159 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:27.393101351 +0000 UTC m=+7.861702298 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-client-ca") pod "route-controller-manager-5b45cc45cc-kjvgg" (UID: "41644344-9d23-4ed1-9044-4fe65cab6159") : configmap "client-ca" not found
Mar 12 18:13:26.893425 master-0 kubenswrapper[7337]: E0312 18:13:26.893399 7337 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 12 18:13:26.893478 master-0 kubenswrapper[7337]: E0312 18:13:26.893467 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41644344-9d23-4ed1-9044-4fe65cab6159-serving-cert podName:41644344-9d23-4ed1-9044-4fe65cab6159 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:27.39344774 +0000 UTC m=+7.862048757 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41644344-9d23-4ed1-9044-4fe65cab6159-serving-cert") pod "route-controller-manager-5b45cc45cc-kjvgg" (UID: "41644344-9d23-4ed1-9044-4fe65cab6159") : secret "serving-cert" not found
Mar 12 18:13:26.894361 master-0 kubenswrapper[7337]: I0312 18:13:26.894339 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-config\") pod \"route-controller-manager-5b45cc45cc-kjvgg\" (UID: \"41644344-9d23-4ed1-9044-4fe65cab6159\") " pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"
Mar 12 18:13:26.913982 master-0 kubenswrapper[7337]: I0312 18:13:26.913897 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvqfn\" (UniqueName: \"kubernetes.io/projected/41644344-9d23-4ed1-9044-4fe65cab6159-kube-api-access-jvqfn\") pod \"route-controller-manager-5b45cc45cc-kjvgg\" (UID: \"41644344-9d23-4ed1-9044-4fe65cab6159\") " pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"
Mar 12 18:13:26.919737 master-0 kubenswrapper[7337]: I0312 18:13:26.919703 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:13:26.924303 master-0 kubenswrapper[7337]: I0312 18:13:26.924283 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:13:26.993511 master-0 kubenswrapper[7337]: I0312 18:13:26.993467 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwt6t\" (UniqueName: \"kubernetes.io/projected/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-kube-api-access-cwt6t\") pod \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") "
Mar 12 18:13:26.993901 master-0 kubenswrapper[7337]: I0312 18:13:26.993783 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtrvs\" (UniqueName: \"kubernetes.io/projected/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3-kube-api-access-xtrvs\") pod \"service-ca-84bfdbbb7f-769nb\" (UID: \"8fe3d699-023e-4de7-8d42-6c9d8a5e68f3\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-769nb"
Mar 12 18:13:26.993901 master-0 kubenswrapper[7337]: I0312 18:13:26.993851 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3-signing-cabundle\") pod \"service-ca-84bfdbbb7f-769nb\" (UID: \"8fe3d699-023e-4de7-8d42-6c9d8a5e68f3\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-769nb"
Mar 12 18:13:26.993999 master-0 kubenswrapper[7337]: I0312 18:13:26.993968 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3-signing-key\") pod \"service-ca-84bfdbbb7f-769nb\" (UID: \"8fe3d699-023e-4de7-8d42-6c9d8a5e68f3\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-769nb"
Mar 12 18:13:26.996273 master-0 kubenswrapper[7337]: I0312 18:13:26.996225 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-kube-api-access-cwt6t" (OuterVolumeSpecName: "kube-api-access-cwt6t") pod "4d11f3b8-c47f-46a1-9590-5b27f36ae0d5" (UID: "4d11f3b8-c47f-46a1-9590-5b27f36ae0d5"). InnerVolumeSpecName "kube-api-access-cwt6t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:13:27.095201 master-0 kubenswrapper[7337]: I0312 18:13:27.095153 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3-signing-key\") pod \"service-ca-84bfdbbb7f-769nb\" (UID: \"8fe3d699-023e-4de7-8d42-6c9d8a5e68f3\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-769nb"
Mar 12 18:13:27.095401 master-0 kubenswrapper[7337]: I0312 18:13:27.095366 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtrvs\" (UniqueName: \"kubernetes.io/projected/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3-kube-api-access-xtrvs\") pod \"service-ca-84bfdbbb7f-769nb\" (UID: \"8fe3d699-023e-4de7-8d42-6c9d8a5e68f3\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-769nb"
Mar 12 18:13:27.095466 master-0 kubenswrapper[7337]: I0312 18:13:27.095403 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3-signing-cabundle\") pod \"service-ca-84bfdbbb7f-769nb\" (UID: \"8fe3d699-023e-4de7-8d42-6c9d8a5e68f3\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-769nb"
Mar 12 18:13:27.095547 master-0 kubenswrapper[7337]: I0312
18:13:27.095479 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwt6t\" (UniqueName: \"kubernetes.io/projected/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-kube-api-access-cwt6t\") on node \"master-0\" DevicePath \"\"" Mar 12 18:13:27.096438 master-0 kubenswrapper[7337]: I0312 18:13:27.096410 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3-signing-cabundle\") pod \"service-ca-84bfdbbb7f-769nb\" (UID: \"8fe3d699-023e-4de7-8d42-6c9d8a5e68f3\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-769nb" Mar 12 18:13:27.101507 master-0 kubenswrapper[7337]: I0312 18:13:27.101458 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3-signing-key\") pod \"service-ca-84bfdbbb7f-769nb\" (UID: \"8fe3d699-023e-4de7-8d42-6c9d8a5e68f3\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-769nb" Mar 12 18:13:27.116721 master-0 kubenswrapper[7337]: I0312 18:13:27.116677 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtrvs\" (UniqueName: \"kubernetes.io/projected/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3-kube-api-access-xtrvs\") pod \"service-ca-84bfdbbb7f-769nb\" (UID: \"8fe3d699-023e-4de7-8d42-6c9d8a5e68f3\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-769nb" Mar 12 18:13:27.197062 master-0 kubenswrapper[7337]: I0312 18:13:27.196934 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-client-ca\") pod \"controller-manager-6f7fd6c796-hv2zw\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw" Mar 12 18:13:27.197242 master-0 kubenswrapper[7337]: I0312 18:13:27.197067 7337 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-config\") pod \"controller-manager-6f7fd6c796-hv2zw\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw" Mar 12 18:13:27.197242 master-0 kubenswrapper[7337]: I0312 18:13:27.197201 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-serving-cert\") pod \"controller-manager-6f7fd6c796-hv2zw\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw" Mar 12 18:13:27.197343 master-0 kubenswrapper[7337]: E0312 18:13:27.197091 7337 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 12 18:13:27.197343 master-0 kubenswrapper[7337]: E0312 18:13:27.197334 7337 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 12 18:13:27.197427 master-0 kubenswrapper[7337]: E0312 18:13:27.197338 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-client-ca podName:4d11f3b8-c47f-46a1-9590-5b27f36ae0d5 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:29.197297737 +0000 UTC m=+9.665898684 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-client-ca") pod "controller-manager-6f7fd6c796-hv2zw" (UID: "4d11f3b8-c47f-46a1-9590-5b27f36ae0d5") : configmap "client-ca" not found Mar 12 18:13:27.197427 master-0 kubenswrapper[7337]: E0312 18:13:27.197375 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-serving-cert podName:4d11f3b8-c47f-46a1-9590-5b27f36ae0d5 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:29.197366288 +0000 UTC m=+9.665967235 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-serving-cert") pod "controller-manager-6f7fd6c796-hv2zw" (UID: "4d11f3b8-c47f-46a1-9590-5b27f36ae0d5") : secret "serving-cert" not found Mar 12 18:13:27.198228 master-0 kubenswrapper[7337]: I0312 18:13:27.198201 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-config\") pod \"controller-manager-6f7fd6c796-hv2zw\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw" Mar 12 18:13:27.198295 master-0 kubenswrapper[7337]: I0312 18:13:27.197229 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-hv2zw\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw" Mar 12 18:13:27.198511 master-0 kubenswrapper[7337]: I0312 18:13:27.198478 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-hv2zw\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw" Mar 12 18:13:27.202716 master-0 kubenswrapper[7337]: I0312 18:13:27.202683 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-84bfdbbb7f-769nb" Mar 12 18:13:27.299575 master-0 kubenswrapper[7337]: I0312 18:13:27.299498 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-config\") pod \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " Mar 12 18:13:27.299575 master-0 kubenswrapper[7337]: I0312 18:13:27.299581 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-proxy-ca-bundles\") pod \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\" (UID: \"4d11f3b8-c47f-46a1-9590-5b27f36ae0d5\") " Mar 12 18:13:27.301061 master-0 kubenswrapper[7337]: I0312 18:13:27.300169 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-config" (OuterVolumeSpecName: "config") pod "4d11f3b8-c47f-46a1-9590-5b27f36ae0d5" (UID: "4d11f3b8-c47f-46a1-9590-5b27f36ae0d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:13:27.301134 master-0 kubenswrapper[7337]: I0312 18:13:27.300426 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4d11f3b8-c47f-46a1-9590-5b27f36ae0d5" (UID: "4d11f3b8-c47f-46a1-9590-5b27f36ae0d5"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:13:27.401112 master-0 kubenswrapper[7337]: I0312 18:13:27.401053 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41644344-9d23-4ed1-9044-4fe65cab6159-serving-cert\") pod \"route-controller-manager-5b45cc45cc-kjvgg\" (UID: \"41644344-9d23-4ed1-9044-4fe65cab6159\") " pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg" Mar 12 18:13:27.401301 master-0 kubenswrapper[7337]: I0312 18:13:27.401147 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-client-ca\") pod \"route-controller-manager-5b45cc45cc-kjvgg\" (UID: \"41644344-9d23-4ed1-9044-4fe65cab6159\") " pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg" Mar 12 18:13:27.401301 master-0 kubenswrapper[7337]: E0312 18:13:27.401224 7337 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 12 18:13:27.401301 master-0 kubenswrapper[7337]: E0312 18:13:27.401271 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-client-ca podName:41644344-9d23-4ed1-9044-4fe65cab6159 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:28.40125609 +0000 UTC m=+8.869857037 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-client-ca") pod "route-controller-manager-5b45cc45cc-kjvgg" (UID: "41644344-9d23-4ed1-9044-4fe65cab6159") : configmap "client-ca" not found Mar 12 18:13:27.401301 master-0 kubenswrapper[7337]: E0312 18:13:27.401275 7337 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 12 18:13:27.401485 master-0 kubenswrapper[7337]: E0312 18:13:27.401370 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41644344-9d23-4ed1-9044-4fe65cab6159-serving-cert podName:41644344-9d23-4ed1-9044-4fe65cab6159 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:28.401343802 +0000 UTC m=+8.869944859 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41644344-9d23-4ed1-9044-4fe65cab6159-serving-cert") pod "route-controller-manager-5b45cc45cc-kjvgg" (UID: "41644344-9d23-4ed1-9044-4fe65cab6159") : secret "serving-cert" not found Mar 12 18:13:27.401558 master-0 kubenswrapper[7337]: I0312 18:13:27.401490 7337 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:13:27.401558 master-0 kubenswrapper[7337]: I0312 18:13:27.401529 7337 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 12 18:13:27.659334 master-0 kubenswrapper[7337]: I0312 18:13:27.659061 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-w72wh"] Mar 12 18:13:27.672542 master-0 kubenswrapper[7337]: I0312 18:13:27.672318 7337 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-769nb"] Mar 12 18:13:27.877936 master-0 kubenswrapper[7337]: I0312 18:13:27.877893 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-w72wh" event={"ID":"8c241720-7815-40fd-8d4a-1685a43b5893","Type":"ContainerStarted","Data":"7f1277c10cbb7843daf01cf48e1bbb02b9db679e347497370ac485520e63be09"} Mar 12 18:13:27.878910 master-0 kubenswrapper[7337]: I0312 18:13:27.878891 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" event={"ID":"bce831df-c604-4608-a24e-b14d62c5287a","Type":"ContainerStarted","Data":"e722a80d2945b3420fc5caeacc6065896e4511dd6a2605aa37b3ca081a3ef049"} Mar 12 18:13:27.880962 master-0 kubenswrapper[7337]: I0312 18:13:27.880904 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" event={"ID":"37cd9c0a-697e-4e67-932b-b331ff77c8c0","Type":"ContainerStarted","Data":"3100d6853d6653605e1a09e2cf985a9ecb63a1450916f3d98d5854fad367310a"} Mar 12 18:13:27.881201 master-0 kubenswrapper[7337]: I0312 18:13:27.881175 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:13:27.882800 master-0 kubenswrapper[7337]: I0312 18:13:27.882741 7337 generic.go:334] "Generic (PLEG): container finished" podID="f3a2cda2-b70f-4128-a1be-48503f5aad6d" containerID="fdad7d8f064f703bfbf88c807f07178484bba5e53ad147ecc9d74969fce8c221" exitCode=0 Mar 12 18:13:27.882878 master-0 kubenswrapper[7337]: I0312 18:13:27.882754 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" event={"ID":"f3a2cda2-b70f-4128-a1be-48503f5aad6d","Type":"ContainerDied","Data":"fdad7d8f064f703bfbf88c807f07178484bba5e53ad147ecc9d74969fce8c221"} Mar 12 
18:13:27.884958 master-0 kubenswrapper[7337]: I0312 18:13:27.884506 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-769nb" event={"ID":"8fe3d699-023e-4de7-8d42-6c9d8a5e68f3","Type":"ContainerStarted","Data":"7007abd6bd87f278095a5c5bea805876ca0e2532537842c0b1266ddd70ce3cd3"} Mar 12 18:13:27.884958 master-0 kubenswrapper[7337]: I0312 18:13:27.884548 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-769nb" event={"ID":"8fe3d699-023e-4de7-8d42-6c9d8a5e68f3","Type":"ContainerStarted","Data":"4e02da5dec5be8e8f6d924d6c2fb726f7b25e71cacfc4eb1074f2a274b8a70bf"} Mar 12 18:13:27.884958 master-0 kubenswrapper[7337]: I0312 18:13:27.884575 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw" Mar 12 18:13:27.889694 master-0 kubenswrapper[7337]: I0312 18:13:27.889675 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:13:27.892335 master-0 kubenswrapper[7337]: I0312 18:13:27.892189 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" podStartSLOduration=1.25405991 podStartE2EDuration="3.892177109s" podCreationTimestamp="2026-03-12 18:13:24 +0000 UTC" firstStartedPulling="2026-03-12 18:13:24.907339882 +0000 UTC m=+5.375940829" lastFinishedPulling="2026-03-12 18:13:27.545457081 +0000 UTC m=+8.014058028" observedRunningTime="2026-03-12 18:13:27.891782049 +0000 UTC m=+8.360383016" watchObservedRunningTime="2026-03-12 18:13:27.892177109 +0000 UTC m=+8.360778056" Mar 12 18:13:27.925936 master-0 kubenswrapper[7337]: I0312 18:13:27.925879 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5ff9688f8-27hlj"] Mar 12 18:13:27.953953 master-0 
kubenswrapper[7337]: I0312 18:13:27.953901 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"] Mar 12 18:13:27.953953 master-0 kubenswrapper[7337]: I0312 18:13:27.953939 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-hv2zw"] Mar 12 18:13:27.953953 master-0 kubenswrapper[7337]: I0312 18:13:27.953953 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5ff9688f8-27hlj"] Mar 12 18:13:27.954178 master-0 kubenswrapper[7337]: I0312 18:13:27.954020 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5ff9688f8-27hlj" Mar 12 18:13:27.956002 master-0 kubenswrapper[7337]: I0312 18:13:27.955710 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 18:13:27.956002 master-0 kubenswrapper[7337]: I0312 18:13:27.955710 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 18:13:27.956002 master-0 kubenswrapper[7337]: I0312 18:13:27.955971 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 18:13:27.957624 master-0 kubenswrapper[7337]: I0312 18:13:27.956655 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 18:13:27.957624 master-0 kubenswrapper[7337]: I0312 18:13:27.956810 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 18:13:27.964171 master-0 kubenswrapper[7337]: I0312 18:13:27.964121 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 18:13:28.009090 master-0 
kubenswrapper[7337]: I0312 18:13:28.009035 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtg66\" (UniqueName: \"kubernetes.io/projected/f7998f02-c803-4adf-9198-7408513fe07e-kube-api-access-jtg66\") pod \"controller-manager-5ff9688f8-27hlj\" (UID: \"f7998f02-c803-4adf-9198-7408513fe07e\") " pod="openshift-controller-manager/controller-manager-5ff9688f8-27hlj" Mar 12 18:13:28.009966 master-0 kubenswrapper[7337]: I0312 18:13:28.009150 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-proxy-ca-bundles\") pod \"controller-manager-5ff9688f8-27hlj\" (UID: \"f7998f02-c803-4adf-9198-7408513fe07e\") " pod="openshift-controller-manager/controller-manager-5ff9688f8-27hlj" Mar 12 18:13:28.009966 master-0 kubenswrapper[7337]: I0312 18:13:28.009219 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7998f02-c803-4adf-9198-7408513fe07e-serving-cert\") pod \"controller-manager-5ff9688f8-27hlj\" (UID: \"f7998f02-c803-4adf-9198-7408513fe07e\") " pod="openshift-controller-manager/controller-manager-5ff9688f8-27hlj" Mar 12 18:13:28.009966 master-0 kubenswrapper[7337]: I0312 18:13:28.009555 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-client-ca\") pod \"controller-manager-5ff9688f8-27hlj\" (UID: \"f7998f02-c803-4adf-9198-7408513fe07e\") " pod="openshift-controller-manager/controller-manager-5ff9688f8-27hlj" Mar 12 18:13:28.009966 master-0 kubenswrapper[7337]: I0312 18:13:28.009884 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-config\") pod \"controller-manager-5ff9688f8-27hlj\" (UID: \"f7998f02-c803-4adf-9198-7408513fe07e\") " pod="openshift-controller-manager/controller-manager-5ff9688f8-27hlj" Mar 12 18:13:28.009966 master-0 kubenswrapper[7337]: I0312 18:13:28.009959 7337 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 18:13:28.009966 master-0 kubenswrapper[7337]: I0312 18:13:28.009972 7337 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 18:13:28.022863 master-0 kubenswrapper[7337]: I0312 18:13:28.022031 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-84bfdbbb7f-769nb" podStartSLOduration=2.022017445 podStartE2EDuration="2.022017445s" podCreationTimestamp="2026-03-12 18:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:13:28.021972594 +0000 UTC m=+8.490573551" watchObservedRunningTime="2026-03-12 18:13:28.022017445 +0000 UTC m=+8.490618402" Mar 12 18:13:28.110904 master-0 kubenswrapper[7337]: I0312 18:13:28.110849 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtg66\" (UniqueName: \"kubernetes.io/projected/f7998f02-c803-4adf-9198-7408513fe07e-kube-api-access-jtg66\") pod \"controller-manager-5ff9688f8-27hlj\" (UID: \"f7998f02-c803-4adf-9198-7408513fe07e\") " pod="openshift-controller-manager/controller-manager-5ff9688f8-27hlj" Mar 12 18:13:28.111092 master-0 kubenswrapper[7337]: I0312 18:13:28.110925 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-proxy-ca-bundles\") pod \"controller-manager-5ff9688f8-27hlj\" (UID: \"f7998f02-c803-4adf-9198-7408513fe07e\") " pod="openshift-controller-manager/controller-manager-5ff9688f8-27hlj" Mar 12 18:13:28.111092 master-0 kubenswrapper[7337]: I0312 18:13:28.110975 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7998f02-c803-4adf-9198-7408513fe07e-serving-cert\") pod \"controller-manager-5ff9688f8-27hlj\" (UID: \"f7998f02-c803-4adf-9198-7408513fe07e\") " pod="openshift-controller-manager/controller-manager-5ff9688f8-27hlj" Mar 12 18:13:28.111092 master-0 kubenswrapper[7337]: I0312 18:13:28.110990 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-client-ca\") pod \"controller-manager-5ff9688f8-27hlj\" (UID: \"f7998f02-c803-4adf-9198-7408513fe07e\") " pod="openshift-controller-manager/controller-manager-5ff9688f8-27hlj" Mar 12 18:13:28.111092 master-0 kubenswrapper[7337]: I0312 18:13:28.111039 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-config\") pod \"controller-manager-5ff9688f8-27hlj\" (UID: \"f7998f02-c803-4adf-9198-7408513fe07e\") " pod="openshift-controller-manager/controller-manager-5ff9688f8-27hlj" Mar 12 18:13:28.111407 master-0 kubenswrapper[7337]: E0312 18:13:28.111354 7337 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 12 18:13:28.111487 master-0 kubenswrapper[7337]: E0312 18:13:28.111444 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7998f02-c803-4adf-9198-7408513fe07e-serving-cert podName:f7998f02-c803-4adf-9198-7408513fe07e nodeName:}" failed. 
No retries permitted until 2026-03-12 18:13:28.611426432 +0000 UTC m=+9.080027379 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f7998f02-c803-4adf-9198-7408513fe07e-serving-cert") pod "controller-manager-5ff9688f8-27hlj" (UID: "f7998f02-c803-4adf-9198-7408513fe07e") : secret "serving-cert" not found Mar 12 18:13:28.111618 master-0 kubenswrapper[7337]: E0312 18:13:28.111592 7337 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 12 18:13:28.111662 master-0 kubenswrapper[7337]: E0312 18:13:28.111621 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-client-ca podName:f7998f02-c803-4adf-9198-7408513fe07e nodeName:}" failed. No retries permitted until 2026-03-12 18:13:28.611615057 +0000 UTC m=+9.080216004 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-client-ca") pod "controller-manager-5ff9688f8-27hlj" (UID: "f7998f02-c803-4adf-9198-7408513fe07e") : configmap "client-ca" not found Mar 12 18:13:28.112165 master-0 kubenswrapper[7337]: I0312 18:13:28.112142 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-proxy-ca-bundles\") pod \"controller-manager-5ff9688f8-27hlj\" (UID: \"f7998f02-c803-4adf-9198-7408513fe07e\") " pod="openshift-controller-manager/controller-manager-5ff9688f8-27hlj" Mar 12 18:13:28.112709 master-0 kubenswrapper[7337]: I0312 18:13:28.112685 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-config\") pod \"controller-manager-5ff9688f8-27hlj\" (UID: \"f7998f02-c803-4adf-9198-7408513fe07e\") " 
pod="openshift-controller-manager/controller-manager-5ff9688f8-27hlj"
Mar 12 18:13:28.137585 master-0 kubenswrapper[7337]: I0312 18:13:28.137529 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtg66\" (UniqueName: \"kubernetes.io/projected/f7998f02-c803-4adf-9198-7408513fe07e-kube-api-access-jtg66\") pod \"controller-manager-5ff9688f8-27hlj\" (UID: \"f7998f02-c803-4adf-9198-7408513fe07e\") " pod="openshift-controller-manager/controller-manager-5ff9688f8-27hlj"
Mar 12 18:13:28.416011 master-0 kubenswrapper[7337]: I0312 18:13:28.415708 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-client-ca\") pod \"route-controller-manager-5b45cc45cc-kjvgg\" (UID: \"41644344-9d23-4ed1-9044-4fe65cab6159\") " pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"
Mar 12 18:13:28.416196 master-0 kubenswrapper[7337]: I0312 18:13:28.416069 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5"
Mar 12 18:13:28.416196 master-0 kubenswrapper[7337]: E0312 18:13:28.415908 7337 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 12 18:13:28.416196 master-0 kubenswrapper[7337]: I0312 18:13:28.416138 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7"
Mar 12 18:13:28.416196 master-0 kubenswrapper[7337]: E0312 18:13:28.416195 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-client-ca podName:41644344-9d23-4ed1-9044-4fe65cab6159 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:30.416164601 +0000 UTC m=+10.884765568 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-client-ca") pod "route-controller-manager-5b45cc45cc-kjvgg" (UID: "41644344-9d23-4ed1-9044-4fe65cab6159") : configmap "client-ca" not found
Mar 12 18:13:28.416336 master-0 kubenswrapper[7337]: E0312 18:13:28.416219 7337 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 12 18:13:28.416336 master-0 kubenswrapper[7337]: E0312 18:13:28.416232 7337 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 12 18:13:28.416336 master-0 kubenswrapper[7337]: E0312 18:13:28.416246 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert podName:47850839-bb4b-41e9-ac31-f1cabbb4926d nodeName:}" failed. No retries permitted until 2026-03-12 18:13:36.416236873 +0000 UTC m=+16.884837810 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert") pod "catalog-operator-7d9c49f57b-pslh7" (UID: "47850839-bb4b-41e9-ac31-f1cabbb4926d") : secret "catalog-operator-serving-cert" not found
Mar 12 18:13:28.416421 master-0 kubenswrapper[7337]: I0312 18:13:28.416337 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert\") pod \"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r"
Mar 12 18:13:28.416421 master-0 kubenswrapper[7337]: I0312 18:13:28.416387 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s"
Mar 12 18:13:28.416421 master-0 kubenswrapper[7337]: I0312 18:13:28.416417 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41644344-9d23-4ed1-9044-4fe65cab6159-serving-cert\") pod \"route-controller-manager-5b45cc45cc-kjvgg\" (UID: \"41644344-9d23-4ed1-9044-4fe65cab6159\") " pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"
Mar 12 18:13:28.416503 master-0 kubenswrapper[7337]: I0312 18:13:28.416488 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c"
Mar 12 18:13:28.416554 master-0 kubenswrapper[7337]: I0312 18:13:28.416507 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l"
Mar 12 18:13:28.416631 master-0 kubenswrapper[7337]: E0312 18:13:28.416607 7337 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 12 18:13:28.416670 master-0 kubenswrapper[7337]: E0312 18:13:28.416643 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls podName:e94d098b-fbcc-4e85-b8ad-42f3a21c822c nodeName:}" failed. No retries permitted until 2026-03-12 18:13:36.416628583 +0000 UTC m=+16.885229520 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-fz79c" (UID: "e94d098b-fbcc-4e85-b8ad-42f3a21c822c") : secret "cluster-monitoring-operator-tls" not found
Mar 12 18:13:28.416670 master-0 kubenswrapper[7337]: E0312 18:13:28.416642 7337 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 12 18:13:28.416670 master-0 kubenswrapper[7337]: E0312 18:13:28.416670 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert podName:51eb717b-d11f-4bc3-8df6-deb51d5889f3 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:36.416663294 +0000 UTC m=+16.885264231 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-kwv7s" (UID: "51eb717b-d11f-4bc3-8df6-deb51d5889f3") : secret "package-server-manager-serving-cert" not found
Mar 12 18:13:28.416752 master-0 kubenswrapper[7337]: E0312 18:13:28.416687 7337 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 12 18:13:28.416752 master-0 kubenswrapper[7337]: E0312 18:13:28.416708 7337 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 12 18:13:28.416752 master-0 kubenswrapper[7337]: E0312 18:13:28.416726 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41644344-9d23-4ed1-9044-4fe65cab6159-serving-cert podName:41644344-9d23-4ed1-9044-4fe65cab6159 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:30.416714335 +0000 UTC m=+10.885315302 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41644344-9d23-4ed1-9044-4fe65cab6159-serving-cert") pod "route-controller-manager-5b45cc45cc-kjvgg" (UID: "41644344-9d23-4ed1-9044-4fe65cab6159") : secret "serving-cert" not found
Mar 12 18:13:28.416752 master-0 kubenswrapper[7337]: E0312 18:13:28.416745 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls podName:d94dc349-c5cb-4f12-8e48-867030af4981 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:36.416736726 +0000 UTC m=+16.885337683 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls") pod "ingress-operator-677db989d6-4527l" (UID: "d94dc349-c5cb-4f12-8e48-867030af4981") : secret "metrics-tls" not found
Mar 12 18:13:28.416752 master-0 kubenswrapper[7337]: E0312 18:13:28.416750 7337 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 12 18:13:28.416898 master-0 kubenswrapper[7337]: E0312 18:13:28.416764 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics podName:4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:36.416753446 +0000 UTC m=+16.885354403 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-clkx5" (UID: "4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64") : secret "marketplace-operator-metrics" not found
Mar 12 18:13:28.416898 master-0 kubenswrapper[7337]: E0312 18:13:28.416790 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert podName:d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:36.416773486 +0000 UTC m=+16.885374453 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert") pod "olm-operator-d64cfc9db-npt4r" (UID: "d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27") : secret "olm-operator-serving-cert" not found
Mar 12 18:13:28.503077 master-0 kubenswrapper[7337]: I0312 18:13:28.503016 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5ff9688f8-27hlj"]
Mar 12 18:13:28.503328 master-0 kubenswrapper[7337]: E0312 18:13:28.503287 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-5ff9688f8-27hlj" podUID="f7998f02-c803-4adf-9198-7408513fe07e"
Mar 12 18:13:28.523431 master-0 kubenswrapper[7337]: I0312 18:13:28.517902 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:13:28.523431 master-0 kubenswrapper[7337]: I0312 18:13:28.517980 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp"
Mar 12 18:13:28.523431 master-0 kubenswrapper[7337]: E0312 18:13:28.518169 7337 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 12 18:13:28.523431 master-0 kubenswrapper[7337]: E0312 18:13:28.518257 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs podName:5b48f8fd-2efe-44e3-a6d7-c71358b83a2f nodeName:}" failed. No retries permitted until 2026-03-12 18:13:36.51823357 +0000 UTC m=+16.986834617 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs") pod "network-metrics-daemon-z4sc9" (UID: "5b48f8fd-2efe-44e3-a6d7-c71358b83a2f") : secret "metrics-daemon-secret" not found
Mar 12 18:13:28.523431 master-0 kubenswrapper[7337]: I0312 18:13:28.518670 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx"
Mar 12 18:13:28.523431 master-0 kubenswrapper[7337]: I0312 18:13:28.518732 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq"
Mar 12 18:13:28.523431 master-0 kubenswrapper[7337]: I0312 18:13:28.519017 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: \"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k"
Mar 12 18:13:28.523431 master-0 kubenswrapper[7337]: I0312 18:13:28.519080 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs\") pod \"multus-admission-controller-8d675b596-kcpg5\" (UID: \"875bdfaa-b0a4-4412-a477-c962844e7057\") " pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5"
Mar 12 18:13:28.523431 master-0 kubenswrapper[7337]: I0312 18:13:28.519107 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp"
Mar 12 18:13:28.523431 master-0 kubenswrapper[7337]: E0312 18:13:28.519257 7337 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 12 18:13:28.523431 master-0 kubenswrapper[7337]: E0312 18:13:28.519289 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls podName:306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed nodeName:}" failed. No retries permitted until 2026-03-12 18:13:36.519278716 +0000 UTC m=+16.987879663 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-lqpbp" (UID: "306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed") : secret "node-tuning-operator-tls" not found
Mar 12 18:13:28.523431 master-0 kubenswrapper[7337]: E0312 18:13:28.519335 7337 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 12 18:13:28.523431 master-0 kubenswrapper[7337]: E0312 18:13:28.519358 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert podName:306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed nodeName:}" failed. No retries permitted until 2026-03-12 18:13:36.519350088 +0000 UTC m=+16.987951035 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-lqpbp" (UID: "306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed") : secret "performance-addon-operator-webhook-cert" not found
Mar 12 18:13:28.523431 master-0 kubenswrapper[7337]: E0312 18:13:28.519401 7337 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 12 18:13:28.523431 master-0 kubenswrapper[7337]: E0312 18:13:28.519425 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert podName:00755a4e-124c-4a51-b1c5-7c505b3637a8 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:36.51941775 +0000 UTC m=+16.988018817 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert") pod "cluster-version-operator-745944c6b7-cxwmx" (UID: "00755a4e-124c-4a51-b1c5-7c505b3637a8") : secret "cluster-version-operator-serving-cert" not found
Mar 12 18:13:28.523431 master-0 kubenswrapper[7337]: E0312 18:13:28.519470 7337 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 12 18:13:28.523431 master-0 kubenswrapper[7337]: E0312 18:13:28.519493 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls podName:e22c7035-4b7a-48cb-9abb-db277b387842 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:36.519485362 +0000 UTC m=+16.988086309 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-l4krq" (UID: "e22c7035-4b7a-48cb-9abb-db277b387842") : secret "image-registry-operator-tls" not found
Mar 12 18:13:28.523431 master-0 kubenswrapper[7337]: E0312 18:13:28.519555 7337 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 12 18:13:28.523431 master-0 kubenswrapper[7337]: E0312 18:13:28.519577 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls podName:8ad05507-e242-4ff8-ae80-c16ff9ee68e2 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:36.519570274 +0000 UTC m=+16.988171321 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls") pod "dns-operator-589895fbb7-jqj5k" (UID: "8ad05507-e242-4ff8-ae80-c16ff9ee68e2") : secret "metrics-tls" not found
Mar 12 18:13:28.523431 master-0 kubenswrapper[7337]: E0312 18:13:28.519737 7337 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 12 18:13:28.523431 master-0 kubenswrapper[7337]: E0312 18:13:28.519831 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs podName:875bdfaa-b0a4-4412-a477-c962844e7057 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:36.51980861 +0000 UTC m=+16.988409597 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs") pod "multus-admission-controller-8d675b596-kcpg5" (UID: "875bdfaa-b0a4-4412-a477-c962844e7057") : secret "multus-admission-controller-secret" not found
Mar 12 18:13:28.620187 master-0 kubenswrapper[7337]: I0312 18:13:28.620111 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7998f02-c803-4adf-9198-7408513fe07e-serving-cert\") pod \"controller-manager-5ff9688f8-27hlj\" (UID: \"f7998f02-c803-4adf-9198-7408513fe07e\") " pod="openshift-controller-manager/controller-manager-5ff9688f8-27hlj"
Mar 12 18:13:28.620187 master-0 kubenswrapper[7337]: I0312 18:13:28.620166 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-client-ca\") pod \"controller-manager-5ff9688f8-27hlj\" (UID: \"f7998f02-c803-4adf-9198-7408513fe07e\") " pod="openshift-controller-manager/controller-manager-5ff9688f8-27hlj"
Mar 12 18:13:28.620398 master-0 kubenswrapper[7337]: E0312 18:13:28.620278 7337 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 12 18:13:28.620398 master-0 kubenswrapper[7337]: E0312 18:13:28.620325 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-client-ca podName:f7998f02-c803-4adf-9198-7408513fe07e nodeName:}" failed. No retries permitted until 2026-03-12 18:13:29.620311879 +0000 UTC m=+10.088912826 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-client-ca") pod "controller-manager-5ff9688f8-27hlj" (UID: "f7998f02-c803-4adf-9198-7408513fe07e") : configmap "client-ca" not found
Mar 12 18:13:28.620501 master-0 kubenswrapper[7337]: E0312 18:13:28.620454 7337 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 12 18:13:28.620592 master-0 kubenswrapper[7337]: E0312 18:13:28.620570 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7998f02-c803-4adf-9198-7408513fe07e-serving-cert podName:f7998f02-c803-4adf-9198-7408513fe07e nodeName:}" failed. No retries permitted until 2026-03-12 18:13:29.620550975 +0000 UTC m=+10.089151932 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f7998f02-c803-4adf-9198-7408513fe07e-serving-cert") pod "controller-manager-5ff9688f8-27hlj" (UID: "f7998f02-c803-4adf-9198-7408513fe07e") : secret "serving-cert" not found
Mar 12 18:13:29.027920 master-0 kubenswrapper[7337]: I0312 18:13:29.027884 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5ff9688f8-27hlj"
Mar 12 18:13:29.033147 master-0 kubenswrapper[7337]: I0312 18:13:29.033123 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5ff9688f8-27hlj"
Mar 12 18:13:29.125134 master-0 kubenswrapper[7337]: I0312 18:13:29.125092 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-config\") pod \"f7998f02-c803-4adf-9198-7408513fe07e\" (UID: \"f7998f02-c803-4adf-9198-7408513fe07e\") "
Mar 12 18:13:29.125304 master-0 kubenswrapper[7337]: I0312 18:13:29.125155 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-proxy-ca-bundles\") pod \"f7998f02-c803-4adf-9198-7408513fe07e\" (UID: \"f7998f02-c803-4adf-9198-7408513fe07e\") "
Mar 12 18:13:29.125304 master-0 kubenswrapper[7337]: I0312 18:13:29.125183 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtg66\" (UniqueName: \"kubernetes.io/projected/f7998f02-c803-4adf-9198-7408513fe07e-kube-api-access-jtg66\") pod \"f7998f02-c803-4adf-9198-7408513fe07e\" (UID: \"f7998f02-c803-4adf-9198-7408513fe07e\") "
Mar 12 18:13:29.126342 master-0 kubenswrapper[7337]: I0312 18:13:29.126320 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-config" (OuterVolumeSpecName: "config") pod "f7998f02-c803-4adf-9198-7408513fe07e" (UID: "f7998f02-c803-4adf-9198-7408513fe07e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:13:29.126601 master-0 kubenswrapper[7337]: I0312 18:13:29.126584 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f7998f02-c803-4adf-9198-7408513fe07e" (UID: "f7998f02-c803-4adf-9198-7408513fe07e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:13:29.227099 master-0 kubenswrapper[7337]: I0312 18:13:29.227053 7337 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 12 18:13:29.227099 master-0 kubenswrapper[7337]: I0312 18:13:29.227087 7337 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-config\") on node \"master-0\" DevicePath \"\""
Mar 12 18:13:29.766988 master-0 kubenswrapper[7337]: I0312 18:13:29.766441 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7998f02-c803-4adf-9198-7408513fe07e-kube-api-access-jtg66" (OuterVolumeSpecName: "kube-api-access-jtg66") pod "f7998f02-c803-4adf-9198-7408513fe07e" (UID: "f7998f02-c803-4adf-9198-7408513fe07e"). InnerVolumeSpecName "kube-api-access-jtg66". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:13:29.783156 master-0 kubenswrapper[7337]: I0312 18:13:29.780997 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d11f3b8-c47f-46a1-9590-5b27f36ae0d5" path="/var/lib/kubelet/pods/4d11f3b8-c47f-46a1-9590-5b27f36ae0d5/volumes"
Mar 12 18:13:29.857670 master-0 kubenswrapper[7337]: I0312 18:13:29.852687 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7998f02-c803-4adf-9198-7408513fe07e-serving-cert\") pod \"controller-manager-5ff9688f8-27hlj\" (UID: \"f7998f02-c803-4adf-9198-7408513fe07e\") " pod="openshift-controller-manager/controller-manager-5ff9688f8-27hlj"
Mar 12 18:13:29.857670 master-0 kubenswrapper[7337]: I0312 18:13:29.852728 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-client-ca\") pod \"controller-manager-5ff9688f8-27hlj\" (UID: \"f7998f02-c803-4adf-9198-7408513fe07e\") " pod="openshift-controller-manager/controller-manager-5ff9688f8-27hlj"
Mar 12 18:13:29.857670 master-0 kubenswrapper[7337]: I0312 18:13:29.852798 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtg66\" (UniqueName: \"kubernetes.io/projected/f7998f02-c803-4adf-9198-7408513fe07e-kube-api-access-jtg66\") on node \"master-0\" DevicePath \"\""
Mar 12 18:13:29.857670 master-0 kubenswrapper[7337]: E0312 18:13:29.852859 7337 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 12 18:13:29.857670 master-0 kubenswrapper[7337]: E0312 18:13:29.852901 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-client-ca podName:f7998f02-c803-4adf-9198-7408513fe07e nodeName:}" failed. No retries permitted until 2026-03-12 18:13:31.85288818 +0000 UTC m=+12.321489127 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-client-ca") pod "controller-manager-5ff9688f8-27hlj" (UID: "f7998f02-c803-4adf-9198-7408513fe07e") : configmap "client-ca" not found
Mar 12 18:13:29.857670 master-0 kubenswrapper[7337]: E0312 18:13:29.853725 7337 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 12 18:13:29.857670 master-0 kubenswrapper[7337]: E0312 18:13:29.853758 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7998f02-c803-4adf-9198-7408513fe07e-serving-cert podName:f7998f02-c803-4adf-9198-7408513fe07e nodeName:}" failed. No retries permitted until 2026-03-12 18:13:31.853748092 +0000 UTC m=+12.322349039 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f7998f02-c803-4adf-9198-7408513fe07e-serving-cert") pod "controller-manager-5ff9688f8-27hlj" (UID: "f7998f02-c803-4adf-9198-7408513fe07e") : secret "serving-cert" not found
Mar 12 18:13:30.037903 master-0 kubenswrapper[7337]: I0312 18:13:30.037775 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5ff9688f8-27hlj"
Mar 12 18:13:30.242538 master-0 kubenswrapper[7337]: I0312 18:13:30.242169 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj"]
Mar 12 18:13:30.242710 master-0 kubenswrapper[7337]: I0312 18:13:30.242618 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj"
Mar 12 18:13:30.248956 master-0 kubenswrapper[7337]: I0312 18:13:30.247055 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5ff9688f8-27hlj"]
Mar 12 18:13:30.248956 master-0 kubenswrapper[7337]: I0312 18:13:30.247355 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 12 18:13:30.248956 master-0 kubenswrapper[7337]: I0312 18:13:30.247501 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 12 18:13:30.248956 master-0 kubenswrapper[7337]: I0312 18:13:30.247964 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 12 18:13:30.248956 master-0 kubenswrapper[7337]: I0312 18:13:30.247971 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 12 18:13:30.248956 master-0 kubenswrapper[7337]: I0312 18:13:30.248165 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 12 18:13:30.258138 master-0 kubenswrapper[7337]: I0312 18:13:30.252895 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 12 18:13:30.258138 master-0 kubenswrapper[7337]: I0312 18:13:30.254014 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj"]
Mar 12 18:13:30.258138 master-0 kubenswrapper[7337]: I0312 18:13:30.258088 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5ff9688f8-27hlj"]
Mar 12 18:13:30.368890 master-0 kubenswrapper[7337]: I0312 18:13:30.368832 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j"
Mar 12 18:13:30.373654 master-0 kubenswrapper[7337]: I0312 18:13:30.373185 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-proxy-ca-bundles\") pod \"controller-manager-5cc4c76f78-8gkkj\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") " pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj"
Mar 12 18:13:30.373654 master-0 kubenswrapper[7337]: I0312 18:13:30.373319 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-client-ca\") pod \"controller-manager-5cc4c76f78-8gkkj\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") " pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj"
Mar 12 18:13:30.373654 master-0 kubenswrapper[7337]: I0312 18:13:30.373347 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-978rf\" (UniqueName: \"kubernetes.io/projected/910180ff-efca-4c61-b250-6b0ba7a76089-kube-api-access-978rf\") pod \"controller-manager-5cc4c76f78-8gkkj\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") " pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj"
Mar 12 18:13:30.373654 master-0 kubenswrapper[7337]: I0312 18:13:30.373460 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-config\") pod \"controller-manager-5cc4c76f78-8gkkj\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") " pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj"
Mar 12 18:13:30.373654 master-0 kubenswrapper[7337]: I0312 18:13:30.373483 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/910180ff-efca-4c61-b250-6b0ba7a76089-serving-cert\") pod \"controller-manager-5cc4c76f78-8gkkj\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") " pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj"
Mar 12 18:13:30.373654 master-0 kubenswrapper[7337]: I0312 18:13:30.373578 7337 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7998f02-c803-4adf-9198-7408513fe07e-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 12 18:13:30.373654 master-0 kubenswrapper[7337]: I0312 18:13:30.373594 7337 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7998f02-c803-4adf-9198-7408513fe07e-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 18:13:30.474224 master-0 kubenswrapper[7337]: I0312 18:13:30.474158 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-client-ca\") pod \"controller-manager-5cc4c76f78-8gkkj\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") " pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj"
Mar 12 18:13:30.474224 master-0 kubenswrapper[7337]: I0312 18:13:30.474204 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-978rf\" (UniqueName: \"kubernetes.io/projected/910180ff-efca-4c61-b250-6b0ba7a76089-kube-api-access-978rf\") pod \"controller-manager-5cc4c76f78-8gkkj\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") " pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj"
Mar 12 18:13:30.474479 master-0 kubenswrapper[7337]: I0312 18:13:30.474266 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41644344-9d23-4ed1-9044-4fe65cab6159-serving-cert\") pod \"route-controller-manager-5b45cc45cc-kjvgg\" (UID: \"41644344-9d23-4ed1-9044-4fe65cab6159\") " pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"
Mar 12 18:13:30.474479 master-0 kubenswrapper[7337]: I0312 18:13:30.474294 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-config\") pod \"controller-manager-5cc4c76f78-8gkkj\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") " pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj"
Mar 12 18:13:30.474479 master-0 kubenswrapper[7337]: I0312 18:13:30.474310 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/910180ff-efca-4c61-b250-6b0ba7a76089-serving-cert\") pod \"controller-manager-5cc4c76f78-8gkkj\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") " pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj"
Mar 12 18:13:30.474479 master-0 kubenswrapper[7337]: I0312 18:13:30.474368 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-client-ca\") pod \"route-controller-manager-5b45cc45cc-kjvgg\" (UID: \"41644344-9d23-4ed1-9044-4fe65cab6159\") " pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"
Mar 12 18:13:30.474479 master-0 kubenswrapper[7337]: I0312 18:13:30.474382 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-proxy-ca-bundles\") pod \"controller-manager-5cc4c76f78-8gkkj\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") " pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj"
Mar 12 18:13:30.474805 master-0 kubenswrapper[7337]: E0312 18:13:30.474703 7337 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 12 18:13:30.474846 master-0 kubenswrapper[7337]: E0312 18:13:30.474827 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41644344-9d23-4ed1-9044-4fe65cab6159-serving-cert podName:41644344-9d23-4ed1-9044-4fe65cab6159 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:34.474753004 +0000 UTC m=+14.943353951 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41644344-9d23-4ed1-9044-4fe65cab6159-serving-cert") pod "route-controller-manager-5b45cc45cc-kjvgg" (UID: "41644344-9d23-4ed1-9044-4fe65cab6159") : secret "serving-cert" not found
Mar 12 18:13:30.475975 master-0 kubenswrapper[7337]: E0312 18:13:30.474975 7337 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 12 18:13:30.475975 master-0 kubenswrapper[7337]: E0312 18:13:30.475086 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/910180ff-efca-4c61-b250-6b0ba7a76089-serving-cert podName:910180ff-efca-4c61-b250-6b0ba7a76089 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:30.975062312 +0000 UTC m=+11.443663259 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/910180ff-efca-4c61-b250-6b0ba7a76089-serving-cert") pod "controller-manager-5cc4c76f78-8gkkj" (UID: "910180ff-efca-4c61-b250-6b0ba7a76089") : secret "serving-cert" not found Mar 12 18:13:30.475975 master-0 kubenswrapper[7337]: E0312 18:13:30.475725 7337 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 12 18:13:30.475975 master-0 kubenswrapper[7337]: E0312 18:13:30.475828 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-client-ca podName:910180ff-efca-4c61-b250-6b0ba7a76089 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:30.975802491 +0000 UTC m=+11.444403478 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-client-ca") pod "controller-manager-5cc4c76f78-8gkkj" (UID: "910180ff-efca-4c61-b250-6b0ba7a76089") : configmap "client-ca" not found Mar 12 18:13:30.476227 master-0 kubenswrapper[7337]: E0312 18:13:30.476190 7337 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 12 18:13:30.478933 master-0 kubenswrapper[7337]: I0312 18:13:30.478794 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-proxy-ca-bundles\") pod \"controller-manager-5cc4c76f78-8gkkj\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") " pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj" Mar 12 18:13:30.478933 master-0 kubenswrapper[7337]: I0312 18:13:30.478840 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-config\") pod 
\"controller-manager-5cc4c76f78-8gkkj\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") " pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj" Mar 12 18:13:30.478933 master-0 kubenswrapper[7337]: E0312 18:13:30.478848 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-client-ca podName:41644344-9d23-4ed1-9044-4fe65cab6159 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:34.476254522 +0000 UTC m=+14.944855469 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-client-ca") pod "route-controller-manager-5b45cc45cc-kjvgg" (UID: "41644344-9d23-4ed1-9044-4fe65cab6159") : configmap "client-ca" not found Mar 12 18:13:30.516475 master-0 kubenswrapper[7337]: I0312 18:13:30.516430 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-978rf\" (UniqueName: \"kubernetes.io/projected/910180ff-efca-4c61-b250-6b0ba7a76089-kube-api-access-978rf\") pod \"controller-manager-5cc4c76f78-8gkkj\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") " pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj" Mar 12 18:13:30.750744 master-0 kubenswrapper[7337]: I0312 18:13:30.750639 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:13:30.753842 master-0 kubenswrapper[7337]: I0312 18:13:30.753822 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:13:31.012024 master-0 kubenswrapper[7337]: I0312 18:13:31.011898 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/910180ff-efca-4c61-b250-6b0ba7a76089-serving-cert\") pod \"controller-manager-5cc4c76f78-8gkkj\" (UID: 
\"910180ff-efca-4c61-b250-6b0ba7a76089\") " pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj" Mar 12 18:13:31.012284 master-0 kubenswrapper[7337]: E0312 18:13:31.012088 7337 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 12 18:13:31.012284 master-0 kubenswrapper[7337]: E0312 18:13:31.012178 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/910180ff-efca-4c61-b250-6b0ba7a76089-serving-cert podName:910180ff-efca-4c61-b250-6b0ba7a76089 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:32.012161797 +0000 UTC m=+12.480762744 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/910180ff-efca-4c61-b250-6b0ba7a76089-serving-cert") pod "controller-manager-5cc4c76f78-8gkkj" (UID: "910180ff-efca-4c61-b250-6b0ba7a76089") : secret "serving-cert" not found Mar 12 18:13:31.012284 master-0 kubenswrapper[7337]: I0312 18:13:31.012235 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-client-ca\") pod \"controller-manager-5cc4c76f78-8gkkj\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") " pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj" Mar 12 18:13:31.012382 master-0 kubenswrapper[7337]: E0312 18:13:31.012357 7337 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 12 18:13:31.012416 master-0 kubenswrapper[7337]: E0312 18:13:31.012405 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-client-ca podName:910180ff-efca-4c61-b250-6b0ba7a76089 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:32.012395293 +0000 UTC m=+12.480996430 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-client-ca") pod "controller-manager-5cc4c76f78-8gkkj" (UID: "910180ff-efca-4c61-b250-6b0ba7a76089") : configmap "client-ca" not found Mar 12 18:13:32.022109 master-0 kubenswrapper[7337]: I0312 18:13:32.022032 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-client-ca\") pod \"controller-manager-5cc4c76f78-8gkkj\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") " pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj" Mar 12 18:13:32.022731 master-0 kubenswrapper[7337]: I0312 18:13:32.022171 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/910180ff-efca-4c61-b250-6b0ba7a76089-serving-cert\") pod \"controller-manager-5cc4c76f78-8gkkj\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") " pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj" Mar 12 18:13:32.022731 master-0 kubenswrapper[7337]: E0312 18:13:32.022341 7337 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 12 18:13:32.022731 master-0 kubenswrapper[7337]: E0312 18:13:32.022397 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/910180ff-efca-4c61-b250-6b0ba7a76089-serving-cert podName:910180ff-efca-4c61-b250-6b0ba7a76089 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:34.022380729 +0000 UTC m=+14.490981676 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/910180ff-efca-4c61-b250-6b0ba7a76089-serving-cert") pod "controller-manager-5cc4c76f78-8gkkj" (UID: "910180ff-efca-4c61-b250-6b0ba7a76089") : secret "serving-cert" not found Mar 12 18:13:32.022866 master-0 kubenswrapper[7337]: E0312 18:13:32.022839 7337 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 12 18:13:32.022906 master-0 kubenswrapper[7337]: E0312 18:13:32.022872 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-client-ca podName:910180ff-efca-4c61-b250-6b0ba7a76089 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:34.022863261 +0000 UTC m=+14.491464218 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-client-ca") pod "controller-manager-5cc4c76f78-8gkkj" (UID: "910180ff-efca-4c61-b250-6b0ba7a76089") : configmap "client-ca" not found Mar 12 18:13:32.189601 master-0 kubenswrapper[7337]: I0312 18:13:32.189465 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7998f02-c803-4adf-9198-7408513fe07e" path="/var/lib/kubelet/pods/f7998f02-c803-4adf-9198-7408513fe07e/volumes" Mar 12 18:13:33.193066 master-0 kubenswrapper[7337]: I0312 18:13:33.192735 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-w72wh" event={"ID":"8c241720-7815-40fd-8d4a-1685a43b5893","Type":"ContainerStarted","Data":"b117b33f178ede5050f80ca0f057bb2d96e67c8bb68c877eff5dbb503fbb77cb"} Mar 12 18:13:33.193066 master-0 kubenswrapper[7337]: I0312 18:13:33.193054 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-w72wh" 
event={"ID":"8c241720-7815-40fd-8d4a-1685a43b5893","Type":"ContainerStarted","Data":"2c6a1ea9c6d5a10a2028d55394f436ce0c221ae1f4991be43bc1da6dd03da9a3"} Mar 12 18:13:33.216688 master-0 kubenswrapper[7337]: I0312 18:13:33.215284 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-w72wh" podStartSLOduration=3.180339675 podStartE2EDuration="8.21526701s" podCreationTimestamp="2026-03-12 18:13:25 +0000 UTC" firstStartedPulling="2026-03-12 18:13:27.707684622 +0000 UTC m=+8.176285579" lastFinishedPulling="2026-03-12 18:13:32.742611967 +0000 UTC m=+13.211212914" observedRunningTime="2026-03-12 18:13:33.214599713 +0000 UTC m=+13.683200680" watchObservedRunningTime="2026-03-12 18:13:33.21526701 +0000 UTC m=+13.683867967" Mar 12 18:13:33.351537 master-0 kubenswrapper[7337]: I0312 18:13:33.351480 7337 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tjp2j container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 12 18:13:33.351719 master-0 kubenswrapper[7337]: I0312 18:13:33.351563 7337 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" podUID="37cd9c0a-697e-4e67-932b-b331ff77c8c0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 12 18:13:34.050100 master-0 kubenswrapper[7337]: I0312 18:13:34.050040 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/910180ff-efca-4c61-b250-6b0ba7a76089-serving-cert\") pod \"controller-manager-5cc4c76f78-8gkkj\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") " 
pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj" Mar 12 18:13:34.050372 master-0 kubenswrapper[7337]: I0312 18:13:34.050337 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-client-ca\") pod \"controller-manager-5cc4c76f78-8gkkj\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") " pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj" Mar 12 18:13:34.050551 master-0 kubenswrapper[7337]: E0312 18:13:34.050506 7337 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 12 18:13:34.050621 master-0 kubenswrapper[7337]: E0312 18:13:34.050609 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-client-ca podName:910180ff-efca-4c61-b250-6b0ba7a76089 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:38.050592168 +0000 UTC m=+18.519193115 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-client-ca") pod "controller-manager-5cc4c76f78-8gkkj" (UID: "910180ff-efca-4c61-b250-6b0ba7a76089") : configmap "client-ca" not found Mar 12 18:13:34.054970 master-0 kubenswrapper[7337]: I0312 18:13:34.054934 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/910180ff-efca-4c61-b250-6b0ba7a76089-serving-cert\") pod \"controller-manager-5cc4c76f78-8gkkj\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") " pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj" Mar 12 18:13:34.199477 master-0 kubenswrapper[7337]: I0312 18:13:34.199305 7337 generic.go:334] "Generic (PLEG): container finished" podID="37cd9c0a-697e-4e67-932b-b331ff77c8c0" containerID="3100d6853d6653605e1a09e2cf985a9ecb63a1450916f3d98d5854fad367310a" exitCode=0 Mar 12 18:13:34.199477 master-0 kubenswrapper[7337]: I0312 18:13:34.199350 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" event={"ID":"37cd9c0a-697e-4e67-932b-b331ff77c8c0","Type":"ContainerDied","Data":"3100d6853d6653605e1a09e2cf985a9ecb63a1450916f3d98d5854fad367310a"} Mar 12 18:13:34.200923 master-0 kubenswrapper[7337]: I0312 18:13:34.199827 7337 scope.go:117] "RemoveContainer" containerID="3100d6853d6653605e1a09e2cf985a9ecb63a1450916f3d98d5854fad367310a" Mar 12 18:13:34.202119 master-0 kubenswrapper[7337]: I0312 18:13:34.202076 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" event={"ID":"f3a2cda2-b70f-4128-a1be-48503f5aad6d","Type":"ContainerStarted","Data":"3a34f5e0d15dd4e7c330e2c8919e65deba96f9d77b56fa794a4877221990e20a"} Mar 12 18:13:34.535277 master-0 kubenswrapper[7337]: I0312 18:13:34.535132 7337 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:13:34.556529 master-0 kubenswrapper[7337]: I0312 18:13:34.556431 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-client-ca\") pod \"route-controller-manager-5b45cc45cc-kjvgg\" (UID: \"41644344-9d23-4ed1-9044-4fe65cab6159\") " pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg" Mar 12 18:13:34.556768 master-0 kubenswrapper[7337]: E0312 18:13:34.556656 7337 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 12 18:13:34.556768 master-0 kubenswrapper[7337]: E0312 18:13:34.556728 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-client-ca podName:41644344-9d23-4ed1-9044-4fe65cab6159 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:42.556710715 +0000 UTC m=+23.025311672 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-client-ca") pod "route-controller-manager-5b45cc45cc-kjvgg" (UID: "41644344-9d23-4ed1-9044-4fe65cab6159") : configmap "client-ca" not found Mar 12 18:13:34.556911 master-0 kubenswrapper[7337]: I0312 18:13:34.556810 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41644344-9d23-4ed1-9044-4fe65cab6159-serving-cert\") pod \"route-controller-manager-5b45cc45cc-kjvgg\" (UID: \"41644344-9d23-4ed1-9044-4fe65cab6159\") " pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg" Mar 12 18:13:34.556967 master-0 kubenswrapper[7337]: E0312 18:13:34.556921 7337 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 12 18:13:34.556967 master-0 kubenswrapper[7337]: E0312 18:13:34.556964 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41644344-9d23-4ed1-9044-4fe65cab6159-serving-cert podName:41644344-9d23-4ed1-9044-4fe65cab6159 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:42.556953221 +0000 UTC m=+23.025554188 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41644344-9d23-4ed1-9044-4fe65cab6159-serving-cert") pod "route-controller-manager-5b45cc45cc-kjvgg" (UID: "41644344-9d23-4ed1-9044-4fe65cab6159") : secret "serving-cert" not found Mar 12 18:13:35.207156 master-0 kubenswrapper[7337]: I0312 18:13:35.207082 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" event={"ID":"37cd9c0a-697e-4e67-932b-b331ff77c8c0","Type":"ContainerStarted","Data":"37af6c94f0de0a2163a4ac4e6ab6085ad4d71da179fad764b86f087db1506c46"} Mar 12 18:13:36.210576 master-0 kubenswrapper[7337]: I0312 18:13:36.210480 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:13:36.484402 master-0 kubenswrapper[7337]: I0312 18:13:36.484245 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:13:36.484402 master-0 kubenswrapper[7337]: I0312 18:13:36.484355 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" Mar 12 18:13:36.484402 master-0 kubenswrapper[7337]: E0312 18:13:36.484386 7337 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 12 18:13:36.484692 master-0 
kubenswrapper[7337]: E0312 18:13:36.484480 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls podName:e94d098b-fbcc-4e85-b8ad-42f3a21c822c nodeName:}" failed. No retries permitted until 2026-03-12 18:13:52.484458607 +0000 UTC m=+32.953059624 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-fz79c" (UID: "e94d098b-fbcc-4e85-b8ad-42f3a21c822c") : secret "cluster-monitoring-operator-tls" not found Mar 12 18:13:36.484692 master-0 kubenswrapper[7337]: I0312 18:13:36.484554 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:13:36.484692 master-0 kubenswrapper[7337]: I0312 18:13:36.484585 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:13:36.484692 master-0 kubenswrapper[7337]: I0312 18:13:36.484621 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert\") pod \"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" 
Mar 12 18:13:36.484861 master-0 kubenswrapper[7337]: E0312 18:13:36.484790 7337 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 18:13:36.484929 master-0 kubenswrapper[7337]: E0312 18:13:36.484893 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics podName:4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:52.484866207 +0000 UTC m=+32.953467194 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-clkx5" (UID: "4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64") : secret "marketplace-operator-metrics" not found Mar 12 18:13:36.484981 master-0 kubenswrapper[7337]: I0312 18:13:36.484954 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:13:36.485200 master-0 kubenswrapper[7337]: E0312 18:13:36.485171 7337 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 18:13:36.485250 master-0 kubenswrapper[7337]: E0312 18:13:36.485238 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert podName:51eb717b-d11f-4bc3-8df6-deb51d5889f3 nodeName:}" failed. 
No retries permitted until 2026-03-12 18:13:52.485218946 +0000 UTC m=+32.953819963 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-kwv7s" (UID: "51eb717b-d11f-4bc3-8df6-deb51d5889f3") : secret "package-server-manager-serving-cert" not found Mar 12 18:13:36.485296 master-0 kubenswrapper[7337]: E0312 18:13:36.485261 7337 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 18:13:36.485331 master-0 kubenswrapper[7337]: E0312 18:13:36.485297 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert podName:47850839-bb4b-41e9-ac31-f1cabbb4926d nodeName:}" failed. No retries permitted until 2026-03-12 18:13:52.485286118 +0000 UTC m=+32.953887135 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert") pod "catalog-operator-7d9c49f57b-pslh7" (UID: "47850839-bb4b-41e9-ac31-f1cabbb4926d") : secret "catalog-operator-serving-cert" not found Mar 12 18:13:36.485405 master-0 kubenswrapper[7337]: E0312 18:13:36.485183 7337 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 18:13:36.485558 master-0 kubenswrapper[7337]: E0312 18:13:36.485544 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert podName:d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:52.485500193 +0000 UTC m=+32.954101210 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert") pod "olm-operator-d64cfc9db-npt4r" (UID: "d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27") : secret "olm-operator-serving-cert" not found
Mar 12 18:13:36.490927 master-0 kubenswrapper[7337]: I0312 18:13:36.490875 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l"
Mar 12 18:13:36.572853 master-0 kubenswrapper[7337]: I0312 18:13:36.572800 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l"
Mar 12 18:13:36.585921 master-0 kubenswrapper[7337]: I0312 18:13:36.585870 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq"
Mar 12 18:13:36.586011 master-0 kubenswrapper[7337]: I0312 18:13:36.585943 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: \"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k"
Mar 12 18:13:36.586631 master-0 kubenswrapper[7337]: E0312 18:13:36.586502 7337 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 12 18:13:36.586688 master-0 kubenswrapper[7337]: E0312 18:13:36.586642 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs podName:875bdfaa-b0a4-4412-a477-c962844e7057 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:52.586615967 +0000 UTC m=+33.055216984 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs") pod "multus-admission-controller-8d675b596-kcpg5" (UID: "875bdfaa-b0a4-4412-a477-c962844e7057") : secret "multus-admission-controller-secret" not found
Mar 12 18:13:36.586688 master-0 kubenswrapper[7337]: I0312 18:13:36.586504 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs\") pod \"multus-admission-controller-8d675b596-kcpg5\" (UID: \"875bdfaa-b0a4-4412-a477-c962844e7057\") " pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5"
Mar 12 18:13:36.586774 master-0 kubenswrapper[7337]: I0312 18:13:36.586716 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp"
Mar 12 18:13:36.586825 master-0 kubenswrapper[7337]: I0312 18:13:36.586812 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9"
Mar 12 18:13:36.586965 master-0 kubenswrapper[7337]: I0312 18:13:36.586871 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp"
Mar 12 18:13:36.587015 master-0 kubenswrapper[7337]: I0312 18:13:36.586968 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx"
Mar 12 18:13:36.587321 master-0 kubenswrapper[7337]: E0312 18:13:36.587302 7337 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 12 18:13:36.587444 master-0 kubenswrapper[7337]: E0312 18:13:36.587425 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs podName:5b48f8fd-2efe-44e3-a6d7-c71358b83a2f nodeName:}" failed. No retries permitted until 2026-03-12 18:13:52.587405457 +0000 UTC m=+33.056006404 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs") pod "network-metrics-daemon-z4sc9" (UID: "5b48f8fd-2efe-44e3-a6d7-c71358b83a2f") : secret "metrics-daemon-secret" not found
Mar 12 18:13:36.588717 master-0 kubenswrapper[7337]: I0312 18:13:36.588684 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq"
Mar 12 18:13:36.590145 master-0 kubenswrapper[7337]: I0312 18:13:36.590087 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert\") pod \"cluster-version-operator-745944c6b7-cxwmx\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx"
Mar 12 18:13:36.590201 master-0 kubenswrapper[7337]: I0312 18:13:36.590189 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: \"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k"
Mar 12 18:13:36.590748 master-0 kubenswrapper[7337]: I0312 18:13:36.590704 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp"
Mar 12 18:13:36.590955 master-0 kubenswrapper[7337]: I0312 18:13:36.590913 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp"
Mar 12 18:13:36.876465 master-0 kubenswrapper[7337]: I0312 18:13:36.876214 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp"
Mar 12 18:13:36.880773 master-0 kubenswrapper[7337]: I0312 18:13:36.880745 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq"
Mar 12 18:13:36.880773 master-0 kubenswrapper[7337]: I0312 18:13:36.880748 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k"
Mar 12 18:13:36.880976 master-0 kubenswrapper[7337]: I0312 18:13:36.880954 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx"
Mar 12 18:13:36.907440 master-0 kubenswrapper[7337]: W0312 18:13:36.907383 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00755a4e_124c_4a51_b1c5_7c505b3637a8.slice/crio-7c9e68c50d09c9f8a89015bdfd2c1cf33c28b6a7d845aef581a57e003e8e6cc7 WatchSource:0}: Error finding container 7c9e68c50d09c9f8a89015bdfd2c1cf33c28b6a7d845aef581a57e003e8e6cc7: Status 404 returned error can't find the container with id 7c9e68c50d09c9f8a89015bdfd2c1cf33c28b6a7d845aef581a57e003e8e6cc7
Mar 12 18:13:37.214079 master-0 kubenswrapper[7337]: I0312 18:13:37.214040 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77899cf6d-6nvn4_f3a2cda2-b70f-4128-a1be-48503f5aad6d/cluster-olm-operator/0.log"
Mar 12 18:13:37.215183 master-0 kubenswrapper[7337]: I0312 18:13:37.215163 7337 generic.go:334] "Generic (PLEG): container finished" podID="f3a2cda2-b70f-4128-a1be-48503f5aad6d" containerID="3a34f5e0d15dd4e7c330e2c8919e65deba96f9d77b56fa794a4877221990e20a" exitCode=255
Mar 12 18:13:37.215256 master-0 kubenswrapper[7337]: I0312 18:13:37.215211 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" event={"ID":"f3a2cda2-b70f-4128-a1be-48503f5aad6d","Type":"ContainerDied","Data":"3a34f5e0d15dd4e7c330e2c8919e65deba96f9d77b56fa794a4877221990e20a"}
Mar 12 18:13:37.215650 master-0 kubenswrapper[7337]: I0312 18:13:37.215534 7337 scope.go:117] "RemoveContainer" containerID="3a34f5e0d15dd4e7c330e2c8919e65deba96f9d77b56fa794a4877221990e20a"
Mar 12 18:13:37.217615 master-0 kubenswrapper[7337]: I0312 18:13:37.217560 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" event={"ID":"00755a4e-124c-4a51-b1c5-7c505b3637a8","Type":"ContainerStarted","Data":"7c9e68c50d09c9f8a89015bdfd2c1cf33c28b6a7d845aef581a57e003e8e6cc7"}
Mar 12 18:13:38.106291 master-0 kubenswrapper[7337]: I0312 18:13:38.105937 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-client-ca\") pod \"controller-manager-5cc4c76f78-8gkkj\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") " pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj"
Mar 12 18:13:38.106546 master-0 kubenswrapper[7337]: E0312 18:13:38.106130 7337 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 12 18:13:38.106546 master-0 kubenswrapper[7337]: E0312 18:13:38.106420 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-client-ca podName:910180ff-efca-4c61-b250-6b0ba7a76089 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:46.106396443 +0000 UTC m=+26.574997420 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-client-ca") pod "controller-manager-5cc4c76f78-8gkkj" (UID: "910180ff-efca-4c61-b250-6b0ba7a76089") : configmap "client-ca" not found
Mar 12 18:13:38.222617 master-0 kubenswrapper[7337]: I0312 18:13:38.222021 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77899cf6d-6nvn4_f3a2cda2-b70f-4128-a1be-48503f5aad6d/cluster-olm-operator/0.log"
Mar 12 18:13:38.223781 master-0 kubenswrapper[7337]: I0312 18:13:38.223718 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" event={"ID":"f3a2cda2-b70f-4128-a1be-48503f5aad6d","Type":"ContainerStarted","Data":"14be846643126f0f684988fbee828e3ae28a2a3ed42495436ab25923fcd90c1e"}
Mar 12 18:13:38.325417 master-0 kubenswrapper[7337]: I0312 18:13:38.325354 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:13:38.325661 master-0 kubenswrapper[7337]: I0312 18:13:38.325495 7337 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 18:13:38.344288 master-0 kubenswrapper[7337]: I0312 18:13:38.344242 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:13:39.005616 master-0 kubenswrapper[7337]: I0312 18:13:39.004896 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq"]
Mar 12 18:13:39.006912 master-0 kubenswrapper[7337]: I0312 18:13:39.006738 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-jqj5k"]
Mar 12 18:13:39.006912 master-0 kubenswrapper[7337]: I0312 18:13:39.006810 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp"]
Mar 12 18:13:39.007810 master-0 kubenswrapper[7337]: I0312 18:13:39.007749 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-4527l"]
Mar 12 18:13:40.080365 master-0 kubenswrapper[7337]: I0312 18:13:40.075966 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-85689756c5-8jg6d"]
Mar 12 18:13:40.080365 master-0 kubenswrapper[7337]: I0312 18:13:40.076840 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.087586 master-0 kubenswrapper[7337]: I0312 18:13:40.086049 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j"
Mar 12 18:13:40.087586 master-0 kubenswrapper[7337]: I0312 18:13:40.086565 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 12 18:13:40.087586 master-0 kubenswrapper[7337]: I0312 18:13:40.086698 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 12 18:13:40.087586 master-0 kubenswrapper[7337]: I0312 18:13:40.086763 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 12 18:13:40.087586 master-0 kubenswrapper[7337]: I0312 18:13:40.086849 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 12 18:13:40.087586 master-0 kubenswrapper[7337]: I0312 18:13:40.086725 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 12 18:13:40.087586 master-0 kubenswrapper[7337]: I0312 18:13:40.086940 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0"
Mar 12 18:13:40.087586 master-0 kubenswrapper[7337]: I0312 18:13:40.086996 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0"
Mar 12 18:13:40.087586 master-0 kubenswrapper[7337]: I0312 18:13:40.087048 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 12 18:13:40.087586 master-0 kubenswrapper[7337]: I0312 18:13:40.087056 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 12 18:13:40.093560 master-0 kubenswrapper[7337]: I0312 18:13:40.090792 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 12 18:13:40.099347 master-0 kubenswrapper[7337]: I0312 18:13:40.096633 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-85689756c5-8jg6d"]
Mar 12 18:13:40.203298 master-0 kubenswrapper[7337]: I0312 18:13:40.203264 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 12 18:13:40.203781 master-0 kubenswrapper[7337]: I0312 18:13:40.203756 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 12 18:13:40.206399 master-0 kubenswrapper[7337]: I0312 18:13:40.205959 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 12 18:13:40.215496 master-0 kubenswrapper[7337]: I0312 18:13:40.215209 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 12 18:13:40.232550 master-0 kubenswrapper[7337]: I0312 18:13:40.232500 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-audit\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.232626 master-0 kubenswrapper[7337]: I0312 18:13:40.232612 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-encryption-config\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.232659 master-0 kubenswrapper[7337]: I0312 18:13:40.232638 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-trusted-ca-bundle\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.232690 master-0 kubenswrapper[7337]: I0312 18:13:40.232657 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-etcd-serving-ca\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.232718 master-0 kubenswrapper[7337]: I0312 18:13:40.232689 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-image-import-ca\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.232718 master-0 kubenswrapper[7337]: I0312 18:13:40.232713 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9548479a-8ca1-400d-b682-c78fb116e7b6-audit-dir\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.232774 master-0 kubenswrapper[7337]: I0312 18:13:40.232735 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb54c\" (UniqueName: \"kubernetes.io/projected/9548479a-8ca1-400d-b682-c78fb116e7b6-kube-api-access-zb54c\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.232774 master-0 kubenswrapper[7337]: I0312 18:13:40.232760 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-etcd-client\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.232827 master-0 kubenswrapper[7337]: I0312 18:13:40.232779 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-serving-cert\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.232827 master-0 kubenswrapper[7337]: I0312 18:13:40.232815 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9548479a-8ca1-400d-b682-c78fb116e7b6-node-pullsecrets\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.232884 master-0 kubenswrapper[7337]: I0312 18:13:40.232847 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-config\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.246463 master-0 kubenswrapper[7337]: I0312 18:13:40.246399 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" event={"ID":"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed","Type":"ContainerStarted","Data":"85053d5f110db3eb5945372c71fba0aaee9c7dfe111d937780aa0b35eca2e681"}
Mar 12 18:13:40.250375 master-0 kubenswrapper[7337]: I0312 18:13:40.250326 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" event={"ID":"d94dc349-c5cb-4f12-8e48-867030af4981","Type":"ContainerStarted","Data":"2acd016733769b6d86086e869e4b5b990685163236e57389d21ff18ee823169b"}
Mar 12 18:13:40.253194 master-0 kubenswrapper[7337]: I0312 18:13:40.253037 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k" event={"ID":"8ad05507-e242-4ff8-ae80-c16ff9ee68e2","Type":"ContainerStarted","Data":"9c936c5cf4325b6eaecd87ab37df8b339b08dfc494b408b448e5f3edd8efcd5a"}
Mar 12 18:13:40.258168 master-0 kubenswrapper[7337]: I0312 18:13:40.257667 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" event={"ID":"e22c7035-4b7a-48cb-9abb-db277b387842","Type":"ContainerStarted","Data":"238ce3dd6965f9273cbd743e0b3e1979d392d0ae170e37e7a7824e217686dfd8"}
Mar 12 18:13:40.335215 master-0 kubenswrapper[7337]: I0312 18:13:40.335100 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-image-import-ca\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.335215 master-0 kubenswrapper[7337]: I0312 18:13:40.335159 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56a4489e-252b-44a7-8310-3b699b2af7d6-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"56a4489e-252b-44a7-8310-3b699b2af7d6\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 12 18:13:40.335215 master-0 kubenswrapper[7337]: I0312 18:13:40.335182 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9548479a-8ca1-400d-b682-c78fb116e7b6-audit-dir\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.335215 master-0 kubenswrapper[7337]: I0312 18:13:40.335212 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb54c\" (UniqueName: \"kubernetes.io/projected/9548479a-8ca1-400d-b682-c78fb116e7b6-kube-api-access-zb54c\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.335558 master-0 kubenswrapper[7337]: I0312 18:13:40.335231 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-etcd-client\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.335558 master-0 kubenswrapper[7337]: I0312 18:13:40.335260 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-serving-cert\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.335558 master-0 kubenswrapper[7337]: I0312 18:13:40.335314 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56a4489e-252b-44a7-8310-3b699b2af7d6-var-lock\") pod \"installer-1-master-0\" (UID: \"56a4489e-252b-44a7-8310-3b699b2af7d6\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 12 18:13:40.335558 master-0 kubenswrapper[7337]: I0312 18:13:40.335333 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56a4489e-252b-44a7-8310-3b699b2af7d6-kube-api-access\") pod \"installer-1-master-0\" (UID: \"56a4489e-252b-44a7-8310-3b699b2af7d6\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 12 18:13:40.335558 master-0 kubenswrapper[7337]: I0312 18:13:40.335351 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9548479a-8ca1-400d-b682-c78fb116e7b6-node-pullsecrets\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.335558 master-0 kubenswrapper[7337]: I0312 18:13:40.335383 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-config\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.335558 master-0 kubenswrapper[7337]: I0312 18:13:40.335407 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-audit\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.335558 master-0 kubenswrapper[7337]: I0312 18:13:40.335560 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-encryption-config\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.335905 master-0 kubenswrapper[7337]: I0312 18:13:40.335597 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-trusted-ca-bundle\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.335905 master-0 kubenswrapper[7337]: I0312 18:13:40.335622 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-etcd-serving-ca\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.335905 master-0 kubenswrapper[7337]: I0312 18:13:40.335862 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9548479a-8ca1-400d-b682-c78fb116e7b6-audit-dir\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.336034 master-0 kubenswrapper[7337]: E0312 18:13:40.335921 7337 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 12 18:13:40.336034 master-0 kubenswrapper[7337]: I0312 18:13:40.335960 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-image-import-ca\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.336034 master-0 kubenswrapper[7337]: E0312 18:13:40.336002 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-audit podName:9548479a-8ca1-400d-b682-c78fb116e7b6 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:40.83598378 +0000 UTC m=+21.304584717 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-audit") pod "apiserver-85689756c5-8jg6d" (UID: "9548479a-8ca1-400d-b682-c78fb116e7b6") : configmap "audit-0" not found
Mar 12 18:13:40.336170 master-0 kubenswrapper[7337]: I0312 18:13:40.336039 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9548479a-8ca1-400d-b682-c78fb116e7b6-node-pullsecrets\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.336291 master-0 kubenswrapper[7337]: I0312 18:13:40.336268 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-etcd-serving-ca\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.336624 master-0 kubenswrapper[7337]: I0312 18:13:40.336601 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-config\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.336745 master-0 kubenswrapper[7337]: E0312 18:13:40.336709 7337 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 12 18:13:40.336745 master-0 kubenswrapper[7337]: E0312 18:13:40.336737 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-serving-cert podName:9548479a-8ca1-400d-b682-c78fb116e7b6 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:40.836729569 +0000 UTC m=+21.305330516 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-serving-cert") pod "apiserver-85689756c5-8jg6d" (UID: "9548479a-8ca1-400d-b682-c78fb116e7b6") : secret "serving-cert" not found
Mar 12 18:13:40.339936 master-0 kubenswrapper[7337]: I0312 18:13:40.339640 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-trusted-ca-bundle\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.344636 master-0 kubenswrapper[7337]: I0312 18:13:40.344118 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-etcd-client\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.346404 master-0 kubenswrapper[7337]: I0312 18:13:40.346313 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-encryption-config\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.374554 master-0 kubenswrapper[7337]: I0312 18:13:40.371716 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb54c\" (UniqueName: \"kubernetes.io/projected/9548479a-8ca1-400d-b682-c78fb116e7b6-kube-api-access-zb54c\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.436438 master-0 kubenswrapper[7337]: I0312 18:13:40.436395 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56a4489e-252b-44a7-8310-3b699b2af7d6-var-lock\") pod \"installer-1-master-0\" (UID: \"56a4489e-252b-44a7-8310-3b699b2af7d6\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 12 18:13:40.436737 master-0 kubenswrapper[7337]: I0312 18:13:40.436450 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56a4489e-252b-44a7-8310-3b699b2af7d6-kube-api-access\") pod \"installer-1-master-0\" (UID: \"56a4489e-252b-44a7-8310-3b699b2af7d6\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 12 18:13:40.436737 master-0 kubenswrapper[7337]: I0312 18:13:40.436557 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56a4489e-252b-44a7-8310-3b699b2af7d6-var-lock\") pod \"installer-1-master-0\" (UID: \"56a4489e-252b-44a7-8310-3b699b2af7d6\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 12 18:13:40.438672 master-0 kubenswrapper[7337]: I0312 18:13:40.437024 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56a4489e-252b-44a7-8310-3b699b2af7d6-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"56a4489e-252b-44a7-8310-3b699b2af7d6\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 12 18:13:40.438672 master-0 kubenswrapper[7337]: I0312 18:13:40.437119 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56a4489e-252b-44a7-8310-3b699b2af7d6-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"56a4489e-252b-44a7-8310-3b699b2af7d6\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 12 18:13:40.530032 master-0 kubenswrapper[7337]: I0312 18:13:40.529968 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56a4489e-252b-44a7-8310-3b699b2af7d6-kube-api-access\") pod \"installer-1-master-0\" (UID: \"56a4489e-252b-44a7-8310-3b699b2af7d6\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 12 18:13:40.545477 master-0 kubenswrapper[7337]: I0312 18:13:40.545421 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 12 18:13:40.778722 master-0 kubenswrapper[7337]: I0312 18:13:40.778661 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 12 18:13:40.791585 master-0 kubenswrapper[7337]: W0312 18:13:40.791533 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod56a4489e_252b_44a7_8310_3b699b2af7d6.slice/crio-43f219edc6bd711e95467d2fb5b26294bc0d574e573848834698d8c0e26127fa WatchSource:0}: Error finding container 43f219edc6bd711e95467d2fb5b26294bc0d574e573848834698d8c0e26127fa: Status 404 returned error can't find the container with id 43f219edc6bd711e95467d2fb5b26294bc0d574e573848834698d8c0e26127fa
Mar 12 18:13:40.845521 master-0 kubenswrapper[7337]: I0312 18:13:40.845462 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-serving-cert\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.845630 master-0 kubenswrapper[7337]: I0312 18:13:40.845543 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-audit\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:40.845685 master-0 kubenswrapper[7337]: E0312 18:13:40.845621 7337 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 12 18:13:40.845732 master-0 kubenswrapper[7337]: E0312 18:13:40.845704 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-audit podName:9548479a-8ca1-400d-b682-c78fb116e7b6 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:41.845679228 +0000 UTC m=+22.314280175 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-audit") pod "apiserver-85689756c5-8jg6d" (UID: "9548479a-8ca1-400d-b682-c78fb116e7b6") : configmap "audit-0" not found
Mar 12 18:13:40.845793 master-0 kubenswrapper[7337]: E0312 18:13:40.845761 7337 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 12 18:13:40.845879 master-0 kubenswrapper[7337]: E0312 18:13:40.845860 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-serving-cert podName:9548479a-8ca1-400d-b682-c78fb116e7b6 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:41.845836642 +0000 UTC m=+22.314437659 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-serving-cert") pod "apiserver-85689756c5-8jg6d" (UID: "9548479a-8ca1-400d-b682-c78fb116e7b6") : secret "serving-cert" not found Mar 12 18:13:41.270780 master-0 kubenswrapper[7337]: I0312 18:13:41.269827 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" event={"ID":"062f1b21-2ffc-47da-8334-427c3b2a1a90","Type":"ContainerStarted","Data":"385dbea872bd37bf8e7a76f3902ee88fd0be82523d84cbe8f74298971654ec6b"} Mar 12 18:13:41.274230 master-0 kubenswrapper[7337]: I0312 18:13:41.273069 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"56a4489e-252b-44a7-8310-3b699b2af7d6","Type":"ContainerStarted","Data":"53ddad5a8c9cbca89afea7f839486d18707671bc74821fb0560b014cdd65817f"} Mar 12 18:13:41.274230 master-0 kubenswrapper[7337]: I0312 18:13:41.273099 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"56a4489e-252b-44a7-8310-3b699b2af7d6","Type":"ContainerStarted","Data":"43f219edc6bd711e95467d2fb5b26294bc0d574e573848834698d8c0e26127fa"} Mar 12 18:13:41.311568 master-0 kubenswrapper[7337]: I0312 18:13:41.311465 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=1.311433997 podStartE2EDuration="1.311433997s" podCreationTimestamp="2026-03-12 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:13:41.309458826 +0000 UTC m=+21.778059783" watchObservedRunningTime="2026-03-12 18:13:41.311433997 +0000 UTC m=+21.780034944" Mar 12 18:13:41.859069 master-0 kubenswrapper[7337]: I0312 18:13:41.858985 7337 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-serving-cert\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d" Mar 12 18:13:41.859266 master-0 kubenswrapper[7337]: I0312 18:13:41.859098 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-audit\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d" Mar 12 18:13:41.859266 master-0 kubenswrapper[7337]: E0312 18:13:41.859240 7337 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 12 18:13:41.859330 master-0 kubenswrapper[7337]: E0312 18:13:41.859312 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-audit podName:9548479a-8ca1-400d-b682-c78fb116e7b6 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:43.859288236 +0000 UTC m=+24.327889173 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-audit") pod "apiserver-85689756c5-8jg6d" (UID: "9548479a-8ca1-400d-b682-c78fb116e7b6") : configmap "audit-0" not found Mar 12 18:13:41.859762 master-0 kubenswrapper[7337]: E0312 18:13:41.859543 7337 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 12 18:13:41.859762 master-0 kubenswrapper[7337]: E0312 18:13:41.859639 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-serving-cert podName:9548479a-8ca1-400d-b682-c78fb116e7b6 nodeName:}" failed. 
No retries permitted until 2026-03-12 18:13:43.859618214 +0000 UTC m=+24.328219161 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-serving-cert") pod "apiserver-85689756c5-8jg6d" (UID: "9548479a-8ca1-400d-b682-c78fb116e7b6") : secret "serving-cert" not found Mar 12 18:13:42.574309 master-0 kubenswrapper[7337]: I0312 18:13:42.574182 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-client-ca\") pod \"route-controller-manager-5b45cc45cc-kjvgg\" (UID: \"41644344-9d23-4ed1-9044-4fe65cab6159\") " pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg" Mar 12 18:13:42.575335 master-0 kubenswrapper[7337]: E0312 18:13:42.574415 7337 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 12 18:13:42.575335 master-0 kubenswrapper[7337]: E0312 18:13:42.574597 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-client-ca podName:41644344-9d23-4ed1-9044-4fe65cab6159 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:58.574564168 +0000 UTC m=+39.043165125 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-client-ca") pod "route-controller-manager-5b45cc45cc-kjvgg" (UID: "41644344-9d23-4ed1-9044-4fe65cab6159") : configmap "client-ca" not found Mar 12 18:13:42.575335 master-0 kubenswrapper[7337]: I0312 18:13:42.574807 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41644344-9d23-4ed1-9044-4fe65cab6159-serving-cert\") pod \"route-controller-manager-5b45cc45cc-kjvgg\" (UID: \"41644344-9d23-4ed1-9044-4fe65cab6159\") " pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg" Mar 12 18:13:42.575335 master-0 kubenswrapper[7337]: E0312 18:13:42.575050 7337 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 12 18:13:42.575335 master-0 kubenswrapper[7337]: E0312 18:13:42.575090 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41644344-9d23-4ed1-9044-4fe65cab6159-serving-cert podName:41644344-9d23-4ed1-9044-4fe65cab6159 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:58.575078821 +0000 UTC m=+39.043679778 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41644344-9d23-4ed1-9044-4fe65cab6159-serving-cert") pod "route-controller-manager-5b45cc45cc-kjvgg" (UID: "41644344-9d23-4ed1-9044-4fe65cab6159") : secret "serving-cert" not found Mar 12 18:13:43.292423 master-0 kubenswrapper[7337]: I0312 18:13:43.292113 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4k8wm" event={"ID":"d92dddc8-a810-43f5-8beb-32d1c8ad8381","Type":"ContainerStarted","Data":"78e51e21abb2ecc673ee6cbfbbed20a5aaa99d523e60331e6a522904a2085fea"} Mar 12 18:13:43.371488 master-0 kubenswrapper[7337]: I0312 18:13:43.371149 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-85689756c5-8jg6d"] Mar 12 18:13:43.371488 master-0 kubenswrapper[7337]: E0312 18:13:43.371461 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-85689756c5-8jg6d" podUID="9548479a-8ca1-400d-b682-c78fb116e7b6" Mar 12 18:13:43.757074 master-0 kubenswrapper[7337]: I0312 18:13:43.756982 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 12 18:13:43.757782 master-0 kubenswrapper[7337]: I0312 18:13:43.757406 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 12 18:13:43.759009 master-0 kubenswrapper[7337]: I0312 18:13:43.758976 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 12 18:13:43.764680 master-0 kubenswrapper[7337]: I0312 18:13:43.764633 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 12 18:13:43.902222 master-0 kubenswrapper[7337]: I0312 18:13:43.902126 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e418d797-2c31-404b-9dc3-251399e42542-kube-api-access\") pod \"installer-1-master-0\" (UID: \"e418d797-2c31-404b-9dc3-251399e42542\") " pod="openshift-etcd/installer-1-master-0" Mar 12 18:13:43.902222 master-0 kubenswrapper[7337]: I0312 18:13:43.902168 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e418d797-2c31-404b-9dc3-251399e42542-var-lock\") pod \"installer-1-master-0\" (UID: \"e418d797-2c31-404b-9dc3-251399e42542\") " pod="openshift-etcd/installer-1-master-0" Mar 12 18:13:43.902222 master-0 kubenswrapper[7337]: I0312 18:13:43.902224 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-serving-cert\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d" Mar 12 18:13:43.902500 master-0 kubenswrapper[7337]: I0312 18:13:43.902265 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e418d797-2c31-404b-9dc3-251399e42542-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"e418d797-2c31-404b-9dc3-251399e42542\") " 
pod="openshift-etcd/installer-1-master-0" Mar 12 18:13:43.902500 master-0 kubenswrapper[7337]: I0312 18:13:43.902286 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-audit\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d" Mar 12 18:13:43.902500 master-0 kubenswrapper[7337]: E0312 18:13:43.902382 7337 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 12 18:13:43.902500 master-0 kubenswrapper[7337]: E0312 18:13:43.902433 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-audit podName:9548479a-8ca1-400d-b682-c78fb116e7b6 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:47.902415536 +0000 UTC m=+28.371016563 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-audit") pod "apiserver-85689756c5-8jg6d" (UID: "9548479a-8ca1-400d-b682-c78fb116e7b6") : configmap "audit-0" not found Mar 12 18:13:43.908274 master-0 kubenswrapper[7337]: I0312 18:13:43.908242 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-serving-cert\") pod \"apiserver-85689756c5-8jg6d\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " pod="openshift-apiserver/apiserver-85689756c5-8jg6d" Mar 12 18:13:44.004170 master-0 kubenswrapper[7337]: I0312 18:13:44.004093 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e418d797-2c31-404b-9dc3-251399e42542-var-lock\") pod \"installer-1-master-0\" (UID: \"e418d797-2c31-404b-9dc3-251399e42542\") " 
pod="openshift-etcd/installer-1-master-0" Mar 12 18:13:44.004367 master-0 kubenswrapper[7337]: I0312 18:13:44.004257 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e418d797-2c31-404b-9dc3-251399e42542-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"e418d797-2c31-404b-9dc3-251399e42542\") " pod="openshift-etcd/installer-1-master-0" Mar 12 18:13:44.005459 master-0 kubenswrapper[7337]: I0312 18:13:44.004425 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e418d797-2c31-404b-9dc3-251399e42542-kube-api-access\") pod \"installer-1-master-0\" (UID: \"e418d797-2c31-404b-9dc3-251399e42542\") " pod="openshift-etcd/installer-1-master-0" Mar 12 18:13:44.005459 master-0 kubenswrapper[7337]: I0312 18:13:44.004501 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e418d797-2c31-404b-9dc3-251399e42542-var-lock\") pod \"installer-1-master-0\" (UID: \"e418d797-2c31-404b-9dc3-251399e42542\") " pod="openshift-etcd/installer-1-master-0" Mar 12 18:13:44.005459 master-0 kubenswrapper[7337]: I0312 18:13:44.004606 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e418d797-2c31-404b-9dc3-251399e42542-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"e418d797-2c31-404b-9dc3-251399e42542\") " pod="openshift-etcd/installer-1-master-0" Mar 12 18:13:44.019946 master-0 kubenswrapper[7337]: I0312 18:13:44.019860 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e418d797-2c31-404b-9dc3-251399e42542-kube-api-access\") pod \"installer-1-master-0\" (UID: \"e418d797-2c31-404b-9dc3-251399e42542\") " pod="openshift-etcd/installer-1-master-0" Mar 12 18:13:44.083065 master-0 
kubenswrapper[7337]: I0312 18:13:44.083011 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 12 18:13:44.299072 master-0 kubenswrapper[7337]: I0312 18:13:44.298417 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-85689756c5-8jg6d" Mar 12 18:13:44.305277 master-0 kubenswrapper[7337]: I0312 18:13:44.305256 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-85689756c5-8jg6d" Mar 12 18:13:44.409942 master-0 kubenswrapper[7337]: I0312 18:13:44.409900 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-image-import-ca\") pod \"9548479a-8ca1-400d-b682-c78fb116e7b6\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " Mar 12 18:13:44.409942 master-0 kubenswrapper[7337]: I0312 18:13:44.409940 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-encryption-config\") pod \"9548479a-8ca1-400d-b682-c78fb116e7b6\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " Mar 12 18:13:44.410147 master-0 kubenswrapper[7337]: I0312 18:13:44.409986 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-serving-cert\") pod \"9548479a-8ca1-400d-b682-c78fb116e7b6\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " Mar 12 18:13:44.410147 master-0 kubenswrapper[7337]: I0312 18:13:44.410000 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9548479a-8ca1-400d-b682-c78fb116e7b6-audit-dir\") pod \"9548479a-8ca1-400d-b682-c78fb116e7b6\" (UID: 
\"9548479a-8ca1-400d-b682-c78fb116e7b6\") " Mar 12 18:13:44.410147 master-0 kubenswrapper[7337]: I0312 18:13:44.410015 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-etcd-serving-ca\") pod \"9548479a-8ca1-400d-b682-c78fb116e7b6\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " Mar 12 18:13:44.410147 master-0 kubenswrapper[7337]: I0312 18:13:44.410062 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-trusted-ca-bundle\") pod \"9548479a-8ca1-400d-b682-c78fb116e7b6\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " Mar 12 18:13:44.410147 master-0 kubenswrapper[7337]: I0312 18:13:44.410092 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9548479a-8ca1-400d-b682-c78fb116e7b6-node-pullsecrets\") pod \"9548479a-8ca1-400d-b682-c78fb116e7b6\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " Mar 12 18:13:44.410147 master-0 kubenswrapper[7337]: I0312 18:13:44.410113 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb54c\" (UniqueName: \"kubernetes.io/projected/9548479a-8ca1-400d-b682-c78fb116e7b6-kube-api-access-zb54c\") pod \"9548479a-8ca1-400d-b682-c78fb116e7b6\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " Mar 12 18:13:44.410147 master-0 kubenswrapper[7337]: I0312 18:13:44.410134 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-etcd-client\") pod \"9548479a-8ca1-400d-b682-c78fb116e7b6\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " Mar 12 18:13:44.410353 master-0 kubenswrapper[7337]: I0312 18:13:44.410156 7337 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-config\") pod \"9548479a-8ca1-400d-b682-c78fb116e7b6\" (UID: \"9548479a-8ca1-400d-b682-c78fb116e7b6\") " Mar 12 18:13:44.410974 master-0 kubenswrapper[7337]: I0312 18:13:44.410723 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9548479a-8ca1-400d-b682-c78fb116e7b6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9548479a-8ca1-400d-b682-c78fb116e7b6" (UID: "9548479a-8ca1-400d-b682-c78fb116e7b6"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:13:44.410974 master-0 kubenswrapper[7337]: I0312 18:13:44.410762 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9548479a-8ca1-400d-b682-c78fb116e7b6-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "9548479a-8ca1-400d-b682-c78fb116e7b6" (UID: "9548479a-8ca1-400d-b682-c78fb116e7b6"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:13:44.410974 master-0 kubenswrapper[7337]: I0312 18:13:44.410824 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "9548479a-8ca1-400d-b682-c78fb116e7b6" (UID: "9548479a-8ca1-400d-b682-c78fb116e7b6"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:13:44.410974 master-0 kubenswrapper[7337]: I0312 18:13:44.410958 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-config" (OuterVolumeSpecName: "config") pod "9548479a-8ca1-400d-b682-c78fb116e7b6" (UID: "9548479a-8ca1-400d-b682-c78fb116e7b6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:13:44.411144 master-0 kubenswrapper[7337]: I0312 18:13:44.411057 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "9548479a-8ca1-400d-b682-c78fb116e7b6" (UID: "9548479a-8ca1-400d-b682-c78fb116e7b6"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:13:44.411288 master-0 kubenswrapper[7337]: I0312 18:13:44.411255 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9548479a-8ca1-400d-b682-c78fb116e7b6" (UID: "9548479a-8ca1-400d-b682-c78fb116e7b6"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:13:44.413580 master-0 kubenswrapper[7337]: I0312 18:13:44.413536 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "9548479a-8ca1-400d-b682-c78fb116e7b6" (UID: "9548479a-8ca1-400d-b682-c78fb116e7b6"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:13:44.413656 master-0 kubenswrapper[7337]: I0312 18:13:44.413632 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9548479a-8ca1-400d-b682-c78fb116e7b6" (UID: "9548479a-8ca1-400d-b682-c78fb116e7b6"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:13:44.413693 master-0 kubenswrapper[7337]: I0312 18:13:44.413654 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "9548479a-8ca1-400d-b682-c78fb116e7b6" (UID: "9548479a-8ca1-400d-b682-c78fb116e7b6"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:13:44.414041 master-0 kubenswrapper[7337]: I0312 18:13:44.413973 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9548479a-8ca1-400d-b682-c78fb116e7b6-kube-api-access-zb54c" (OuterVolumeSpecName: "kube-api-access-zb54c") pod "9548479a-8ca1-400d-b682-c78fb116e7b6" (UID: "9548479a-8ca1-400d-b682-c78fb116e7b6"). InnerVolumeSpecName "kube-api-access-zb54c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:13:44.512168 master-0 kubenswrapper[7337]: I0312 18:13:44.511828 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb54c\" (UniqueName: \"kubernetes.io/projected/9548479a-8ca1-400d-b682-c78fb116e7b6-kube-api-access-zb54c\") on node \"master-0\" DevicePath \"\"" Mar 12 18:13:44.512168 master-0 kubenswrapper[7337]: I0312 18:13:44.511867 7337 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-etcd-client\") on node \"master-0\" DevicePath \"\"" Mar 12 18:13:44.512168 master-0 kubenswrapper[7337]: I0312 18:13:44.511882 7337 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:13:44.512168 master-0 kubenswrapper[7337]: I0312 18:13:44.511894 7337 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-image-import-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 18:13:44.512168 master-0 kubenswrapper[7337]: I0312 18:13:44.511906 7337 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-encryption-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:13:44.512168 master-0 kubenswrapper[7337]: I0312 18:13:44.511918 7337 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9548479a-8ca1-400d-b682-c78fb116e7b6-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 18:13:44.512168 master-0 kubenswrapper[7337]: I0312 18:13:44.511929 7337 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9548479a-8ca1-400d-b682-c78fb116e7b6-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:13:44.512168 master-0 kubenswrapper[7337]: I0312 18:13:44.511941 7337 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 18:13:44.512168 master-0 kubenswrapper[7337]: I0312 18:13:44.511954 7337 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:13:44.512168 master-0 kubenswrapper[7337]: I0312 18:13:44.511967 7337 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9548479a-8ca1-400d-b682-c78fb116e7b6-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Mar 12 18:13:44.822488 master-0 kubenswrapper[7337]: I0312 18:13:44.822431 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj"]
Mar 12 18:13:44.823076 master-0 kubenswrapper[7337]: E0312 18:13:44.822870 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj" podUID="910180ff-efca-4c61-b250-6b0ba7a76089"
Mar 12 18:13:44.828733 master-0 kubenswrapper[7337]: I0312 18:13:44.828632 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"]
Mar 12 18:13:44.829191 master-0 kubenswrapper[7337]: E0312 18:13:44.829153 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg" podUID="41644344-9d23-4ed1-9044-4fe65cab6159"
Mar 12 18:13:45.315111 master-0 kubenswrapper[7337]: I0312 18:13:45.301781 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"
Mar 12 18:13:45.315111 master-0 kubenswrapper[7337]: I0312 18:13:45.302110 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-85689756c5-8jg6d"
Mar 12 18:13:45.315111 master-0 kubenswrapper[7337]: I0312 18:13:45.305878 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj"
Mar 12 18:13:45.315111 master-0 kubenswrapper[7337]: I0312 18:13:45.312094 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"
Mar 12 18:13:45.321139 master-0 kubenswrapper[7337]: I0312 18:13:45.321105 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj"
Mar 12 18:13:45.399208 master-0 kubenswrapper[7337]: I0312 18:13:45.399148 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-5786c989f8-f6jgb"]
Mar 12 18:13:45.400044 master-0 kubenswrapper[7337]: I0312 18:13:45.400025 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.404498 master-0 kubenswrapper[7337]: I0312 18:13:45.403603 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 12 18:13:45.404498 master-0 kubenswrapper[7337]: I0312 18:13:45.403868 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 12 18:13:45.404498 master-0 kubenswrapper[7337]: I0312 18:13:45.403920 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 12 18:13:45.404498 master-0 kubenswrapper[7337]: I0312 18:13:45.404155 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 12 18:13:45.410485 master-0 kubenswrapper[7337]: I0312 18:13:45.409700 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-85689756c5-8jg6d"]
Mar 12 18:13:45.411000 master-0 kubenswrapper[7337]: I0312 18:13:45.410853 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 12 18:13:45.415539 master-0 kubenswrapper[7337]: I0312 18:13:45.414094 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-5786c989f8-f6jgb"]
Mar 12 18:13:45.415539 master-0 kubenswrapper[7337]: I0312 18:13:45.414882 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 12 18:13:45.415539 master-0 kubenswrapper[7337]: I0312 18:13:45.415089 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 12 18:13:45.415539 master-0 kubenswrapper[7337]: I0312 18:13:45.415101 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 12 18:13:45.415539 master-0 kubenswrapper[7337]: I0312 18:13:45.415361 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 12 18:13:45.420971 master-0 kubenswrapper[7337]: I0312 18:13:45.419256 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-85689756c5-8jg6d"]
Mar 12 18:13:45.421142 master-0 kubenswrapper[7337]: I0312 18:13:45.421109 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-proxy-ca-bundles\") pod \"910180ff-efca-4c61-b250-6b0ba7a76089\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") "
Mar 12 18:13:45.421203 master-0 kubenswrapper[7337]: I0312 18:13:45.421176 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-978rf\" (UniqueName: \"kubernetes.io/projected/910180ff-efca-4c61-b250-6b0ba7a76089-kube-api-access-978rf\") pod \"910180ff-efca-4c61-b250-6b0ba7a76089\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") "
Mar 12 18:13:45.421246 master-0 kubenswrapper[7337]: I0312 18:13:45.421230 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-config\") pod \"910180ff-efca-4c61-b250-6b0ba7a76089\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") "
Mar 12 18:13:45.421297 master-0 kubenswrapper[7337]: I0312 18:13:45.421280 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/910180ff-efca-4c61-b250-6b0ba7a76089-serving-cert\") pod \"910180ff-efca-4c61-b250-6b0ba7a76089\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") "
Mar 12 18:13:45.421344 master-0 kubenswrapper[7337]: I0312 18:13:45.421312 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-config\") pod \"41644344-9d23-4ed1-9044-4fe65cab6159\" (UID: \"41644344-9d23-4ed1-9044-4fe65cab6159\") "
Mar 12 18:13:45.421344 master-0 kubenswrapper[7337]: I0312 18:13:45.421337 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvqfn\" (UniqueName: \"kubernetes.io/projected/41644344-9d23-4ed1-9044-4fe65cab6159-kube-api-access-jvqfn\") pod \"41644344-9d23-4ed1-9044-4fe65cab6159\" (UID: \"41644344-9d23-4ed1-9044-4fe65cab6159\") "
Mar 12 18:13:45.423572 master-0 kubenswrapper[7337]: I0312 18:13:45.423254 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-config" (OuterVolumeSpecName: "config") pod "910180ff-efca-4c61-b250-6b0ba7a76089" (UID: "910180ff-efca-4c61-b250-6b0ba7a76089"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:13:45.425768 master-0 kubenswrapper[7337]: I0312 18:13:45.425497 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "910180ff-efca-4c61-b250-6b0ba7a76089" (UID: "910180ff-efca-4c61-b250-6b0ba7a76089"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:13:45.425908 master-0 kubenswrapper[7337]: I0312 18:13:45.425839 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-config" (OuterVolumeSpecName: "config") pod "41644344-9d23-4ed1-9044-4fe65cab6159" (UID: "41644344-9d23-4ed1-9044-4fe65cab6159"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:13:45.427556 master-0 kubenswrapper[7337]: I0312 18:13:45.426182 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41644344-9d23-4ed1-9044-4fe65cab6159-kube-api-access-jvqfn" (OuterVolumeSpecName: "kube-api-access-jvqfn") pod "41644344-9d23-4ed1-9044-4fe65cab6159" (UID: "41644344-9d23-4ed1-9044-4fe65cab6159"). InnerVolumeSpecName "kube-api-access-jvqfn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:13:45.431322 master-0 kubenswrapper[7337]: I0312 18:13:45.431067 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/910180ff-efca-4c61-b250-6b0ba7a76089-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "910180ff-efca-4c61-b250-6b0ba7a76089" (UID: "910180ff-efca-4c61-b250-6b0ba7a76089"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:13:45.431322 master-0 kubenswrapper[7337]: I0312 18:13:45.431207 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/910180ff-efca-4c61-b250-6b0ba7a76089-kube-api-access-978rf" (OuterVolumeSpecName: "kube-api-access-978rf") pod "910180ff-efca-4c61-b250-6b0ba7a76089" (UID: "910180ff-efca-4c61-b250-6b0ba7a76089"). InnerVolumeSpecName "kube-api-access-978rf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:13:45.435925 master-0 kubenswrapper[7337]: I0312 18:13:45.435891 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 12 18:13:45.523266 master-0 kubenswrapper[7337]: I0312 18:13:45.523203 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-audit\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.523451 master-0 kubenswrapper[7337]: I0312 18:13:45.523307 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-config\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.523451 master-0 kubenswrapper[7337]: I0312 18:13:45.523350 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9b41258c-ac1d-4e00-ac5e-732d85441f12-etcd-client\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.523451 master-0 kubenswrapper[7337]: I0312 18:13:45.523392 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9b41258c-ac1d-4e00-ac5e-732d85441f12-audit-dir\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.523451 master-0 kubenswrapper[7337]: I0312 18:13:45.523416 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-trusted-ca-bundle\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.523607 master-0 kubenswrapper[7337]: I0312 18:13:45.523467 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lmj2\" (UniqueName: \"kubernetes.io/projected/9b41258c-ac1d-4e00-ac5e-732d85441f12-kube-api-access-7lmj2\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.523607 master-0 kubenswrapper[7337]: I0312 18:13:45.523507 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9b41258c-ac1d-4e00-ac5e-732d85441f12-node-pullsecrets\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.523607 master-0 kubenswrapper[7337]: I0312 18:13:45.523551 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b41258c-ac1d-4e00-ac5e-732d85441f12-serving-cert\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.523607 master-0 kubenswrapper[7337]: I0312 18:13:45.523582 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-image-import-ca\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.523772 master-0 kubenswrapper[7337]: I0312 18:13:45.523722 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9b41258c-ac1d-4e00-ac5e-732d85441f12-encryption-config\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.523948 master-0 kubenswrapper[7337]: I0312 18:13:45.523926 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-etcd-serving-ca\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.524110 master-0 kubenswrapper[7337]: I0312 18:13:45.524090 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-978rf\" (UniqueName: \"kubernetes.io/projected/910180ff-efca-4c61-b250-6b0ba7a76089-kube-api-access-978rf\") on node \"master-0\" DevicePath \"\""
Mar 12 18:13:45.524154 master-0 kubenswrapper[7337]: I0312 18:13:45.524115 7337 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-config\") on node \"master-0\" DevicePath \"\""
Mar 12 18:13:45.524154 master-0 kubenswrapper[7337]: I0312 18:13:45.524131 7337 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9548479a-8ca1-400d-b682-c78fb116e7b6-audit\") on node \"master-0\" DevicePath \"\""
Mar 12 18:13:45.524154 master-0 kubenswrapper[7337]: I0312 18:13:45.524143 7337 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/910180ff-efca-4c61-b250-6b0ba7a76089-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 18:13:45.524154 master-0 kubenswrapper[7337]: I0312 18:13:45.524155 7337 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-config\") on node \"master-0\" DevicePath \"\""
Mar 12 18:13:45.524259 master-0 kubenswrapper[7337]: I0312 18:13:45.524166 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvqfn\" (UniqueName: \"kubernetes.io/projected/41644344-9d23-4ed1-9044-4fe65cab6159-kube-api-access-jvqfn\") on node \"master-0\" DevicePath \"\""
Mar 12 18:13:45.524259 master-0 kubenswrapper[7337]: I0312 18:13:45.524205 7337 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 12 18:13:45.625377 master-0 kubenswrapper[7337]: I0312 18:13:45.625309 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9b41258c-ac1d-4e00-ac5e-732d85441f12-etcd-client\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.625798 master-0 kubenswrapper[7337]: I0312 18:13:45.625766 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9b41258c-ac1d-4e00-ac5e-732d85441f12-audit-dir\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.625985 master-0 kubenswrapper[7337]: I0312 18:13:45.625848 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9b41258c-ac1d-4e00-ac5e-732d85441f12-audit-dir\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.626069 master-0 kubenswrapper[7337]: I0312 18:13:45.625921 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-trusted-ca-bundle\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.626166 master-0 kubenswrapper[7337]: I0312 18:13:45.626153 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lmj2\" (UniqueName: \"kubernetes.io/projected/9b41258c-ac1d-4e00-ac5e-732d85441f12-kube-api-access-7lmj2\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.626696 master-0 kubenswrapper[7337]: I0312 18:13:45.626681 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b41258c-ac1d-4e00-ac5e-732d85441f12-serving-cert\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.627257 master-0 kubenswrapper[7337]: I0312 18:13:45.627243 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9b41258c-ac1d-4e00-ac5e-732d85441f12-node-pullsecrets\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.627346 master-0 kubenswrapper[7337]: I0312 18:13:45.627324 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9b41258c-ac1d-4e00-ac5e-732d85441f12-node-pullsecrets\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.627424 master-0 kubenswrapper[7337]: I0312 18:13:45.626841 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-trusted-ca-bundle\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.628821 master-0 kubenswrapper[7337]: I0312 18:13:45.627581 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-image-import-ca\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.628821 master-0 kubenswrapper[7337]: I0312 18:13:45.627622 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9b41258c-ac1d-4e00-ac5e-732d85441f12-encryption-config\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.628821 master-0 kubenswrapper[7337]: I0312 18:13:45.627655 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-etcd-serving-ca\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.628821 master-0 kubenswrapper[7337]: I0312 18:13:45.627686 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-audit\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.628821 master-0 kubenswrapper[7337]: I0312 18:13:45.627732 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-config\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.628821 master-0 kubenswrapper[7337]: I0312 18:13:45.628080 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-image-import-ca\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.628821 master-0 kubenswrapper[7337]: I0312 18:13:45.628253 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-config\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.628821 master-0 kubenswrapper[7337]: I0312 18:13:45.628478 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9b41258c-ac1d-4e00-ac5e-732d85441f12-etcd-client\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.628821 master-0 kubenswrapper[7337]: I0312 18:13:45.628592 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-etcd-serving-ca\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.628821 master-0 kubenswrapper[7337]: I0312 18:13:45.628773 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-audit\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.629912 master-0 kubenswrapper[7337]: I0312 18:13:45.629892 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b41258c-ac1d-4e00-ac5e-732d85441f12-serving-cert\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.643637 master-0 kubenswrapper[7337]: I0312 18:13:45.643253 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9b41258c-ac1d-4e00-ac5e-732d85441f12-encryption-config\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.644531 master-0 kubenswrapper[7337]: I0312 18:13:45.644481 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lmj2\" (UniqueName: \"kubernetes.io/projected/9b41258c-ac1d-4e00-ac5e-732d85441f12-kube-api-access-7lmj2\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.726080 master-0 kubenswrapper[7337]: I0312 18:13:45.725966 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:13:45.731060 master-0 kubenswrapper[7337]: I0312 18:13:45.731035 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9548479a-8ca1-400d-b682-c78fb116e7b6" path="/var/lib/kubelet/pods/9548479a-8ca1-400d-b682-c78fb116e7b6/volumes"
Mar 12 18:13:46.132804 master-0 kubenswrapper[7337]: I0312 18:13:46.132751 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-client-ca\") pod \"controller-manager-5cc4c76f78-8gkkj\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") " pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj"
Mar 12 18:13:46.133909 master-0 kubenswrapper[7337]: I0312 18:13:46.133886 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-client-ca\") pod \"controller-manager-5cc4c76f78-8gkkj\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") " pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj"
Mar 12 18:13:46.233906 master-0 kubenswrapper[7337]: I0312 18:13:46.233854 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-client-ca\") pod \"910180ff-efca-4c61-b250-6b0ba7a76089\" (UID: \"910180ff-efca-4c61-b250-6b0ba7a76089\") "
Mar 12 18:13:46.234498 master-0 kubenswrapper[7337]: I0312 18:13:46.234450 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-client-ca" (OuterVolumeSpecName: "client-ca") pod "910180ff-efca-4c61-b250-6b0ba7a76089" (UID: "910180ff-efca-4c61-b250-6b0ba7a76089"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:13:46.304663 master-0 kubenswrapper[7337]: I0312 18:13:46.304596 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj"
Mar 12 18:13:46.305632 master-0 kubenswrapper[7337]: I0312 18:13:46.305446 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"
Mar 12 18:13:46.335705 master-0 kubenswrapper[7337]: I0312 18:13:46.335669 7337 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/910180ff-efca-4c61-b250-6b0ba7a76089-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 12 18:13:46.346802 master-0 kubenswrapper[7337]: I0312 18:13:46.345986 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj"]
Mar 12 18:13:46.351895 master-0 kubenswrapper[7337]: I0312 18:13:46.351843 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5cc4c76f78-8gkkj"]
Mar 12 18:13:46.372166 master-0 kubenswrapper[7337]: I0312 18:13:46.372077 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"]
Mar 12 18:13:46.373823 master-0 kubenswrapper[7337]: I0312 18:13:46.373801 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b45cc45cc-kjvgg"]
Mar 12 18:13:46.538693 master-0 kubenswrapper[7337]: I0312 18:13:46.538604 7337 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41644344-9d23-4ed1-9044-4fe65cab6159-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 12 18:13:46.538886 master-0 kubenswrapper[7337]: I0312 18:13:46.538872 7337 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41644344-9d23-4ed1-9044-4fe65cab6159-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 18:13:46.817671 master-0 kubenswrapper[7337]: I0312 18:13:46.817592 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht"]
Mar 12 18:13:46.822599 master-0 kubenswrapper[7337]: I0312 18:13:46.822203 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c597958db-cclsx"]
Mar 12 18:13:46.822599 master-0 kubenswrapper[7337]: I0312 18:13:46.822470 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht"
Mar 12 18:13:46.824450 master-0 kubenswrapper[7337]: I0312 18:13:46.823306 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c597958db-cclsx"
Mar 12 18:13:46.826414 master-0 kubenswrapper[7337]: I0312 18:13:46.826249 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 12 18:13:46.826659 master-0 kubenswrapper[7337]: I0312 18:13:46.826572 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 12 18:13:46.833121 master-0 kubenswrapper[7337]: I0312 18:13:46.828650 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht"]
Mar 12 18:13:46.833121 master-0 kubenswrapper[7337]: I0312 18:13:46.832619 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 12 18:13:46.833121 master-0 kubenswrapper[7337]: I0312 18:13:46.832853 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 12 18:13:46.833121 master-0 kubenswrapper[7337]: I0312 18:13:46.832913 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 12 18:13:46.833121 master-0 kubenswrapper[7337]: I0312 18:13:46.832957 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 12 18:13:46.834823 master-0 kubenswrapper[7337]: I0312 18:13:46.833324 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 12 18:13:46.834823 master-0 kubenswrapper[7337]: I0312 18:13:46.833782 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 12 18:13:46.834823 master-0 kubenswrapper[7337]: I0312 18:13:46.833897 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 12 18:13:46.834823 master-0 kubenswrapper[7337]: I0312 18:13:46.833985 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 12 18:13:46.834823 master-0 kubenswrapper[7337]: I0312 18:13:46.834134 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c597958db-cclsx"]
Mar 12 18:13:46.838489 master-0 kubenswrapper[7337]: I0312 18:13:46.838257 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 12 18:13:46.942753 master-0 kubenswrapper[7337]: I0312 18:13:46.942692 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/600d037c-0703-43c9-8f01-c8da82b114fd-serving-cert\") pod \"controller-manager-5c597958db-cclsx\" (UID: \"600d037c-0703-43c9-8f01-c8da82b114fd\") " pod="openshift-controller-manager/controller-manager-5c597958db-cclsx"
Mar 12 18:13:46.942980 master-0 kubenswrapper[7337]: I0312 18:13:46.942766 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/600d037c-0703-43c9-8f01-c8da82b114fd-client-ca\") pod \"controller-manager-5c597958db-cclsx\" (UID: \"600d037c-0703-43c9-8f01-c8da82b114fd\") " pod="openshift-controller-manager/controller-manager-5c597958db-cclsx"
Mar 12 18:13:46.942980 master-0 kubenswrapper[7337]: I0312 18:13:46.942795 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21fac822-b1df-42c3-8574-fa86e43d7ea4-config\") pod \"route-controller-manager-5cd7bb8bd9-qb8ht\" (UID: \"21fac822-b1df-42c3-8574-fa86e43d7ea4\") " pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht"
Mar 12 18:13:46.942980 master-0 kubenswrapper[7337]: I0312 18:13:46.942814 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21fac822-b1df-42c3-8574-fa86e43d7ea4-client-ca\") pod \"route-controller-manager-5cd7bb8bd9-qb8ht\" (UID: \"21fac822-b1df-42c3-8574-fa86e43d7ea4\") " pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht"
Mar 12 18:13:46.942980 master-0 kubenswrapper[7337]: I0312 18:13:46.942856 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/600d037c-0703-43c9-8f01-c8da82b114fd-config\") pod \"controller-manager-5c597958db-cclsx\" (UID: \"600d037c-0703-43c9-8f01-c8da82b114fd\") " pod="openshift-controller-manager/controller-manager-5c597958db-cclsx"
Mar 12 18:13:46.942980 master-0 kubenswrapper[7337]: I0312 18:13:46.942878 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21fac822-b1df-42c3-8574-fa86e43d7ea4-serving-cert\") pod \"route-controller-manager-5cd7bb8bd9-qb8ht\" (UID: \"21fac822-b1df-42c3-8574-fa86e43d7ea4\") " pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht"
Mar 12 18:13:46.942980 master-0 kubenswrapper[7337]: I0312 18:13:46.942917 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/600d037c-0703-43c9-8f01-c8da82b114fd-proxy-ca-bundles\") pod \"controller-manager-5c597958db-cclsx\" (UID: \"600d037c-0703-43c9-8f01-c8da82b114fd\") " pod="openshift-controller-manager/controller-manager-5c597958db-cclsx"
Mar 12 18:13:46.942980 master-0 kubenswrapper[7337]: I0312 18:13:46.942951 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gkdn\" (UniqueName: \"kubernetes.io/projected/21fac822-b1df-42c3-8574-fa86e43d7ea4-kube-api-access-2gkdn\") pod \"route-controller-manager-5cd7bb8bd9-qb8ht\" (UID: \"21fac822-b1df-42c3-8574-fa86e43d7ea4\") " pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht"
Mar 12 18:13:46.942980 master-0 kubenswrapper[7337]: I0312 18:13:46.942974 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkrg6\" (UniqueName: \"kubernetes.io/projected/600d037c-0703-43c9-8f01-c8da82b114fd-kube-api-access-xkrg6\") pod \"controller-manager-5c597958db-cclsx\" (UID: \"600d037c-0703-43c9-8f01-c8da82b114fd\") " pod="openshift-controller-manager/controller-manager-5c597958db-cclsx"
Mar 12 18:13:47.047412 master-0 kubenswrapper[7337]: I0312 18:13:47.047314 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/600d037c-0703-43c9-8f01-c8da82b114fd-proxy-ca-bundles\") pod \"controller-manager-5c597958db-cclsx\" (UID: \"600d037c-0703-43c9-8f01-c8da82b114fd\") " pod="openshift-controller-manager/controller-manager-5c597958db-cclsx"
Mar 12 18:13:47.047573 master-0 kubenswrapper[7337]: I0312 18:13:47.047534 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gkdn\" (UniqueName: \"kubernetes.io/projected/21fac822-b1df-42c3-8574-fa86e43d7ea4-kube-api-access-2gkdn\") pod \"route-controller-manager-5cd7bb8bd9-qb8ht\" (UID: \"21fac822-b1df-42c3-8574-fa86e43d7ea4\") " pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht"
Mar 12 18:13:47.047628 master-0 kubenswrapper[7337]: I0312 18:13:47.047612 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkrg6\" (UniqueName: \"kubernetes.io/projected/600d037c-0703-43c9-8f01-c8da82b114fd-kube-api-access-xkrg6\") pod \"controller-manager-5c597958db-cclsx\" (UID: \"600d037c-0703-43c9-8f01-c8da82b114fd\") " pod="openshift-controller-manager/controller-manager-5c597958db-cclsx"
Mar 12 18:13:47.047745 master-0 kubenswrapper[7337]: I0312 18:13:47.047707 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/600d037c-0703-43c9-8f01-c8da82b114fd-serving-cert\") pod \"controller-manager-5c597958db-cclsx\" (UID: \"600d037c-0703-43c9-8f01-c8da82b114fd\") " pod="openshift-controller-manager/controller-manager-5c597958db-cclsx"
Mar 12 18:13:47.047907 master-0 kubenswrapper[7337]: I0312 18:13:47.047872 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/600d037c-0703-43c9-8f01-c8da82b114fd-client-ca\") pod \"controller-manager-5c597958db-cclsx\" (UID: \"600d037c-0703-43c9-8f01-c8da82b114fd\") " pod="openshift-controller-manager/controller-manager-5c597958db-cclsx"
Mar 12 18:13:47.047964 master-0 kubenswrapper[7337]: I0312 18:13:47.047938 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21fac822-b1df-42c3-8574-fa86e43d7ea4-config\") pod \"route-controller-manager-5cd7bb8bd9-qb8ht\" (UID: \"21fac822-b1df-42c3-8574-fa86e43d7ea4\") " pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht"
Mar 12 18:13:47.048020 master-0 kubenswrapper[7337]: I0312 18:13:47.047974 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21fac822-b1df-42c3-8574-fa86e43d7ea4-client-ca\") pod \"route-controller-manager-5cd7bb8bd9-qb8ht\" (UID: \"21fac822-b1df-42c3-8574-fa86e43d7ea4\") " pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht"
Mar 12 18:13:47.049088 master-0 kubenswrapper[7337]: I0312 18:13:47.049045 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/600d037c-0703-43c9-8f01-c8da82b114fd-config\") pod \"controller-manager-5c597958db-cclsx\" (UID: \"600d037c-0703-43c9-8f01-c8da82b114fd\") " pod="openshift-controller-manager/controller-manager-5c597958db-cclsx"
Mar 12 18:13:47.049152 master-0 kubenswrapper[7337]: I0312 18:13:47.049080 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21fac822-b1df-42c3-8574-fa86e43d7ea4-client-ca\") pod \"route-controller-manager-5cd7bb8bd9-qb8ht\" (UID: \"21fac822-b1df-42c3-8574-fa86e43d7ea4\") " pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht"
Mar 12 18:13:47.049152 master-0 kubenswrapper[7337]: I0312 18:13:47.049142 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName:
\"kubernetes.io/configmap/600d037c-0703-43c9-8f01-c8da82b114fd-client-ca\") pod \"controller-manager-5c597958db-cclsx\" (UID: \"600d037c-0703-43c9-8f01-c8da82b114fd\") " pod="openshift-controller-manager/controller-manager-5c597958db-cclsx" Mar 12 18:13:47.049419 master-0 kubenswrapper[7337]: I0312 18:13:47.049156 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21fac822-b1df-42c3-8574-fa86e43d7ea4-serving-cert\") pod \"route-controller-manager-5cd7bb8bd9-qb8ht\" (UID: \"21fac822-b1df-42c3-8574-fa86e43d7ea4\") " pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht" Mar 12 18:13:47.049856 master-0 kubenswrapper[7337]: I0312 18:13:47.049827 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21fac822-b1df-42c3-8574-fa86e43d7ea4-config\") pod \"route-controller-manager-5cd7bb8bd9-qb8ht\" (UID: \"21fac822-b1df-42c3-8574-fa86e43d7ea4\") " pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht" Mar 12 18:13:47.053072 master-0 kubenswrapper[7337]: I0312 18:13:47.053030 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/600d037c-0703-43c9-8f01-c8da82b114fd-config\") pod \"controller-manager-5c597958db-cclsx\" (UID: \"600d037c-0703-43c9-8f01-c8da82b114fd\") " pod="openshift-controller-manager/controller-manager-5c597958db-cclsx" Mar 12 18:13:47.053306 master-0 kubenswrapper[7337]: I0312 18:13:47.053276 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/600d037c-0703-43c9-8f01-c8da82b114fd-proxy-ca-bundles\") pod \"controller-manager-5c597958db-cclsx\" (UID: \"600d037c-0703-43c9-8f01-c8da82b114fd\") " pod="openshift-controller-manager/controller-manager-5c597958db-cclsx" Mar 12 18:13:47.055010 master-0 
kubenswrapper[7337]: I0312 18:13:47.054976 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/600d037c-0703-43c9-8f01-c8da82b114fd-serving-cert\") pod \"controller-manager-5c597958db-cclsx\" (UID: \"600d037c-0703-43c9-8f01-c8da82b114fd\") " pod="openshift-controller-manager/controller-manager-5c597958db-cclsx" Mar 12 18:13:47.055816 master-0 kubenswrapper[7337]: I0312 18:13:47.055595 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21fac822-b1df-42c3-8574-fa86e43d7ea4-serving-cert\") pod \"route-controller-manager-5cd7bb8bd9-qb8ht\" (UID: \"21fac822-b1df-42c3-8574-fa86e43d7ea4\") " pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht" Mar 12 18:13:47.070920 master-0 kubenswrapper[7337]: I0312 18:13:47.070831 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkrg6\" (UniqueName: \"kubernetes.io/projected/600d037c-0703-43c9-8f01-c8da82b114fd-kube-api-access-xkrg6\") pod \"controller-manager-5c597958db-cclsx\" (UID: \"600d037c-0703-43c9-8f01-c8da82b114fd\") " pod="openshift-controller-manager/controller-manager-5c597958db-cclsx" Mar 12 18:13:47.071068 master-0 kubenswrapper[7337]: I0312 18:13:47.071040 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gkdn\" (UniqueName: \"kubernetes.io/projected/21fac822-b1df-42c3-8574-fa86e43d7ea4-kube-api-access-2gkdn\") pod \"route-controller-manager-5cd7bb8bd9-qb8ht\" (UID: \"21fac822-b1df-42c3-8574-fa86e43d7ea4\") " pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht" Mar 12 18:13:47.212109 master-0 kubenswrapper[7337]: I0312 18:13:47.200109 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht" Mar 12 18:13:47.216081 master-0 kubenswrapper[7337]: I0312 18:13:47.216045 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c597958db-cclsx" Mar 12 18:13:47.305756 master-0 kubenswrapper[7337]: I0312 18:13:47.301394 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-5786c989f8-f6jgb"] Mar 12 18:13:47.345675 master-0 kubenswrapper[7337]: I0312 18:13:47.345361 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 12 18:13:47.398532 master-0 kubenswrapper[7337]: I0312 18:13:47.393945 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" event={"ID":"e22c7035-4b7a-48cb-9abb-db277b387842","Type":"ContainerStarted","Data":"a987d23905b82090084aa8d4e8ab172632e1e1833011544d548639c8ff18c467"} Mar 12 18:13:47.456301 master-0 kubenswrapper[7337]: I0312 18:13:47.456218 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" event={"ID":"00755a4e-124c-4a51-b1c5-7c505b3637a8","Type":"ContainerStarted","Data":"c1a8fe3c9ec9293190da1abf5d84165878cc28e2ff9a4187ebb4b5e5ee9ed66b"} Mar 12 18:13:47.477186 master-0 kubenswrapper[7337]: I0312 18:13:47.477134 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" event={"ID":"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed","Type":"ContainerStarted","Data":"bf6736acde3a261d2e1c8eec8be75f38ab871967029a8a7ea9d5bc1635fc75f5"} Mar 12 18:13:47.481255 master-0 kubenswrapper[7337]: I0312 18:13:47.481184 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" 
event={"ID":"d94dc349-c5cb-4f12-8e48-867030af4981","Type":"ContainerStarted","Data":"fd2b6be186aaa869f9c5743426ef2bc5d49bada1c5fa7a307e7f55efa78a7bbf"} Mar 12 18:13:47.524189 master-0 kubenswrapper[7337]: I0312 18:13:47.520547 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht"] Mar 12 18:13:47.680010 master-0 kubenswrapper[7337]: I0312 18:13:47.678184 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-c6qmx"] Mar 12 18:13:47.680010 master-0 kubenswrapper[7337]: I0312 18:13:47.678650 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.714884 master-0 kubenswrapper[7337]: I0312 18:13:47.701020 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c597958db-cclsx"] Mar 12 18:13:47.736279 master-0 kubenswrapper[7337]: I0312 18:13:47.736224 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41644344-9d23-4ed1-9044-4fe65cab6159" path="/var/lib/kubelet/pods/41644344-9d23-4ed1-9044-4fe65cab6159/volumes" Mar 12 18:13:47.736689 master-0 kubenswrapper[7337]: I0312 18:13:47.736620 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="910180ff-efca-4c61-b250-6b0ba7a76089" path="/var/lib/kubelet/pods/910180ff-efca-4c61-b250-6b0ba7a76089/volumes" Mar 12 18:13:47.773619 master-0 kubenswrapper[7337]: I0312 18:13:47.772913 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-sysctl-d\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.773619 master-0 kubenswrapper[7337]: I0312 18:13:47.772957 7337 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-var-lib-kubelet\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.773619 master-0 kubenswrapper[7337]: I0312 18:13:47.772999 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-systemd\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.773619 master-0 kubenswrapper[7337]: I0312 18:13:47.773015 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-sys\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.773619 master-0 kubenswrapper[7337]: I0312 18:13:47.773033 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-kubernetes\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.773619 master-0 kubenswrapper[7337]: I0312 18:13:47.773046 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-tuned\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.773619 master-0 
kubenswrapper[7337]: I0312 18:13:47.773060 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-host\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.773619 master-0 kubenswrapper[7337]: I0312 18:13:47.773099 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lj7z\" (UniqueName: \"kubernetes.io/projected/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-kube-api-access-2lj7z\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.773619 master-0 kubenswrapper[7337]: I0312 18:13:47.773124 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-lib-modules\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.773619 master-0 kubenswrapper[7337]: I0312 18:13:47.773149 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-sysconfig\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.773619 master-0 kubenswrapper[7337]: I0312 18:13:47.773166 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-tmp\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " 
pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.773619 master-0 kubenswrapper[7337]: I0312 18:13:47.773181 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-sysctl-conf\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.773619 master-0 kubenswrapper[7337]: I0312 18:13:47.773200 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-run\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.773619 master-0 kubenswrapper[7337]: I0312 18:13:47.773232 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-modprobe-d\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.875231 master-0 kubenswrapper[7337]: I0312 18:13:47.874883 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-sysconfig\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.875402 master-0 kubenswrapper[7337]: I0312 18:13:47.875245 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-tmp\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " 
pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.875402 master-0 kubenswrapper[7337]: I0312 18:13:47.875277 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-sysctl-conf\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.875402 master-0 kubenswrapper[7337]: I0312 18:13:47.875299 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-run\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.875402 master-0 kubenswrapper[7337]: I0312 18:13:47.875334 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-modprobe-d\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.875402 master-0 kubenswrapper[7337]: I0312 18:13:47.875355 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-sysctl-d\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.875402 master-0 kubenswrapper[7337]: I0312 18:13:47.875381 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-var-lib-kubelet\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " 
pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.875670 master-0 kubenswrapper[7337]: I0312 18:13:47.875424 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-systemd\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.875670 master-0 kubenswrapper[7337]: I0312 18:13:47.875445 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-sys\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.875670 master-0 kubenswrapper[7337]: I0312 18:13:47.875468 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-kubernetes\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.875670 master-0 kubenswrapper[7337]: I0312 18:13:47.875490 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-tuned\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.875670 master-0 kubenswrapper[7337]: I0312 18:13:47.875532 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-host\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 
18:13:47.875670 master-0 kubenswrapper[7337]: I0312 18:13:47.875585 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lj7z\" (UniqueName: \"kubernetes.io/projected/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-kube-api-access-2lj7z\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.875670 master-0 kubenswrapper[7337]: I0312 18:13:47.875617 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-lib-modules\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.875851 master-0 kubenswrapper[7337]: I0312 18:13:47.875073 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-sysconfig\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.876052 master-0 kubenswrapper[7337]: I0312 18:13:47.876027 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-lib-modules\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.876229 master-0 kubenswrapper[7337]: I0312 18:13:47.876204 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-sysctl-conf\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.876283 master-0 
kubenswrapper[7337]: I0312 18:13:47.876264 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-sys\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.876346 master-0 kubenswrapper[7337]: I0312 18:13:47.876319 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-run\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.876385 master-0 kubenswrapper[7337]: I0312 18:13:47.876349 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-modprobe-d\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.876416 master-0 kubenswrapper[7337]: I0312 18:13:47.876397 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-sysctl-d\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.876879 master-0 kubenswrapper[7337]: I0312 18:13:47.876774 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-var-lib-kubelet\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.876879 master-0 kubenswrapper[7337]: I0312 18:13:47.876805 7337 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-host\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.876879 master-0 kubenswrapper[7337]: I0312 18:13:47.876403 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-kubernetes\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.876879 master-0 kubenswrapper[7337]: I0312 18:13:47.876819 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-systemd\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.882050 master-0 kubenswrapper[7337]: I0312 18:13:47.881999 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-tmp\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.885559 master-0 kubenswrapper[7337]: I0312 18:13:47.882749 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-tuned\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:47.896840 master-0 kubenswrapper[7337]: I0312 18:13:47.896797 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lj7z\" (UniqueName: 
\"kubernetes.io/projected/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-kube-api-access-2lj7z\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:48.019732 master-0 kubenswrapper[7337]: I0312 18:13:48.019672 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:13:48.033184 master-0 kubenswrapper[7337]: W0312 18:13:48.033140 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod999f02f6_e9b8_4d4b_ac35_b8b43a931cfc.slice/crio-277d9668436832bc9b237cfda1e62f8877471ab27e58b9e701eaab60ab4f06e2 WatchSource:0}: Error finding container 277d9668436832bc9b237cfda1e62f8877471ab27e58b9e701eaab60ab4f06e2: Status 404 returned error can't find the container with id 277d9668436832bc9b237cfda1e62f8877471ab27e58b9e701eaab60ab4f06e2 Mar 12 18:13:48.269538 master-0 kubenswrapper[7337]: I0312 18:13:48.264140 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-6h5tt"] Mar 12 18:13:48.269538 master-0 kubenswrapper[7337]: I0312 18:13:48.265055 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-6h5tt" Mar 12 18:13:48.269538 master-0 kubenswrapper[7337]: I0312 18:13:48.269059 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 12 18:13:48.269538 master-0 kubenswrapper[7337]: I0312 18:13:48.269192 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 12 18:13:48.269538 master-0 kubenswrapper[7337]: I0312 18:13:48.269304 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 12 18:13:48.269538 master-0 kubenswrapper[7337]: I0312 18:13:48.269400 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 12 18:13:48.275424 master-0 kubenswrapper[7337]: I0312 18:13:48.275384 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-6h5tt"] Mar 12 18:13:48.393966 master-0 kubenswrapper[7337]: I0312 18:13:48.392033 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/266b9f4f-3fb4-474d-84df-0a6c687c7e9a-config-volume\") pod \"dns-default-6h5tt\" (UID: \"266b9f4f-3fb4-474d-84df-0a6c687c7e9a\") " pod="openshift-dns/dns-default-6h5tt" Mar 12 18:13:48.393966 master-0 kubenswrapper[7337]: I0312 18:13:48.392148 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/266b9f4f-3fb4-474d-84df-0a6c687c7e9a-metrics-tls\") pod \"dns-default-6h5tt\" (UID: \"266b9f4f-3fb4-474d-84df-0a6c687c7e9a\") " pod="openshift-dns/dns-default-6h5tt" Mar 12 18:13:48.393966 master-0 kubenswrapper[7337]: I0312 18:13:48.392200 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tmqs\" (UniqueName: 
\"kubernetes.io/projected/266b9f4f-3fb4-474d-84df-0a6c687c7e9a-kube-api-access-6tmqs\") pod \"dns-default-6h5tt\" (UID: \"266b9f4f-3fb4-474d-84df-0a6c687c7e9a\") " pod="openshift-dns/dns-default-6h5tt" Mar 12 18:13:48.496552 master-0 kubenswrapper[7337]: I0312 18:13:48.493115 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c597958db-cclsx" event={"ID":"600d037c-0703-43c9-8f01-c8da82b114fd","Type":"ContainerStarted","Data":"93a66548bb42611a70333af013dbc6db10084f9e626148e3dce0bdc571c9c53a"} Mar 12 18:13:48.496552 master-0 kubenswrapper[7337]: I0312 18:13:48.496030 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/266b9f4f-3fb4-474d-84df-0a6c687c7e9a-metrics-tls\") pod \"dns-default-6h5tt\" (UID: \"266b9f4f-3fb4-474d-84df-0a6c687c7e9a\") " pod="openshift-dns/dns-default-6h5tt" Mar 12 18:13:48.496552 master-0 kubenswrapper[7337]: I0312 18:13:48.496091 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tmqs\" (UniqueName: \"kubernetes.io/projected/266b9f4f-3fb4-474d-84df-0a6c687c7e9a-kube-api-access-6tmqs\") pod \"dns-default-6h5tt\" (UID: \"266b9f4f-3fb4-474d-84df-0a6c687c7e9a\") " pod="openshift-dns/dns-default-6h5tt" Mar 12 18:13:48.496803 master-0 kubenswrapper[7337]: I0312 18:13:48.496676 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/266b9f4f-3fb4-474d-84df-0a6c687c7e9a-config-volume\") pod \"dns-default-6h5tt\" (UID: \"266b9f4f-3fb4-474d-84df-0a6c687c7e9a\") " pod="openshift-dns/dns-default-6h5tt" Mar 12 18:13:48.497779 master-0 kubenswrapper[7337]: I0312 18:13:48.497742 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/266b9f4f-3fb4-474d-84df-0a6c687c7e9a-config-volume\") pod \"dns-default-6h5tt\" (UID: 
\"266b9f4f-3fb4-474d-84df-0a6c687c7e9a\") " pod="openshift-dns/dns-default-6h5tt" Mar 12 18:13:48.510546 master-0 kubenswrapper[7337]: I0312 18:13:48.510221 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/266b9f4f-3fb4-474d-84df-0a6c687c7e9a-metrics-tls\") pod \"dns-default-6h5tt\" (UID: \"266b9f4f-3fb4-474d-84df-0a6c687c7e9a\") " pod="openshift-dns/dns-default-6h5tt" Mar 12 18:13:48.510905 master-0 kubenswrapper[7337]: I0312 18:13:48.510836 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k" event={"ID":"8ad05507-e242-4ff8-ae80-c16ff9ee68e2","Type":"ContainerStarted","Data":"5a1b945c7a19f636288dcb34f57a4087135c4cac370a4ad03640b0dbf1bb3bcb"} Mar 12 18:13:48.510905 master-0 kubenswrapper[7337]: I0312 18:13:48.510899 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k" event={"ID":"8ad05507-e242-4ff8-ae80-c16ff9ee68e2","Type":"ContainerStarted","Data":"a4e0780fd8aed0626dbd83306c6bfdc12c28851e9b8963d662204f8d4d49ccd5"} Mar 12 18:13:48.523288 master-0 kubenswrapper[7337]: I0312 18:13:48.523253 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tmqs\" (UniqueName: \"kubernetes.io/projected/266b9f4f-3fb4-474d-84df-0a6c687c7e9a-kube-api-access-6tmqs\") pod \"dns-default-6h5tt\" (UID: \"266b9f4f-3fb4-474d-84df-0a6c687c7e9a\") " pod="openshift-dns/dns-default-6h5tt" Mar 12 18:13:48.529589 master-0 kubenswrapper[7337]: I0312 18:13:48.529466 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" event={"ID":"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc","Type":"ContainerStarted","Data":"282ff37511f6ae55956370c5027885ad234d12602aa94ef6f9bbdd934008e550"} Mar 12 18:13:48.529722 master-0 kubenswrapper[7337]: I0312 18:13:48.529617 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" event={"ID":"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc","Type":"ContainerStarted","Data":"277d9668436832bc9b237cfda1e62f8877471ab27e58b9e701eaab60ab4f06e2"} Mar 12 18:13:48.535459 master-0 kubenswrapper[7337]: I0312 18:13:48.535373 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" event={"ID":"d94dc349-c5cb-4f12-8e48-867030af4981","Type":"ContainerStarted","Data":"0ad3015222a8f16d127aab6f080af73c1d0665020a57f9fc4f03e7582fcac536"} Mar 12 18:13:48.546301 master-0 kubenswrapper[7337]: I0312 18:13:48.546238 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht" event={"ID":"21fac822-b1df-42c3-8574-fa86e43d7ea4","Type":"ContainerStarted","Data":"4b3321d88006202c98d8ca0ae93d8f9d68729565e97acf051999a3a546d3319c"} Mar 12 18:13:48.554636 master-0 kubenswrapper[7337]: I0312 18:13:48.554414 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"e418d797-2c31-404b-9dc3-251399e42542","Type":"ContainerStarted","Data":"6b7528f0c5da1778fadc0415752a37a2983c5adfa27ce67313a93246b6745480"} Mar 12 18:13:48.554636 master-0 kubenswrapper[7337]: I0312 18:13:48.554462 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"e418d797-2c31-404b-9dc3-251399e42542","Type":"ContainerStarted","Data":"e39d12d9165077c1566bf86fdaa9d42c6abb87768cbd70c00423b7ab08d3f0d6"} Mar 12 18:13:48.565426 master-0 kubenswrapper[7337]: I0312 18:13:48.563960 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" event={"ID":"9b41258c-ac1d-4e00-ac5e-732d85441f12","Type":"ContainerStarted","Data":"e11e6ab5433862f323f8ba8f5b3beee99fbf9268c9b94118367fbf5cbb898018"} Mar 12 18:13:48.579721 master-0 kubenswrapper[7337]: I0312 18:13:48.579596 7337 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" podStartSLOduration=1.5795742339999999 podStartE2EDuration="1.579574234s" podCreationTimestamp="2026-03-12 18:13:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:13:48.576970628 +0000 UTC m=+29.045571595" watchObservedRunningTime="2026-03-12 18:13:48.579574234 +0000 UTC m=+29.048175191" Mar 12 18:13:48.618829 master-0 kubenswrapper[7337]: I0312 18:13:48.618417 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=5.618389656 podStartE2EDuration="5.618389656s" podCreationTimestamp="2026-03-12 18:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:13:48.615668497 +0000 UTC m=+29.084269444" watchObservedRunningTime="2026-03-12 18:13:48.618389656 +0000 UTC m=+29.086990603" Mar 12 18:13:48.628401 master-0 kubenswrapper[7337]: I0312 18:13:48.628342 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-6h5tt" Mar 12 18:13:48.647427 master-0 kubenswrapper[7337]: I0312 18:13:48.647393 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-7lzgx"] Mar 12 18:13:48.648188 master-0 kubenswrapper[7337]: I0312 18:13:48.648164 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-7lzgx" Mar 12 18:13:48.811569 master-0 kubenswrapper[7337]: I0312 18:13:48.810155 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbzcs\" (UniqueName: \"kubernetes.io/projected/9717d467-af1a-4de0-88e0-c47ec4d12d6e-kube-api-access-kbzcs\") pod \"node-resolver-7lzgx\" (UID: \"9717d467-af1a-4de0-88e0-c47ec4d12d6e\") " pod="openshift-dns/node-resolver-7lzgx" Mar 12 18:13:48.811569 master-0 kubenswrapper[7337]: I0312 18:13:48.810211 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9717d467-af1a-4de0-88e0-c47ec4d12d6e-hosts-file\") pod \"node-resolver-7lzgx\" (UID: \"9717d467-af1a-4de0-88e0-c47ec4d12d6e\") " pod="openshift-dns/node-resolver-7lzgx" Mar 12 18:13:48.844021 master-0 kubenswrapper[7337]: I0312 18:13:48.843976 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-6h5tt"] Mar 12 18:13:48.857263 master-0 kubenswrapper[7337]: W0312 18:13:48.857211 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod266b9f4f_3fb4_474d_84df_0a6c687c7e9a.slice/crio-467832511abdd120edabe55a66306c8828fbfde7aa084b7647ffcfdeb1475b2c WatchSource:0}: Error finding container 467832511abdd120edabe55a66306c8828fbfde7aa084b7647ffcfdeb1475b2c: Status 404 returned error can't find the container with id 467832511abdd120edabe55a66306c8828fbfde7aa084b7647ffcfdeb1475b2c Mar 12 18:13:48.911446 master-0 kubenswrapper[7337]: I0312 18:13:48.911047 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9717d467-af1a-4de0-88e0-c47ec4d12d6e-hosts-file\") pod \"node-resolver-7lzgx\" (UID: \"9717d467-af1a-4de0-88e0-c47ec4d12d6e\") " pod="openshift-dns/node-resolver-7lzgx" Mar 12 
18:13:48.911446 master-0 kubenswrapper[7337]: I0312 18:13:48.911428 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbzcs\" (UniqueName: \"kubernetes.io/projected/9717d467-af1a-4de0-88e0-c47ec4d12d6e-kube-api-access-kbzcs\") pod \"node-resolver-7lzgx\" (UID: \"9717d467-af1a-4de0-88e0-c47ec4d12d6e\") " pod="openshift-dns/node-resolver-7lzgx" Mar 12 18:13:48.911850 master-0 kubenswrapper[7337]: I0312 18:13:48.911828 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9717d467-af1a-4de0-88e0-c47ec4d12d6e-hosts-file\") pod \"node-resolver-7lzgx\" (UID: \"9717d467-af1a-4de0-88e0-c47ec4d12d6e\") " pod="openshift-dns/node-resolver-7lzgx" Mar 12 18:13:48.930189 master-0 kubenswrapper[7337]: I0312 18:13:48.930134 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbzcs\" (UniqueName: \"kubernetes.io/projected/9717d467-af1a-4de0-88e0-c47ec4d12d6e-kube-api-access-kbzcs\") pod \"node-resolver-7lzgx\" (UID: \"9717d467-af1a-4de0-88e0-c47ec4d12d6e\") " pod="openshift-dns/node-resolver-7lzgx" Mar 12 18:13:48.997436 master-0 kubenswrapper[7337]: I0312 18:13:48.997365 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-7lzgx" Mar 12 18:13:49.013852 master-0 kubenswrapper[7337]: W0312 18:13:49.013798 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9717d467_af1a_4de0_88e0_c47ec4d12d6e.slice/crio-1cd179b2f7f0fe564fa7a9477bf555cfaf8b89c4a460b563f7a642b74759364e WatchSource:0}: Error finding container 1cd179b2f7f0fe564fa7a9477bf555cfaf8b89c4a460b563f7a642b74759364e: Status 404 returned error can't find the container with id 1cd179b2f7f0fe564fa7a9477bf555cfaf8b89c4a460b563f7a642b74759364e Mar 12 18:13:49.568379 master-0 kubenswrapper[7337]: I0312 18:13:49.567770 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6h5tt" event={"ID":"266b9f4f-3fb4-474d-84df-0a6c687c7e9a","Type":"ContainerStarted","Data":"467832511abdd120edabe55a66306c8828fbfde7aa084b7647ffcfdeb1475b2c"} Mar 12 18:13:49.570850 master-0 kubenswrapper[7337]: I0312 18:13:49.570749 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7lzgx" event={"ID":"9717d467-af1a-4de0-88e0-c47ec4d12d6e","Type":"ContainerStarted","Data":"92696cd87b70833bebaff90a8fe006e45d9a3075e554c321d430a4f95477de8d"} Mar 12 18:13:49.570850 master-0 kubenswrapper[7337]: I0312 18:13:49.570798 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7lzgx" event={"ID":"9717d467-af1a-4de0-88e0-c47ec4d12d6e","Type":"ContainerStarted","Data":"1cd179b2f7f0fe564fa7a9477bf555cfaf8b89c4a460b563f7a642b74759364e"} Mar 12 18:13:49.651055 master-0 kubenswrapper[7337]: I0312 18:13:49.650986 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-7lzgx" podStartSLOduration=1.65097078 podStartE2EDuration="1.65097078s" podCreationTimestamp="2026-03-12 18:13:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-12 18:13:49.650310483 +0000 UTC m=+30.118911430" watchObservedRunningTime="2026-03-12 18:13:49.65097078 +0000 UTC m=+30.119571727" Mar 12 18:13:50.316541 master-0 kubenswrapper[7337]: I0312 18:13:50.313950 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc"] Mar 12 18:13:50.316541 master-0 kubenswrapper[7337]: I0312 18:13:50.314904 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:13:50.320558 master-0 kubenswrapper[7337]: I0312 18:13:50.318860 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 12 18:13:50.323296 master-0 kubenswrapper[7337]: I0312 18:13:50.323255 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 12 18:13:50.331091 master-0 kubenswrapper[7337]: I0312 18:13:50.326917 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 12 18:13:50.338665 master-0 kubenswrapper[7337]: I0312 18:13:50.338505 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 12 18:13:50.345544 master-0 kubenswrapper[7337]: I0312 18:13:50.341301 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc"] Mar 12 18:13:50.430786 master-0 kubenswrapper[7337]: I0312 18:13:50.430492 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn"] Mar 12 18:13:50.431740 master-0 kubenswrapper[7337]: I0312 18:13:50.431716 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:13:50.432170 master-0 kubenswrapper[7337]: I0312 18:13:50.432143 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d1b3859c-20a1-4a1c-8508-86ed843768f5-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:13:50.432229 master-0 kubenswrapper[7337]: I0312 18:13:50.432207 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw4m5\" (UniqueName: \"kubernetes.io/projected/d1b3859c-20a1-4a1c-8508-86ed843768f5-kube-api-access-gw4m5\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:13:50.432289 master-0 kubenswrapper[7337]: I0312 18:13:50.432265 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d1b3859c-20a1-4a1c-8508-86ed843768f5-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:13:50.432343 master-0 kubenswrapper[7337]: I0312 18:13:50.432325 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/d1b3859c-20a1-4a1c-8508-86ed843768f5-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:13:50.432422 
master-0 kubenswrapper[7337]: I0312 18:13:50.432399 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d1b3859c-20a1-4a1c-8508-86ed843768f5-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:13:50.432459 master-0 kubenswrapper[7337]: I0312 18:13:50.432442 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d1b3859c-20a1-4a1c-8508-86ed843768f5-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:13:50.434286 master-0 kubenswrapper[7337]: I0312 18:13:50.434250 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 12 18:13:50.434425 master-0 kubenswrapper[7337]: I0312 18:13:50.434278 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 12 18:13:50.434500 master-0 kubenswrapper[7337]: I0312 18:13:50.434475 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 12 18:13:50.462301 master-0 kubenswrapper[7337]: I0312 18:13:50.462260 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn"] Mar 12 18:13:50.534102 master-0 kubenswrapper[7337]: I0312 18:13:50.534050 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d1b3859c-20a1-4a1c-8508-86ed843768f5-etc-containers\") pod 
\"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:13:50.534102 master-0 kubenswrapper[7337]: I0312 18:13:50.534111 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/d1b3859c-20a1-4a1c-8508-86ed843768f5-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:13:50.534340 master-0 kubenswrapper[7337]: I0312 18:13:50.534137 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d1b3859c-20a1-4a1c-8508-86ed843768f5-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:13:50.534340 master-0 kubenswrapper[7337]: I0312 18:13:50.534221 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d1b3859c-20a1-4a1c-8508-86ed843768f5-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:13:50.534429 master-0 kubenswrapper[7337]: I0312 18:13:50.534328 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:13:50.534429 master-0 
kubenswrapper[7337]: I0312 18:13:50.534414 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d1b3859c-20a1-4a1c-8508-86ed843768f5-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:13:50.534502 master-0 kubenswrapper[7337]: I0312 18:13:50.534469 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:13:50.534502 master-0 kubenswrapper[7337]: I0312 18:13:50.534493 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k59mb\" (UniqueName: \"kubernetes.io/projected/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-kube-api-access-k59mb\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:13:50.534604 master-0 kubenswrapper[7337]: I0312 18:13:50.534583 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw4m5\" (UniqueName: \"kubernetes.io/projected/d1b3859c-20a1-4a1c-8508-86ed843768f5-kube-api-access-gw4m5\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:13:50.534657 master-0 kubenswrapper[7337]: I0312 18:13:50.534644 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d1b3859c-20a1-4a1c-8508-86ed843768f5-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:13:50.534703 master-0 kubenswrapper[7337]: I0312 18:13:50.534691 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:13:50.534747 master-0 kubenswrapper[7337]: I0312 18:13:50.534729 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:13:50.535132 master-0 kubenswrapper[7337]: I0312 18:13:50.535099 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d1b3859c-20a1-4a1c-8508-86ed843768f5-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:13:50.535678 master-0 kubenswrapper[7337]: I0312 18:13:50.535648 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d1b3859c-20a1-4a1c-8508-86ed843768f5-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: 
\"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:13:50.538720 master-0 kubenswrapper[7337]: I0312 18:13:50.538683 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d1b3859c-20a1-4a1c-8508-86ed843768f5-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:13:50.540798 master-0 kubenswrapper[7337]: I0312 18:13:50.540716 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/d1b3859c-20a1-4a1c-8508-86ed843768f5-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:13:50.597250 master-0 kubenswrapper[7337]: I0312 18:13:50.597166 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw4m5\" (UniqueName: \"kubernetes.io/projected/d1b3859c-20a1-4a1c-8508-86ed843768f5-kube-api-access-gw4m5\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:13:50.636138 master-0 kubenswrapper[7337]: I0312 18:13:50.636046 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:13:50.636138 master-0 kubenswrapper[7337]: I0312 18:13:50.636109 7337 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:13:50.636357 master-0 kubenswrapper[7337]: I0312 18:13:50.636258 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:13:50.636357 master-0 kubenswrapper[7337]: I0312 18:13:50.636300 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:13:50.636451 master-0 kubenswrapper[7337]: I0312 18:13:50.636374 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k59mb\" (UniqueName: \"kubernetes.io/projected/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-kube-api-access-k59mb\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:13:50.636757 master-0 kubenswrapper[7337]: I0312 18:13:50.636681 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: 
\"kubernetes.io/host-path/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:13:50.637782 master-0 kubenswrapper[7337]: I0312 18:13:50.637295 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:13:50.637782 master-0 kubenswrapper[7337]: E0312 18:13:50.637363 7337 projected.go:301] Couldn't get configMap payload openshift-operator-controller/operator-controller-trusted-ca-bundle: configmap references non-existent config key: ca-bundle.crt Mar 12 18:13:50.637782 master-0 kubenswrapper[7337]: E0312 18:13:50.637381 7337 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn: configmap references non-existent config key: ca-bundle.crt Mar 12 18:13:50.637782 master-0 kubenswrapper[7337]: E0312 18:13:50.637423 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-ca-certs podName:b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652 nodeName:}" failed. No retries permitted until 2026-03-12 18:13:51.137407585 +0000 UTC m=+31.606008532 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-ca-certs") pod "operator-controller-controller-manager-6598bfb6c4-9nzsn" (UID: "b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652") : configmap references non-existent config key: ca-bundle.crt Mar 12 18:13:50.637782 master-0 kubenswrapper[7337]: I0312 18:13:50.637619 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:13:50.665266 master-0 kubenswrapper[7337]: I0312 18:13:50.665199 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:13:50.702655 master-0 kubenswrapper[7337]: I0312 18:13:50.702608 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k59mb\" (UniqueName: \"kubernetes.io/projected/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-kube-api-access-k59mb\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:13:51.143319 master-0 kubenswrapper[7337]: I0312 18:13:51.143268 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:13:51.155079 master-0 kubenswrapper[7337]: 
I0312 18:13:51.155014 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:13:51.198593 master-0 kubenswrapper[7337]: I0312 18:13:51.198524 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 12 18:13:51.198778 master-0 kubenswrapper[7337]: I0312 18:13:51.198735 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="56a4489e-252b-44a7-8310-3b699b2af7d6" containerName="installer" containerID="cri-o://53ddad5a8c9cbca89afea7f839486d18707671bc74821fb0560b014cdd65817f" gracePeriod=30 Mar 12 18:13:51.351787 master-0 kubenswrapper[7337]: I0312 18:13:51.351651 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:13:52.566210 master-0 kubenswrapper[7337]: I0312 18:13:52.566141 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:13:52.566709 master-0 kubenswrapper[7337]: I0312 18:13:52.566237 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:13:52.566709 master-0 kubenswrapper[7337]: I0312 18:13:52.566293 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:13:52.566709 master-0 kubenswrapper[7337]: I0312 18:13:52.566319 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert\") pod \"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" Mar 12 18:13:52.566709 master-0 kubenswrapper[7337]: I0312 18:13:52.566380 7337 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:13:52.569652 master-0 kubenswrapper[7337]: I0312 18:13:52.569620 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:13:52.569731 master-0 kubenswrapper[7337]: I0312 18:13:52.569704 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:13:52.570141 master-0 kubenswrapper[7337]: I0312 18:13:52.570101 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:13:52.570868 master-0 kubenswrapper[7337]: I0312 18:13:52.570830 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert\") pod 
\"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" Mar 12 18:13:52.583119 master-0 kubenswrapper[7337]: I0312 18:13:52.583046 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:13:52.668004 master-0 kubenswrapper[7337]: I0312 18:13:52.667935 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs\") pod \"multus-admission-controller-8d675b596-kcpg5\" (UID: \"875bdfaa-b0a4-4412-a477-c962844e7057\") " pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" Mar 12 18:13:52.668004 master-0 kubenswrapper[7337]: I0312 18:13:52.667991 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:13:52.670577 master-0 kubenswrapper[7337]: I0312 18:13:52.670544 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs\") pod \"multus-admission-controller-8d675b596-kcpg5\" (UID: \"875bdfaa-b0a4-4412-a477-c962844e7057\") " pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" Mar 12 18:13:52.670752 master-0 kubenswrapper[7337]: I0312 18:13:52.670727 7337 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:13:52.768495 master-0 kubenswrapper[7337]: I0312 18:13:52.768441 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:13:52.768694 master-0 kubenswrapper[7337]: I0312 18:13:52.768602 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" Mar 12 18:13:52.768694 master-0 kubenswrapper[7337]: I0312 18:13:52.768637 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:13:52.774526 master-0 kubenswrapper[7337]: I0312 18:13:52.774482 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:13:52.774938 master-0 kubenswrapper[7337]: I0312 18:13:52.774890 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:13:52.782828 master-0 kubenswrapper[7337]: I0312 18:13:52.782794 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:13:52.782898 master-0 kubenswrapper[7337]: I0312 18:13:52.782844 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" Mar 12 18:13:53.794116 master-0 kubenswrapper[7337]: I0312 18:13:53.794076 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 12 18:13:53.794738 master-0 kubenswrapper[7337]: I0312 18:13:53.794564 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 18:13:53.804206 master-0 kubenswrapper[7337]: I0312 18:13:53.804145 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 12 18:13:53.988155 master-0 kubenswrapper[7337]: I0312 18:13:53.987461 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56345785-8643-4f59-ab92-d4eb40d25312-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"56345785-8643-4f59-ab92-d4eb40d25312\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 18:13:53.988155 master-0 kubenswrapper[7337]: I0312 18:13:53.987541 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56345785-8643-4f59-ab92-d4eb40d25312-kube-api-access\") pod \"installer-2-master-0\" (UID: \"56345785-8643-4f59-ab92-d4eb40d25312\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 18:13:53.988155 master-0 kubenswrapper[7337]: I0312 18:13:53.987579 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56345785-8643-4f59-ab92-d4eb40d25312-var-lock\") pod \"installer-2-master-0\" (UID: \"56345785-8643-4f59-ab92-d4eb40d25312\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 18:13:54.142908 master-0 kubenswrapper[7337]: I0312 18:13:54.088388 7337 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56345785-8643-4f59-ab92-d4eb40d25312-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"56345785-8643-4f59-ab92-d4eb40d25312\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 18:13:54.142908 master-0 kubenswrapper[7337]: I0312 18:13:54.088443 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56345785-8643-4f59-ab92-d4eb40d25312-kube-api-access\") pod \"installer-2-master-0\" (UID: \"56345785-8643-4f59-ab92-d4eb40d25312\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 18:13:54.142908 master-0 kubenswrapper[7337]: I0312 18:13:54.088468 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56345785-8643-4f59-ab92-d4eb40d25312-var-lock\") pod \"installer-2-master-0\" (UID: \"56345785-8643-4f59-ab92-d4eb40d25312\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 18:13:54.142908 master-0 kubenswrapper[7337]: I0312 18:13:54.088743 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56345785-8643-4f59-ab92-d4eb40d25312-var-lock\") pod \"installer-2-master-0\" (UID: \"56345785-8643-4f59-ab92-d4eb40d25312\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 18:13:54.142908 master-0 kubenswrapper[7337]: I0312 18:13:54.088777 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56345785-8643-4f59-ab92-d4eb40d25312-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"56345785-8643-4f59-ab92-d4eb40d25312\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 18:13:54.209270 master-0 kubenswrapper[7337]: I0312 18:13:54.200137 7337 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56345785-8643-4f59-ab92-d4eb40d25312-kube-api-access\") pod \"installer-2-master-0\" (UID: \"56345785-8643-4f59-ab92-d4eb40d25312\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 18:13:54.284798 master-0 kubenswrapper[7337]: I0312 18:13:54.284759 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc"] Mar 12 18:13:54.423530 master-0 kubenswrapper[7337]: I0312 18:13:54.423473 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 18:13:54.621002 master-0 kubenswrapper[7337]: I0312 18:13:54.620960 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c597958db-cclsx" event={"ID":"600d037c-0703-43c9-8f01-c8da82b114fd","Type":"ContainerStarted","Data":"06ed6eb88ed1f644751ea6fbc8ab9068fa3790ca5e4deea484dd8d55fa290007"} Mar 12 18:13:54.626798 master-0 kubenswrapper[7337]: I0312 18:13:54.621859 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c597958db-cclsx" Mar 12 18:13:54.627460 master-0 kubenswrapper[7337]: I0312 18:13:54.627425 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c597958db-cclsx" Mar 12 18:13:54.631417 master-0 kubenswrapper[7337]: I0312 18:13:54.631386 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" event={"ID":"d1b3859c-20a1-4a1c-8508-86ed843768f5","Type":"ContainerStarted","Data":"025f6ef7726027b226244a49b1b7aa7b4b726a6a64b08241b8944ae1790681b8"} Mar 12 18:13:54.639637 master-0 kubenswrapper[7337]: I0312 18:13:54.637221 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6h5tt" 
event={"ID":"266b9f4f-3fb4-474d-84df-0a6c687c7e9a","Type":"ContainerStarted","Data":"aaaf976af8fce500fc5ecd742101b71dceddd3d6f502d25363de35be61a7635c"} Mar 12 18:13:54.648830 master-0 kubenswrapper[7337]: I0312 18:13:54.648114 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht" event={"ID":"21fac822-b1df-42c3-8574-fa86e43d7ea4","Type":"ContainerStarted","Data":"257ce4c9ab5ca88cc44891cc3278fd9efdca93406f4d5d0347483fa67ee4dda8"} Mar 12 18:13:54.648830 master-0 kubenswrapper[7337]: I0312 18:13:54.648524 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht" Mar 12 18:13:54.652020 master-0 kubenswrapper[7337]: I0312 18:13:54.651971 7337 generic.go:334] "Generic (PLEG): container finished" podID="9b41258c-ac1d-4e00-ac5e-732d85441f12" containerID="3afc57dd06460be0cc0e28f1088f020bc3b1fb80b27fe8bfdb49d253e732e561" exitCode=0 Mar 12 18:13:54.652096 master-0 kubenswrapper[7337]: I0312 18:13:54.652002 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" event={"ID":"9b41258c-ac1d-4e00-ac5e-732d85441f12","Type":"ContainerDied","Data":"3afc57dd06460be0cc0e28f1088f020bc3b1fb80b27fe8bfdb49d253e732e561"} Mar 12 18:13:54.690907 master-0 kubenswrapper[7337]: I0312 18:13:54.682918 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c597958db-cclsx" podStartSLOduration=4.4289079000000005 podStartE2EDuration="10.682898441s" podCreationTimestamp="2026-03-12 18:13:44 +0000 UTC" firstStartedPulling="2026-03-12 18:13:47.718834739 +0000 UTC m=+28.187435686" lastFinishedPulling="2026-03-12 18:13:53.97282529 +0000 UTC m=+34.441426227" observedRunningTime="2026-03-12 18:13:54.647892385 +0000 UTC m=+35.116493332" watchObservedRunningTime="2026-03-12 18:13:54.682898441 +0000 UTC 
m=+35.151499388" Mar 12 18:13:54.764890 master-0 kubenswrapper[7337]: I0312 18:13:54.764425 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht" Mar 12 18:13:54.791333 master-0 kubenswrapper[7337]: I0312 18:13:54.789004 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht" podStartSLOduration=4.427942874 podStartE2EDuration="10.788983425s" podCreationTimestamp="2026-03-12 18:13:44 +0000 UTC" firstStartedPulling="2026-03-12 18:13:47.573790228 +0000 UTC m=+28.042391175" lastFinishedPulling="2026-03-12 18:13:53.934830789 +0000 UTC m=+34.403431726" observedRunningTime="2026-03-12 18:13:54.775006932 +0000 UTC m=+35.243607889" watchObservedRunningTime="2026-03-12 18:13:54.788983425 +0000 UTC m=+35.257584372" Mar 12 18:13:54.896264 master-0 kubenswrapper[7337]: I0312 18:13:54.889048 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn"] Mar 12 18:13:54.913575 master-0 kubenswrapper[7337]: I0312 18:13:54.908278 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r"] Mar 12 18:13:54.913929 master-0 kubenswrapper[7337]: I0312 18:13:54.913791 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-z4sc9"] Mar 12 18:13:54.916610 master-0 kubenswrapper[7337]: W0312 18:13:54.915272 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1cf6cb1_8b20_4bc2_a474_52d6d7cc3652.slice/crio-88d123f937ec5c733d7e95dda1a51126ac31987054c509b5e60506575b947b18 WatchSource:0}: Error finding container 88d123f937ec5c733d7e95dda1a51126ac31987054c509b5e60506575b947b18: Status 404 returned error can't find the 
container with id 88d123f937ec5c733d7e95dda1a51126ac31987054c509b5e60506575b947b18 Mar 12 18:13:54.916610 master-0 kubenswrapper[7337]: I0312 18:13:54.916603 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-kcpg5"] Mar 12 18:13:54.930821 master-0 kubenswrapper[7337]: I0312 18:13:54.930777 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s"] Mar 12 18:13:54.932419 master-0 kubenswrapper[7337]: I0312 18:13:54.932347 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c"] Mar 12 18:13:54.947882 master-0 kubenswrapper[7337]: W0312 18:13:54.947843 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51eb717b_d11f_4bc3_8df6_deb51d5889f3.slice/crio-7982749b657bfab7994ceaf29145e70be8f2384ed3fb94c1cb38726c467e71d6 WatchSource:0}: Error finding container 7982749b657bfab7994ceaf29145e70be8f2384ed3fb94c1cb38726c467e71d6: Status 404 returned error can't find the container with id 7982749b657bfab7994ceaf29145e70be8f2384ed3fb94c1cb38726c467e71d6 Mar 12 18:13:54.993928 master-0 kubenswrapper[7337]: I0312 18:13:54.993880 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7"] Mar 12 18:13:55.006037 master-0 kubenswrapper[7337]: I0312 18:13:55.001553 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-clkx5"] Mar 12 18:13:55.066419 master-0 kubenswrapper[7337]: I0312 18:13:55.066156 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 12 18:13:55.078381 master-0 kubenswrapper[7337]: W0312 18:13:55.077311 7337 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-pod56345785_8643_4f59_ab92_d4eb40d25312.slice/crio-329888663eb1d1a34fc232bab3b5d51f53bc5e1541113e9b1a0cb115cc3c8e5a WatchSource:0}: Error finding container 329888663eb1d1a34fc232bab3b5d51f53bc5e1541113e9b1a0cb115cc3c8e5a: Status 404 returned error can't find the container with id 329888663eb1d1a34fc232bab3b5d51f53bc5e1541113e9b1a0cb115cc3c8e5a Mar 12 18:13:55.665474 master-0 kubenswrapper[7337]: I0312 18:13:55.665432 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" event={"ID":"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652","Type":"ContainerStarted","Data":"c433c1f714e33e033c20efa5d98a4308ecf9ca34573376730b9a889d9f08a15c"} Mar 12 18:13:55.665474 master-0 kubenswrapper[7337]: I0312 18:13:55.665481 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" event={"ID":"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652","Type":"ContainerStarted","Data":"a15650ff0279cc1eb053cd0564e886ecaf1299636ec1285faa1562a29a442c43"} Mar 12 18:13:55.665941 master-0 kubenswrapper[7337]: I0312 18:13:55.665498 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" event={"ID":"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652","Type":"ContainerStarted","Data":"88d123f937ec5c733d7e95dda1a51126ac31987054c509b5e60506575b947b18"} Mar 12 18:13:55.665941 master-0 kubenswrapper[7337]: I0312 18:13:55.665568 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:13:55.669655 master-0 kubenswrapper[7337]: I0312 18:13:55.669587 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" 
event={"ID":"51eb717b-d11f-4bc3-8df6-deb51d5889f3","Type":"ContainerStarted","Data":"9c7b264912a8c88ea45a21bcc8d754c12a39be6228b6f9e555ee08994d82ca9a"} Mar 12 18:13:55.669801 master-0 kubenswrapper[7337]: I0312 18:13:55.669663 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" event={"ID":"51eb717b-d11f-4bc3-8df6-deb51d5889f3","Type":"ContainerStarted","Data":"7982749b657bfab7994ceaf29145e70be8f2384ed3fb94c1cb38726c467e71d6"} Mar 12 18:13:55.672255 master-0 kubenswrapper[7337]: I0312 18:13:55.672219 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6h5tt" event={"ID":"266b9f4f-3fb4-474d-84df-0a6c687c7e9a","Type":"ContainerStarted","Data":"2f45a8211d870b01d3e124e1e1ecfb8acb0462c5e0fffcce43500d69b79f8b4e"} Mar 12 18:13:55.672494 master-0 kubenswrapper[7337]: I0312 18:13:55.672467 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-6h5tt" Mar 12 18:13:55.675279 master-0 kubenswrapper[7337]: I0312 18:13:55.674737 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"56345785-8643-4f59-ab92-d4eb40d25312","Type":"ContainerStarted","Data":"a4f9a56f69b75a1303b4dc8bf2fe9b299627c08d770e930cb53681ac90fcdf8a"} Mar 12 18:13:55.675279 master-0 kubenswrapper[7337]: I0312 18:13:55.674803 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"56345785-8643-4f59-ab92-d4eb40d25312","Type":"ContainerStarted","Data":"329888663eb1d1a34fc232bab3b5d51f53bc5e1541113e9b1a0cb115cc3c8e5a"} Mar 12 18:13:55.676224 master-0 kubenswrapper[7337]: I0312 18:13:55.676160 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-z4sc9" 
event={"ID":"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f","Type":"ContainerStarted","Data":"59af426bb753de2f517179014e6cfd5fa8b94b02ab3fedab6e4b42ba0bebac29"} Mar 12 18:13:55.678691 master-0 kubenswrapper[7337]: I0312 18:13:55.678062 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" podStartSLOduration=5.678049807 podStartE2EDuration="5.678049807s" podCreationTimestamp="2026-03-12 18:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:13:55.677644987 +0000 UTC m=+36.146245944" watchObservedRunningTime="2026-03-12 18:13:55.678049807 +0000 UTC m=+36.146650754" Mar 12 18:13:55.687626 master-0 kubenswrapper[7337]: I0312 18:13:55.687582 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" event={"ID":"9b41258c-ac1d-4e00-ac5e-732d85441f12","Type":"ContainerStarted","Data":"64dc9793f349e170c2ae3d1a2bb6c5aa7d455b30ce316310ea01d07d10360fb4"} Mar 12 18:13:55.687884 master-0 kubenswrapper[7337]: I0312 18:13:55.687643 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" event={"ID":"9b41258c-ac1d-4e00-ac5e-732d85441f12","Type":"ContainerStarted","Data":"f39b8b09e0d7dd16f300ad27702eb00d52fbb88ce1a1e7bd54b45afbcb942a89"} Mar 12 18:13:55.689918 master-0 kubenswrapper[7337]: I0312 18:13:55.689898 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" event={"ID":"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64","Type":"ContainerStarted","Data":"da8a3dd02c7bc3e5376ebe604c414570540ccdc280e818957636de9c32beb180"} Mar 12 18:13:55.694034 master-0 kubenswrapper[7337]: I0312 18:13:55.693959 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-6h5tt" 
podStartSLOduration=2.614370551 podStartE2EDuration="7.693942769s" podCreationTimestamp="2026-03-12 18:13:48 +0000 UTC" firstStartedPulling="2026-03-12 18:13:48.859714744 +0000 UTC m=+29.328315691" lastFinishedPulling="2026-03-12 18:13:53.939286962 +0000 UTC m=+34.407887909" observedRunningTime="2026-03-12 18:13:55.692833751 +0000 UTC m=+36.161434698" watchObservedRunningTime="2026-03-12 18:13:55.693942769 +0000 UTC m=+36.162543726" Mar 12 18:13:55.695343 master-0 kubenswrapper[7337]: I0312 18:13:55.693981 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" event={"ID":"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27","Type":"ContainerStarted","Data":"fb3de63e9ae8f0f90ed99bf3dc6471ec32942e542a8f9f641416a08fbffeda83"} Mar 12 18:13:55.702916 master-0 kubenswrapper[7337]: I0312 18:13:55.702868 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" event={"ID":"d1b3859c-20a1-4a1c-8508-86ed843768f5","Type":"ContainerStarted","Data":"736a8404a1683d56f8dbc8f71de47cc325d858c0409febcb5d511b27a322ce13"} Mar 12 18:13:55.702916 master-0 kubenswrapper[7337]: I0312 18:13:55.702911 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" event={"ID":"d1b3859c-20a1-4a1c-8508-86ed843768f5","Type":"ContainerStarted","Data":"5706e3784ca742e5928056231dad1859cd3cb62e588cf6ac31397a2d0427db03"} Mar 12 18:13:55.703133 master-0 kubenswrapper[7337]: I0312 18:13:55.703059 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:13:55.708543 master-0 kubenswrapper[7337]: I0312 18:13:55.707020 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" 
event={"ID":"875bdfaa-b0a4-4412-a477-c962844e7057","Type":"ContainerStarted","Data":"04e6b16b49390ef2fd14eeb3200708298f9f8befad96c527fa22cf0d9077e2eb"} Mar 12 18:13:55.709196 master-0 kubenswrapper[7337]: I0312 18:13:55.709160 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" event={"ID":"e94d098b-fbcc-4e85-b8ad-42f3a21c822c","Type":"ContainerStarted","Data":"8b4322c396b926726b3445bf3f4c514365e3dc0962cabf32996b7feaa6ce265c"} Mar 12 18:13:55.713903 master-0 kubenswrapper[7337]: I0312 18:13:55.713876 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" event={"ID":"47850839-bb4b-41e9-ac31-f1cabbb4926d","Type":"ContainerStarted","Data":"0182a4eff93f7ac8355fe5920af6a23f38515c1d4a493448a8ac4ea00cfb1b71"} Mar 12 18:13:55.729023 master-0 kubenswrapper[7337]: I0312 18:13:55.728942 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=2.7289243340000002 podStartE2EDuration="2.728924334s" podCreationTimestamp="2026-03-12 18:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:13:55.709241966 +0000 UTC m=+36.177842933" watchObservedRunningTime="2026-03-12 18:13:55.728924334 +0000 UTC m=+36.197525281" Mar 12 18:13:55.733073 master-0 kubenswrapper[7337]: I0312 18:13:55.732797 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" podStartSLOduration=5.732782672 podStartE2EDuration="5.732782672s" podCreationTimestamp="2026-03-12 18:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:13:55.728226587 +0000 UTC m=+36.196827554" 
watchObservedRunningTime="2026-03-12 18:13:55.732782672 +0000 UTC m=+36.201383619" Mar 12 18:13:55.743314 master-0 kubenswrapper[7337]: I0312 18:13:55.743247 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" Mar 12 18:13:55.743314 master-0 kubenswrapper[7337]: I0312 18:13:55.743284 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" Mar 12 18:13:55.750468 master-0 kubenswrapper[7337]: I0312 18:13:55.750393 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" podStartSLOduration=6.051166688 podStartE2EDuration="12.750372237s" podCreationTimestamp="2026-03-12 18:13:43 +0000 UTC" firstStartedPulling="2026-03-12 18:13:47.35141212 +0000 UTC m=+27.820013067" lastFinishedPulling="2026-03-12 18:13:54.050617669 +0000 UTC m=+34.519218616" observedRunningTime="2026-03-12 18:13:55.750317986 +0000 UTC m=+36.218918943" watchObservedRunningTime="2026-03-12 18:13:55.750372237 +0000 UTC m=+36.218973184" Mar 12 18:14:00.668164 master-0 kubenswrapper[7337]: I0312 18:14:00.668111 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:14:00.693585 master-0 kubenswrapper[7337]: I0312 18:14:00.683743 7337 patch_prober.go:28] interesting pod/apiserver-5786c989f8-f6jgb container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 12 18:14:00.693585 master-0 kubenswrapper[7337]: [+]log ok Mar 12 18:14:00.693585 master-0 kubenswrapper[7337]: [+]etcd ok Mar 12 18:14:00.693585 master-0 kubenswrapper[7337]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 12 18:14:00.693585 master-0 kubenswrapper[7337]: [+]poststarthook/generic-apiserver-start-informers ok Mar 12 
18:14:00.693585 master-0 kubenswrapper[7337]: [+]poststarthook/max-in-flight-filter ok
Mar 12 18:14:00.693585 master-0 kubenswrapper[7337]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 12 18:14:00.693585 master-0 kubenswrapper[7337]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Mar 12 18:14:00.693585 master-0 kubenswrapper[7337]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 12 18:14:00.693585 master-0 kubenswrapper[7337]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Mar 12 18:14:00.693585 master-0 kubenswrapper[7337]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 12 18:14:00.693585 master-0 kubenswrapper[7337]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 12 18:14:00.693585 master-0 kubenswrapper[7337]: [+]poststarthook/openshift.io-startinformers ok
Mar 12 18:14:00.693585 master-0 kubenswrapper[7337]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 12 18:14:00.693585 master-0 kubenswrapper[7337]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 12 18:14:00.693585 master-0 kubenswrapper[7337]: livez check failed
Mar 12 18:14:00.693585 master-0 kubenswrapper[7337]: I0312 18:14:00.683823 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" podUID="9b41258c-ac1d-4e00-ac5e-732d85441f12" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:14:00.756535 master-0 kubenswrapper[7337]: I0312 18:14:00.755776 7337 patch_prober.go:28] interesting pod/apiserver-5786c989f8-f6jgb container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 12 18:14:00.756535 master-0 kubenswrapper[7337]: [+]log ok
Mar 12 18:14:00.756535 master-0 kubenswrapper[7337]: [+]etcd ok
Mar 12 18:14:00.756535 master-0 kubenswrapper[7337]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 12 18:14:00.756535 master-0 kubenswrapper[7337]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 12 18:14:00.756535 master-0 kubenswrapper[7337]: [+]poststarthook/max-in-flight-filter ok
Mar 12 18:14:00.756535 master-0 kubenswrapper[7337]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 12 18:14:00.756535 master-0 kubenswrapper[7337]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Mar 12 18:14:00.756535 master-0 kubenswrapper[7337]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 12 18:14:00.756535 master-0 kubenswrapper[7337]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Mar 12 18:14:00.756535 master-0 kubenswrapper[7337]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 12 18:14:00.756535 master-0 kubenswrapper[7337]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 12 18:14:00.756535 master-0 kubenswrapper[7337]: [+]poststarthook/openshift.io-startinformers ok
Mar 12 18:14:00.756535 master-0 kubenswrapper[7337]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 12 18:14:00.756535 master-0 kubenswrapper[7337]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 12 18:14:00.756535 master-0 kubenswrapper[7337]: livez check failed
Mar 12 18:14:00.756535 master-0 kubenswrapper[7337]: I0312 18:14:00.755844 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" podUID="9b41258c-ac1d-4e00-ac5e-732d85441f12" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:14:01.361549 master-0 kubenswrapper[7337]: I0312 18:14:01.357313 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn"
Mar 12 18:14:04.598059 master-0 kubenswrapper[7337]: I0312 18:14:04.597996 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx"]
Mar 12 18:14:04.598627 master-0 kubenswrapper[7337]: I0312 18:14:04.598288 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" podUID="00755a4e-124c-4a51-b1c5-7c505b3637a8" containerName="cluster-version-operator" containerID="cri-o://c1a8fe3c9ec9293190da1abf5d84165878cc28e2ff9a4187ebb4b5e5ee9ed66b" gracePeriod=130
Mar 12 18:14:04.601177 master-0 kubenswrapper[7337]: I0312 18:14:04.601137 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 12 18:14:04.601573 master-0 kubenswrapper[7337]: I0312 18:14:04.601504 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="56345785-8643-4f59-ab92-d4eb40d25312" containerName="installer" containerID="cri-o://a4f9a56f69b75a1303b4dc8bf2fe9b299627c08d770e930cb53681ac90fcdf8a" gracePeriod=30
Mar 12 18:14:04.900152 master-0 kubenswrapper[7337]: I0312 18:14:04.900032 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c597958db-cclsx"]
Mar 12 18:14:04.900332 master-0 kubenswrapper[7337]: I0312 18:14:04.900277 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5c597958db-cclsx" podUID="600d037c-0703-43c9-8f01-c8da82b114fd" containerName="controller-manager" containerID="cri-o://06ed6eb88ed1f644751ea6fbc8ab9068fa3790ca5e4deea484dd8d55fa290007" gracePeriod=30
Mar 12 18:14:04.965570 master-0 kubenswrapper[7337]: I0312 18:14:04.964392 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht"]
Mar 12 18:14:04.965570 master-0 kubenswrapper[7337]: I0312 18:14:04.964612 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht" podUID="21fac822-b1df-42c3-8574-fa86e43d7ea4" containerName="route-controller-manager" containerID="cri-o://257ce4c9ab5ca88cc44891cc3278fd9efdca93406f4d5d0347483fa67ee4dda8" gracePeriod=30
Mar 12 18:14:06.008903 master-0 kubenswrapper[7337]: I0312 18:14:06.008852 7337 generic.go:334] "Generic (PLEG): container finished" podID="21fac822-b1df-42c3-8574-fa86e43d7ea4" containerID="257ce4c9ab5ca88cc44891cc3278fd9efdca93406f4d5d0347483fa67ee4dda8" exitCode=0
Mar 12 18:14:06.008903 master-0 kubenswrapper[7337]: I0312 18:14:06.008895 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht" event={"ID":"21fac822-b1df-42c3-8574-fa86e43d7ea4","Type":"ContainerDied","Data":"257ce4c9ab5ca88cc44891cc3278fd9efdca93406f4d5d0347483fa67ee4dda8"}
Mar 12 18:14:06.383321 master-0 kubenswrapper[7337]: I0312 18:14:06.383091 7337 patch_prober.go:28] interesting pod/apiserver-5786c989f8-f6jgb container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 12 18:14:06.383321 master-0 kubenswrapper[7337]: [+]log ok
Mar 12 18:14:06.383321 master-0 kubenswrapper[7337]: [+]etcd ok
Mar 12 18:14:06.383321 master-0 kubenswrapper[7337]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 12 18:14:06.383321 master-0 kubenswrapper[7337]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 12 18:14:06.383321 master-0 kubenswrapper[7337]: [+]poststarthook/max-in-flight-filter ok
Mar 12 18:14:06.383321 master-0 kubenswrapper[7337]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 12 18:14:06.383321 master-0 kubenswrapper[7337]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Mar 12 18:14:06.383321 master-0 kubenswrapper[7337]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 12 18:14:06.383321 master-0 kubenswrapper[7337]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Mar 12 18:14:06.383321 master-0 kubenswrapper[7337]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 12 18:14:06.383321 master-0 kubenswrapper[7337]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 12 18:14:06.383321 master-0 kubenswrapper[7337]: [+]poststarthook/openshift.io-startinformers ok
Mar 12 18:14:06.383321 master-0 kubenswrapper[7337]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 12 18:14:06.383321 master-0 kubenswrapper[7337]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 12 18:14:06.383321 master-0 kubenswrapper[7337]: livez check failed
Mar 12 18:14:06.383321 master-0 kubenswrapper[7337]: I0312 18:14:06.383180 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" podUID="9b41258c-ac1d-4e00-ac5e-732d85441f12" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:14:06.659091 master-0 kubenswrapper[7337]: I0312 18:14:06.658976 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-6h5tt"
Mar 12 18:14:06.724572 master-0 kubenswrapper[7337]: I0312 18:14:06.716655 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 12 18:14:06.724572 master-0 kubenswrapper[7337]: I0312 18:14:06.717805 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 12 18:14:06.733913 master-0 kubenswrapper[7337]: I0312 18:14:06.733869 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 12 18:14:06.877033 master-0 kubenswrapper[7337]: I0312 18:14:06.876982 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2d3635b-61ed-4f81-8735-47be74319c67-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"f2d3635b-61ed-4f81-8735-47be74319c67\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 12 18:14:06.877033 master-0 kubenswrapper[7337]: I0312 18:14:06.877040 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2d3635b-61ed-4f81-8735-47be74319c67-var-lock\") pod \"installer-3-master-0\" (UID: \"f2d3635b-61ed-4f81-8735-47be74319c67\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 12 18:14:06.877344 master-0 kubenswrapper[7337]: I0312 18:14:06.877103 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2d3635b-61ed-4f81-8735-47be74319c67-kube-api-access\") pod \"installer-3-master-0\" (UID: \"f2d3635b-61ed-4f81-8735-47be74319c67\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 12 18:14:06.909922 master-0 kubenswrapper[7337]: I0312 18:14:06.909673 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 12 18:14:06.910891 master-0 kubenswrapper[7337]: I0312 18:14:06.910599 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 18:14:06.912388 master-0 kubenswrapper[7337]: I0312 18:14:06.912351 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 12 18:14:06.920251 master-0 kubenswrapper[7337]: I0312 18:14:06.920033 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 12 18:14:06.977843 master-0 kubenswrapper[7337]: I0312 18:14:06.977792 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8cd18201-afdc-4229-972e-ab01adb2a7f3-kube-api-access\") pod \"installer-1-master-0\" (UID: \"8cd18201-afdc-4229-972e-ab01adb2a7f3\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 18:14:06.977843 master-0 kubenswrapper[7337]: I0312 18:14:06.977853 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8cd18201-afdc-4229-972e-ab01adb2a7f3-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"8cd18201-afdc-4229-972e-ab01adb2a7f3\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 18:14:06.978076 master-0 kubenswrapper[7337]: I0312 18:14:06.977888 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2d3635b-61ed-4f81-8735-47be74319c67-kube-api-access\") pod \"installer-3-master-0\" (UID: \"f2d3635b-61ed-4f81-8735-47be74319c67\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 12 18:14:06.978076 master-0 kubenswrapper[7337]: I0312 18:14:06.977934 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2d3635b-61ed-4f81-8735-47be74319c67-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"f2d3635b-61ed-4f81-8735-47be74319c67\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 12 18:14:06.978076 master-0 kubenswrapper[7337]: I0312 18:14:06.977955 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8cd18201-afdc-4229-972e-ab01adb2a7f3-var-lock\") pod \"installer-1-master-0\" (UID: \"8cd18201-afdc-4229-972e-ab01adb2a7f3\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 18:14:06.978076 master-0 kubenswrapper[7337]: I0312 18:14:06.977971 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2d3635b-61ed-4f81-8735-47be74319c67-var-lock\") pod \"installer-3-master-0\" (UID: \"f2d3635b-61ed-4f81-8735-47be74319c67\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 12 18:14:06.978193 master-0 kubenswrapper[7337]: I0312 18:14:06.978074 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2d3635b-61ed-4f81-8735-47be74319c67-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"f2d3635b-61ed-4f81-8735-47be74319c67\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 12 18:14:06.978193 master-0 kubenswrapper[7337]: I0312 18:14:06.978149 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2d3635b-61ed-4f81-8735-47be74319c67-var-lock\") pod \"installer-3-master-0\" (UID: \"f2d3635b-61ed-4f81-8735-47be74319c67\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 12 18:14:07.001982 master-0 kubenswrapper[7337]: I0312 18:14:07.001949 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2d3635b-61ed-4f81-8735-47be74319c67-kube-api-access\") pod \"installer-3-master-0\" (UID: \"f2d3635b-61ed-4f81-8735-47be74319c67\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 12 18:14:07.015205 master-0 kubenswrapper[7337]: I0312 18:14:07.015154 7337 generic.go:334] "Generic (PLEG): container finished" podID="600d037c-0703-43c9-8f01-c8da82b114fd" containerID="06ed6eb88ed1f644751ea6fbc8ab9068fa3790ca5e4deea484dd8d55fa290007" exitCode=0
Mar 12 18:14:07.015705 master-0 kubenswrapper[7337]: I0312 18:14:07.015238 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c597958db-cclsx" event={"ID":"600d037c-0703-43c9-8f01-c8da82b114fd","Type":"ContainerDied","Data":"06ed6eb88ed1f644751ea6fbc8ab9068fa3790ca5e4deea484dd8d55fa290007"}
Mar 12 18:14:07.016978 master-0 kubenswrapper[7337]: I0312 18:14:07.016950 7337 generic.go:334] "Generic (PLEG): container finished" podID="00755a4e-124c-4a51-b1c5-7c505b3637a8" containerID="c1a8fe3c9ec9293190da1abf5d84165878cc28e2ff9a4187ebb4b5e5ee9ed66b" exitCode=0
Mar 12 18:14:07.017044 master-0 kubenswrapper[7337]: I0312 18:14:07.017024 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" event={"ID":"00755a4e-124c-4a51-b1c5-7c505b3637a8","Type":"ContainerDied","Data":"c1a8fe3c9ec9293190da1abf5d84165878cc28e2ff9a4187ebb4b5e5ee9ed66b"}
Mar 12 18:14:07.018684 master-0 kubenswrapper[7337]: I0312 18:14:07.018662 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_56345785-8643-4f59-ab92-d4eb40d25312/installer/0.log"
Mar 12 18:14:07.018743 master-0 kubenswrapper[7337]: I0312 18:14:07.018705 7337 generic.go:334] "Generic (PLEG): container finished" podID="56345785-8643-4f59-ab92-d4eb40d25312" containerID="a4f9a56f69b75a1303b4dc8bf2fe9b299627c08d770e930cb53681ac90fcdf8a" exitCode=1
Mar 12 18:14:07.018743 master-0 kubenswrapper[7337]: I0312 18:14:07.018727 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"56345785-8643-4f59-ab92-d4eb40d25312","Type":"ContainerDied","Data":"a4f9a56f69b75a1303b4dc8bf2fe9b299627c08d770e930cb53681ac90fcdf8a"}
Mar 12 18:14:07.042917 master-0 kubenswrapper[7337]: I0312 18:14:07.042891 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 12 18:14:07.078470 master-0 kubenswrapper[7337]: I0312 18:14:07.078426 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8cd18201-afdc-4229-972e-ab01adb2a7f3-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"8cd18201-afdc-4229-972e-ab01adb2a7f3\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 18:14:07.078672 master-0 kubenswrapper[7337]: I0312 18:14:07.078570 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8cd18201-afdc-4229-972e-ab01adb2a7f3-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"8cd18201-afdc-4229-972e-ab01adb2a7f3\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 18:14:07.078733 master-0 kubenswrapper[7337]: I0312 18:14:07.078682 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8cd18201-afdc-4229-972e-ab01adb2a7f3-var-lock\") pod \"installer-1-master-0\" (UID: \"8cd18201-afdc-4229-972e-ab01adb2a7f3\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 18:14:07.078802 master-0 kubenswrapper[7337]: I0312 18:14:07.078775 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8cd18201-afdc-4229-972e-ab01adb2a7f3-kube-api-access\") pod \"installer-1-master-0\" (UID: \"8cd18201-afdc-4229-972e-ab01adb2a7f3\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 18:14:07.079501 master-0 kubenswrapper[7337]: I0312 18:14:07.079479 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8cd18201-afdc-4229-972e-ab01adb2a7f3-var-lock\") pod \"installer-1-master-0\" (UID: \"8cd18201-afdc-4229-972e-ab01adb2a7f3\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 18:14:07.096077 master-0 kubenswrapper[7337]: I0312 18:14:07.096040 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8cd18201-afdc-4229-972e-ab01adb2a7f3-kube-api-access\") pod \"installer-1-master-0\" (UID: \"8cd18201-afdc-4229-972e-ab01adb2a7f3\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 18:14:07.226256 master-0 kubenswrapper[7337]: I0312 18:14:07.226122 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 18:14:07.300582 master-0 kubenswrapper[7337]: I0312 18:14:07.300213 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"]
Mar 12 18:14:07.301115 master-0 kubenswrapper[7337]: I0312 18:14:07.301083 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.312792 master-0 kubenswrapper[7337]: I0312 18:14:07.312742 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 12 18:14:07.312792 master-0 kubenswrapper[7337]: I0312 18:14:07.312793 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 12 18:14:07.313022 master-0 kubenswrapper[7337]: I0312 18:14:07.312927 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 12 18:14:07.313115 master-0 kubenswrapper[7337]: I0312 18:14:07.313094 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 12 18:14:07.313281 master-0 kubenswrapper[7337]: I0312 18:14:07.313245 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 12 18:14:07.314039 master-0 kubenswrapper[7337]: I0312 18:14:07.313433 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 12 18:14:07.314039 master-0 kubenswrapper[7337]: I0312 18:14:07.313930 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 12 18:14:07.318834 master-0 kubenswrapper[7337]: I0312 18:14:07.318795 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 12 18:14:07.323573 master-0 kubenswrapper[7337]: I0312 18:14:07.323488 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"]
Mar 12 18:14:07.389871 master-0 kubenswrapper[7337]: I0312 18:14:07.389824 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fb529297-b3de-4167-a91e-0a63725b3b0f-etcd-client\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.390108 master-0 kubenswrapper[7337]: I0312 18:14:07.389915 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb529297-b3de-4167-a91e-0a63725b3b0f-serving-cert\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.390108 master-0 kubenswrapper[7337]: I0312 18:14:07.389946 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fb529297-b3de-4167-a91e-0a63725b3b0f-audit-policies\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.390108 master-0 kubenswrapper[7337]: I0312 18:14:07.390096 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fb529297-b3de-4167-a91e-0a63725b3b0f-etcd-serving-ca\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.390230 master-0 kubenswrapper[7337]: I0312 18:14:07.390141 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fb529297-b3de-4167-a91e-0a63725b3b0f-encryption-config\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.390230 master-0 kubenswrapper[7337]: I0312 18:14:07.390167 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmzf4\" (UniqueName: \"kubernetes.io/projected/fb529297-b3de-4167-a91e-0a63725b3b0f-kube-api-access-tmzf4\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.390447 master-0 kubenswrapper[7337]: I0312 18:14:07.390393 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb529297-b3de-4167-a91e-0a63725b3b0f-trusted-ca-bundle\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.390606 master-0 kubenswrapper[7337]: I0312 18:14:07.390558 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb529297-b3de-4167-a91e-0a63725b3b0f-audit-dir\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.492085 master-0 kubenswrapper[7337]: I0312 18:14:07.491961 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fb529297-b3de-4167-a91e-0a63725b3b0f-etcd-client\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.492085 master-0 kubenswrapper[7337]: I0312 18:14:07.492024 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb529297-b3de-4167-a91e-0a63725b3b0f-serving-cert\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.492085 master-0 kubenswrapper[7337]: I0312 18:14:07.492048 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fb529297-b3de-4167-a91e-0a63725b3b0f-audit-policies\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.492334 master-0 kubenswrapper[7337]: I0312 18:14:07.492091 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fb529297-b3de-4167-a91e-0a63725b3b0f-etcd-serving-ca\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.492334 master-0 kubenswrapper[7337]: I0312 18:14:07.492118 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fb529297-b3de-4167-a91e-0a63725b3b0f-encryption-config\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.492334 master-0 kubenswrapper[7337]: I0312 18:14:07.492145 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmzf4\" (UniqueName: \"kubernetes.io/projected/fb529297-b3de-4167-a91e-0a63725b3b0f-kube-api-access-tmzf4\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.492334 master-0 kubenswrapper[7337]: I0312 18:14:07.492171 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb529297-b3de-4167-a91e-0a63725b3b0f-trusted-ca-bundle\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.492334 master-0 kubenswrapper[7337]: I0312 18:14:07.492202 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb529297-b3de-4167-a91e-0a63725b3b0f-audit-dir\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.492334 master-0 kubenswrapper[7337]: I0312 18:14:07.492264 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb529297-b3de-4167-a91e-0a63725b3b0f-audit-dir\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.493396 master-0 kubenswrapper[7337]: I0312 18:14:07.493358 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fb529297-b3de-4167-a91e-0a63725b3b0f-etcd-serving-ca\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.494686 master-0 kubenswrapper[7337]: I0312 18:14:07.493832 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fb529297-b3de-4167-a91e-0a63725b3b0f-audit-policies\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.494686 master-0 kubenswrapper[7337]: I0312 18:14:07.494128 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb529297-b3de-4167-a91e-0a63725b3b0f-trusted-ca-bundle\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.496064 master-0 kubenswrapper[7337]: I0312 18:14:07.496020 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fb529297-b3de-4167-a91e-0a63725b3b0f-encryption-config\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.496942 master-0 kubenswrapper[7337]: I0312 18:14:07.496915 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fb529297-b3de-4167-a91e-0a63725b3b0f-etcd-client\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.498466 master-0 kubenswrapper[7337]: I0312 18:14:07.498438 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb529297-b3de-4167-a91e-0a63725b3b0f-serving-cert\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.517493 master-0 kubenswrapper[7337]: I0312 18:14:07.517443 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmzf4\" (UniqueName: \"kubernetes.io/projected/fb529297-b3de-4167-a91e-0a63725b3b0f-kube-api-access-tmzf4\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:07.628144 master-0 kubenswrapper[7337]: I0312 18:14:07.628092 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:14:08.201395 master-0 kubenswrapper[7337]: I0312 18:14:08.201334 7337 patch_prober.go:28] interesting pod/route-controller-manager-5cd7bb8bd9-qb8ht container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.38:8443/healthz\": dial tcp 10.128.0.38:8443: i/o timeout" start-of-body=
Mar 12 18:14:08.201856 master-0 kubenswrapper[7337]: I0312 18:14:08.201406 7337 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht" podUID="21fac822-b1df-42c3-8574-fa86e43d7ea4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.38:8443/healthz\": dial tcp 10.128.0.38:8443: i/o timeout"
Mar 12 18:14:08.218065 master-0 kubenswrapper[7337]: I0312 18:14:08.218007 7337 patch_prober.go:28] interesting pod/controller-manager-5c597958db-cclsx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 12 18:14:08.218221 master-0 kubenswrapper[7337]: I0312 18:14:08.218077 7337 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c597958db-cclsx" podUID="600d037c-0703-43c9-8f01-c8da82b114fd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:14:09.104579 master-0 kubenswrapper[7337]: I0312 18:14:09.104296 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Mar 12 18:14:09.105913 master-0 kubenswrapper[7337]: I0312 18:14:09.105675 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 18:14:09.115740 master-0 kubenswrapper[7337]: I0312 18:14:09.115667 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 12 18:14:09.122947 master-0 kubenswrapper[7337]: I0312 18:14:09.121917 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Mar 12 18:14:09.220508 master-0 kubenswrapper[7337]: I0312 18:14:09.220454 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/38785e6e-3052-405c-8874-4f295985def5-var-lock\") pod \"installer-1-master-0\" (UID: \"38785e6e-3052-405c-8874-4f295985def5\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 18:14:09.220508 master-0 kubenswrapper[7337]: I0312 18:14:09.220502 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38785e6e-3052-405c-8874-4f295985def5-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"38785e6e-3052-405c-8874-4f295985def5\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 18:14:09.221058 master-0 kubenswrapper[7337]: I0312 18:14:09.220569 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38785e6e-3052-405c-8874-4f295985def5-kube-api-access\") pod \"installer-1-master-0\" (UID: \"38785e6e-3052-405c-8874-4f295985def5\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 18:14:09.321545 master-0 kubenswrapper[7337]: I0312 18:14:09.321478 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38785e6e-3052-405c-8874-4f295985def5-kube-api-access\") pod \"installer-1-master-0\" (UID: \"38785e6e-3052-405c-8874-4f295985def5\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 18:14:09.321740 master-0 kubenswrapper[7337]: I0312 18:14:09.321627 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/38785e6e-3052-405c-8874-4f295985def5-var-lock\") pod \"installer-1-master-0\" (UID: \"38785e6e-3052-405c-8874-4f295985def5\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 18:14:09.321740 master-0 kubenswrapper[7337]: I0312 18:14:09.321661 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38785e6e-3052-405c-8874-4f295985def5-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"38785e6e-3052-405c-8874-4f295985def5\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 18:14:09.321740 master-0 kubenswrapper[7337]: I0312 18:14:09.321734 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38785e6e-3052-405c-8874-4f295985def5-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"38785e6e-3052-405c-8874-4f295985def5\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 18:14:09.321880 master-0 kubenswrapper[7337]: I0312 18:14:09.321835 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/38785e6e-3052-405c-8874-4f295985def5-var-lock\") pod \"installer-1-master-0\" (UID: \"38785e6e-3052-405c-8874-4f295985def5\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 18:14:09.508441 master-0 kubenswrapper[7337]: I0312 18:14:09.508338 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38785e6e-3052-405c-8874-4f295985def5-kube-api-access\") pod \"installer-1-master-0\" (UID: \"38785e6e-3052-405c-8874-4f295985def5\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 18:14:09.733909 master-0 kubenswrapper[7337]: I0312 18:14:09.733859 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 18:14:09.820652 master-0 kubenswrapper[7337]: I0312 18:14:09.819333 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_56345785-8643-4f59-ab92-d4eb40d25312/installer/0.log"
Mar 12 18:14:09.820652 master-0 kubenswrapper[7337]: I0312 18:14:09.819394 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 12 18:14:09.824804 master-0 kubenswrapper[7337]: I0312 18:14:09.824736 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht"
Mar 12 18:14:09.847806 master-0 kubenswrapper[7337]: I0312 18:14:09.847786 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx"
Mar 12 18:14:09.868502 master-0 kubenswrapper[7337]: I0312 18:14:09.868465 7337 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c597958db-cclsx" Mar 12 18:14:09.912277 master-0 kubenswrapper[7337]: I0312 18:14:09.907825 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 12 18:14:09.928427 master-0 kubenswrapper[7337]: I0312 18:14:09.928065 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gkdn\" (UniqueName: \"kubernetes.io/projected/21fac822-b1df-42c3-8574-fa86e43d7ea4-kube-api-access-2gkdn\") pod \"21fac822-b1df-42c3-8574-fa86e43d7ea4\" (UID: \"21fac822-b1df-42c3-8574-fa86e43d7ea4\") " Mar 12 18:14:09.928427 master-0 kubenswrapper[7337]: I0312 18:14:09.928096 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00755a4e-124c-4a51-b1c5-7c505b3637a8-service-ca\") pod \"00755a4e-124c-4a51-b1c5-7c505b3637a8\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " Mar 12 18:14:09.928427 master-0 kubenswrapper[7337]: I0312 18:14:09.928118 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert\") pod \"00755a4e-124c-4a51-b1c5-7c505b3637a8\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " Mar 12 18:14:09.928427 master-0 kubenswrapper[7337]: I0312 18:14:09.928151 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkrg6\" (UniqueName: \"kubernetes.io/projected/600d037c-0703-43c9-8f01-c8da82b114fd-kube-api-access-xkrg6\") pod \"600d037c-0703-43c9-8f01-c8da82b114fd\" (UID: \"600d037c-0703-43c9-8f01-c8da82b114fd\") " Mar 12 18:14:09.929385 master-0 kubenswrapper[7337]: I0312 18:14:09.928172 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/21fac822-b1df-42c3-8574-fa86e43d7ea4-config\") pod \"21fac822-b1df-42c3-8574-fa86e43d7ea4\" (UID: \"21fac822-b1df-42c3-8574-fa86e43d7ea4\") " Mar 12 18:14:09.929385 master-0 kubenswrapper[7337]: I0312 18:14:09.928661 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/00755a4e-124c-4a51-b1c5-7c505b3637a8-etc-cvo-updatepayloads\") pod \"00755a4e-124c-4a51-b1c5-7c505b3637a8\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " Mar 12 18:14:09.929385 master-0 kubenswrapper[7337]: I0312 18:14:09.928708 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21fac822-b1df-42c3-8574-fa86e43d7ea4-serving-cert\") pod \"21fac822-b1df-42c3-8574-fa86e43d7ea4\" (UID: \"21fac822-b1df-42c3-8574-fa86e43d7ea4\") " Mar 12 18:14:09.929385 master-0 kubenswrapper[7337]: I0312 18:14:09.928729 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/00755a4e-124c-4a51-b1c5-7c505b3637a8-etc-ssl-certs\") pod \"00755a4e-124c-4a51-b1c5-7c505b3637a8\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " Mar 12 18:14:09.929385 master-0 kubenswrapper[7337]: I0312 18:14:09.928749 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21fac822-b1df-42c3-8574-fa86e43d7ea4-client-ca\") pod \"21fac822-b1df-42c3-8574-fa86e43d7ea4\" (UID: \"21fac822-b1df-42c3-8574-fa86e43d7ea4\") " Mar 12 18:14:09.929385 master-0 kubenswrapper[7337]: I0312 18:14:09.928781 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56345785-8643-4f59-ab92-d4eb40d25312-kube-api-access\") pod \"56345785-8643-4f59-ab92-d4eb40d25312\" (UID: \"56345785-8643-4f59-ab92-d4eb40d25312\") " 
Mar 12 18:14:09.929385 master-0 kubenswrapper[7337]: I0312 18:14:09.928809 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00755a4e-124c-4a51-b1c5-7c505b3637a8-kube-api-access\") pod \"00755a4e-124c-4a51-b1c5-7c505b3637a8\" (UID: \"00755a4e-124c-4a51-b1c5-7c505b3637a8\") " Mar 12 18:14:09.929385 master-0 kubenswrapper[7337]: I0312 18:14:09.928840 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/600d037c-0703-43c9-8f01-c8da82b114fd-proxy-ca-bundles\") pod \"600d037c-0703-43c9-8f01-c8da82b114fd\" (UID: \"600d037c-0703-43c9-8f01-c8da82b114fd\") " Mar 12 18:14:09.929385 master-0 kubenswrapper[7337]: I0312 18:14:09.928909 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56345785-8643-4f59-ab92-d4eb40d25312-var-lock\") pod \"56345785-8643-4f59-ab92-d4eb40d25312\" (UID: \"56345785-8643-4f59-ab92-d4eb40d25312\") " Mar 12 18:14:09.929385 master-0 kubenswrapper[7337]: I0312 18:14:09.928929 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56345785-8643-4f59-ab92-d4eb40d25312-kubelet-dir\") pod \"56345785-8643-4f59-ab92-d4eb40d25312\" (UID: \"56345785-8643-4f59-ab92-d4eb40d25312\") " Mar 12 18:14:09.929385 master-0 kubenswrapper[7337]: I0312 18:14:09.928949 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/600d037c-0703-43c9-8f01-c8da82b114fd-config\") pod \"600d037c-0703-43c9-8f01-c8da82b114fd\" (UID: \"600d037c-0703-43c9-8f01-c8da82b114fd\") " Mar 12 18:14:09.929385 master-0 kubenswrapper[7337]: I0312 18:14:09.928977 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/600d037c-0703-43c9-8f01-c8da82b114fd-serving-cert\") pod \"600d037c-0703-43c9-8f01-c8da82b114fd\" (UID: \"600d037c-0703-43c9-8f01-c8da82b114fd\") " Mar 12 18:14:09.929385 master-0 kubenswrapper[7337]: I0312 18:14:09.929023 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/600d037c-0703-43c9-8f01-c8da82b114fd-client-ca\") pod \"600d037c-0703-43c9-8f01-c8da82b114fd\" (UID: \"600d037c-0703-43c9-8f01-c8da82b114fd\") " Mar 12 18:14:09.933025 master-0 kubenswrapper[7337]: I0312 18:14:09.930065 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00755a4e-124c-4a51-b1c5-7c505b3637a8-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "00755a4e-124c-4a51-b1c5-7c505b3637a8" (UID: "00755a4e-124c-4a51-b1c5-7c505b3637a8"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:14:09.933025 master-0 kubenswrapper[7337]: I0312 18:14:09.930682 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00755a4e-124c-4a51-b1c5-7c505b3637a8-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "00755a4e-124c-4a51-b1c5-7c505b3637a8" (UID: "00755a4e-124c-4a51-b1c5-7c505b3637a8"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:14:09.933025 master-0 kubenswrapper[7337]: I0312 18:14:09.930758 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56345785-8643-4f59-ab92-d4eb40d25312-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "56345785-8643-4f59-ab92-d4eb40d25312" (UID: "56345785-8643-4f59-ab92-d4eb40d25312"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:14:09.933025 master-0 kubenswrapper[7337]: I0312 18:14:09.930808 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56345785-8643-4f59-ab92-d4eb40d25312-var-lock" (OuterVolumeSpecName: "var-lock") pod "56345785-8643-4f59-ab92-d4eb40d25312" (UID: "56345785-8643-4f59-ab92-d4eb40d25312"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:14:09.933025 master-0 kubenswrapper[7337]: I0312 18:14:09.930846 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21fac822-b1df-42c3-8574-fa86e43d7ea4-config" (OuterVolumeSpecName: "config") pod "21fac822-b1df-42c3-8574-fa86e43d7ea4" (UID: "21fac822-b1df-42c3-8574-fa86e43d7ea4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:14:09.933025 master-0 kubenswrapper[7337]: I0312 18:14:09.931001 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21fac822-b1df-42c3-8574-fa86e43d7ea4-client-ca" (OuterVolumeSpecName: "client-ca") pod "21fac822-b1df-42c3-8574-fa86e43d7ea4" (UID: "21fac822-b1df-42c3-8574-fa86e43d7ea4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:14:09.933025 master-0 kubenswrapper[7337]: I0312 18:14:09.931556 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/600d037c-0703-43c9-8f01-c8da82b114fd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "600d037c-0703-43c9-8f01-c8da82b114fd" (UID: "600d037c-0703-43c9-8f01-c8da82b114fd"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:14:09.933025 master-0 kubenswrapper[7337]: I0312 18:14:09.932487 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00755a4e-124c-4a51-b1c5-7c505b3637a8-service-ca" (OuterVolumeSpecName: "service-ca") pod "00755a4e-124c-4a51-b1c5-7c505b3637a8" (UID: "00755a4e-124c-4a51-b1c5-7c505b3637a8"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:14:09.933745 master-0 kubenswrapper[7337]: I0312 18:14:09.933505 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/600d037c-0703-43c9-8f01-c8da82b114fd-config" (OuterVolumeSpecName: "config") pod "600d037c-0703-43c9-8f01-c8da82b114fd" (UID: "600d037c-0703-43c9-8f01-c8da82b114fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:14:09.934045 master-0 kubenswrapper[7337]: I0312 18:14:09.933999 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/600d037c-0703-43c9-8f01-c8da82b114fd-client-ca" (OuterVolumeSpecName: "client-ca") pod "600d037c-0703-43c9-8f01-c8da82b114fd" (UID: "600d037c-0703-43c9-8f01-c8da82b114fd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:14:09.934348 master-0 kubenswrapper[7337]: I0312 18:14:09.934315 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/600d037c-0703-43c9-8f01-c8da82b114fd-kube-api-access-xkrg6" (OuterVolumeSpecName: "kube-api-access-xkrg6") pod "600d037c-0703-43c9-8f01-c8da82b114fd" (UID: "600d037c-0703-43c9-8f01-c8da82b114fd"). InnerVolumeSpecName "kube-api-access-xkrg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:14:09.937029 master-0 kubenswrapper[7337]: I0312 18:14:09.935458 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00755a4e-124c-4a51-b1c5-7c505b3637a8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "00755a4e-124c-4a51-b1c5-7c505b3637a8" (UID: "00755a4e-124c-4a51-b1c5-7c505b3637a8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:14:09.937029 master-0 kubenswrapper[7337]: I0312 18:14:09.936172 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56345785-8643-4f59-ab92-d4eb40d25312-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "56345785-8643-4f59-ab92-d4eb40d25312" (UID: "56345785-8643-4f59-ab92-d4eb40d25312"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:14:09.937234 master-0 kubenswrapper[7337]: I0312 18:14:09.937070 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21fac822-b1df-42c3-8574-fa86e43d7ea4-kube-api-access-2gkdn" (OuterVolumeSpecName: "kube-api-access-2gkdn") pod "21fac822-b1df-42c3-8574-fa86e43d7ea4" (UID: "21fac822-b1df-42c3-8574-fa86e43d7ea4"). InnerVolumeSpecName "kube-api-access-2gkdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:14:09.942079 master-0 kubenswrapper[7337]: I0312 18:14:09.941996 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/600d037c-0703-43c9-8f01-c8da82b114fd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "600d037c-0703-43c9-8f01-c8da82b114fd" (UID: "600d037c-0703-43c9-8f01-c8da82b114fd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:14:09.942079 master-0 kubenswrapper[7337]: I0312 18:14:09.942062 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21fac822-b1df-42c3-8574-fa86e43d7ea4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "21fac822-b1df-42c3-8574-fa86e43d7ea4" (UID: "21fac822-b1df-42c3-8574-fa86e43d7ea4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:14:09.953689 master-0 kubenswrapper[7337]: I0312 18:14:09.953632 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "00755a4e-124c-4a51-b1c5-7c505b3637a8" (UID: "00755a4e-124c-4a51-b1c5-7c505b3637a8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:14:10.035601 master-0 kubenswrapper[7337]: I0312 18:14:10.032129 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gkdn\" (UniqueName: \"kubernetes.io/projected/21fac822-b1df-42c3-8574-fa86e43d7ea4-kube-api-access-2gkdn\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:10.035601 master-0 kubenswrapper[7337]: I0312 18:14:10.032151 7337 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00755a4e-124c-4a51-b1c5-7c505b3637a8-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:10.035601 master-0 kubenswrapper[7337]: I0312 18:14:10.032179 7337 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00755a4e-124c-4a51-b1c5-7c505b3637a8-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:10.035601 master-0 kubenswrapper[7337]: I0312 18:14:10.032190 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkrg6\" (UniqueName: 
\"kubernetes.io/projected/600d037c-0703-43c9-8f01-c8da82b114fd-kube-api-access-xkrg6\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:10.035601 master-0 kubenswrapper[7337]: I0312 18:14:10.032199 7337 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21fac822-b1df-42c3-8574-fa86e43d7ea4-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:10.035601 master-0 kubenswrapper[7337]: I0312 18:14:10.032207 7337 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/00755a4e-124c-4a51-b1c5-7c505b3637a8-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:10.035601 master-0 kubenswrapper[7337]: I0312 18:14:10.032216 7337 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21fac822-b1df-42c3-8574-fa86e43d7ea4-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:10.035601 master-0 kubenswrapper[7337]: I0312 18:14:10.032236 7337 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/00755a4e-124c-4a51-b1c5-7c505b3637a8-etc-ssl-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:10.035601 master-0 kubenswrapper[7337]: I0312 18:14:10.032245 7337 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21fac822-b1df-42c3-8574-fa86e43d7ea4-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:10.035601 master-0 kubenswrapper[7337]: I0312 18:14:10.032253 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56345785-8643-4f59-ab92-d4eb40d25312-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:10.035601 master-0 kubenswrapper[7337]: I0312 18:14:10.032261 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/00755a4e-124c-4a51-b1c5-7c505b3637a8-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:10.035601 master-0 kubenswrapper[7337]: I0312 18:14:10.032270 7337 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56345785-8643-4f59-ab92-d4eb40d25312-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:10.035601 master-0 kubenswrapper[7337]: I0312 18:14:10.032277 7337 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/600d037c-0703-43c9-8f01-c8da82b114fd-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:10.035601 master-0 kubenswrapper[7337]: I0312 18:14:10.032287 7337 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/600d037c-0703-43c9-8f01-c8da82b114fd-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:10.035601 master-0 kubenswrapper[7337]: I0312 18:14:10.032295 7337 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56345785-8643-4f59-ab92-d4eb40d25312-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:10.035601 master-0 kubenswrapper[7337]: I0312 18:14:10.032302 7337 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/600d037c-0703-43c9-8f01-c8da82b114fd-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:10.035601 master-0 kubenswrapper[7337]: I0312 18:14:10.032312 7337 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/600d037c-0703-43c9-8f01-c8da82b114fd-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:10.035601 master-0 kubenswrapper[7337]: I0312 18:14:10.033471 7337 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_56345785-8643-4f59-ab92-d4eb40d25312/installer/0.log" Mar 12 18:14:10.035601 master-0 kubenswrapper[7337]: I0312 18:14:10.033726 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"56345785-8643-4f59-ab92-d4eb40d25312","Type":"ContainerDied","Data":"329888663eb1d1a34fc232bab3b5d51f53bc5e1541113e9b1a0cb115cc3c8e5a"} Mar 12 18:14:10.035601 master-0 kubenswrapper[7337]: I0312 18:14:10.033775 7337 scope.go:117] "RemoveContainer" containerID="a4f9a56f69b75a1303b4dc8bf2fe9b299627c08d770e930cb53681ac90fcdf8a" Mar 12 18:14:10.035601 master-0 kubenswrapper[7337]: I0312 18:14:10.033880 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 18:14:10.040483 master-0 kubenswrapper[7337]: I0312 18:14:10.039976 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht" event={"ID":"21fac822-b1df-42c3-8574-fa86e43d7ea4","Type":"ContainerDied","Data":"4b3321d88006202c98d8ca0ae93d8f9d68729565e97acf051999a3a546d3319c"} Mar 12 18:14:10.040483 master-0 kubenswrapper[7337]: I0312 18:14:10.040060 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht" Mar 12 18:14:10.048009 master-0 kubenswrapper[7337]: I0312 18:14:10.047867 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c597958db-cclsx" event={"ID":"600d037c-0703-43c9-8f01-c8da82b114fd","Type":"ContainerDied","Data":"93a66548bb42611a70333af013dbc6db10084f9e626148e3dce0bdc571c9c53a"} Mar 12 18:14:10.048009 master-0 kubenswrapper[7337]: I0312 18:14:10.047964 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c597958db-cclsx" Mar 12 18:14:10.067392 master-0 kubenswrapper[7337]: I0312 18:14:10.057762 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" event={"ID":"00755a4e-124c-4a51-b1c5-7c505b3637a8","Type":"ContainerDied","Data":"7c9e68c50d09c9f8a89015bdfd2c1cf33c28b6a7d845aef581a57e003e8e6cc7"} Mar 12 18:14:10.067392 master-0 kubenswrapper[7337]: I0312 18:14:10.057871 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx" Mar 12 18:14:10.106408 master-0 kubenswrapper[7337]: I0312 18:14:10.105460 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht"] Mar 12 18:14:10.108250 master-0 kubenswrapper[7337]: I0312 18:14:10.108190 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cd7bb8bd9-qb8ht"] Mar 12 18:14:10.112388 master-0 kubenswrapper[7337]: I0312 18:14:10.112346 7337 scope.go:117] "RemoveContainer" containerID="257ce4c9ab5ca88cc44891cc3278fd9efdca93406f4d5d0347483fa67ee4dda8" Mar 12 18:14:10.113498 master-0 kubenswrapper[7337]: I0312 18:14:10.113385 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c597958db-cclsx"] Mar 12 18:14:10.119365 master-0 kubenswrapper[7337]: I0312 18:14:10.119156 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5c597958db-cclsx"] Mar 12 18:14:10.145639 master-0 kubenswrapper[7337]: I0312 18:14:10.144868 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx"] Mar 12 18:14:10.147801 master-0 kubenswrapper[7337]: I0312 18:14:10.147750 7337 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-cxwmx"] Mar 12 18:14:10.149488 master-0 kubenswrapper[7337]: I0312 18:14:10.148212 7337 scope.go:117] "RemoveContainer" containerID="06ed6eb88ed1f644751ea6fbc8ab9068fa3790ca5e4deea484dd8d55fa290007" Mar 12 18:14:10.183879 master-0 kubenswrapper[7337]: I0312 18:14:10.183099 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 12 18:14:10.188381 master-0 kubenswrapper[7337]: I0312 18:14:10.188325 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 12 18:14:10.194312 master-0 kubenswrapper[7337]: I0312 18:14:10.193772 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k"] Mar 12 18:14:10.194312 master-0 kubenswrapper[7337]: E0312 18:14:10.194119 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56345785-8643-4f59-ab92-d4eb40d25312" containerName="installer" Mar 12 18:14:10.194312 master-0 kubenswrapper[7337]: I0312 18:14:10.194132 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="56345785-8643-4f59-ab92-d4eb40d25312" containerName="installer" Mar 12 18:14:10.194312 master-0 kubenswrapper[7337]: E0312 18:14:10.194148 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="600d037c-0703-43c9-8f01-c8da82b114fd" containerName="controller-manager" Mar 12 18:14:10.194312 master-0 kubenswrapper[7337]: I0312 18:14:10.194154 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="600d037c-0703-43c9-8f01-c8da82b114fd" containerName="controller-manager" Mar 12 18:14:10.194312 master-0 kubenswrapper[7337]: E0312 18:14:10.194169 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21fac822-b1df-42c3-8574-fa86e43d7ea4" containerName="route-controller-manager" Mar 12 18:14:10.194312 master-0 kubenswrapper[7337]: 
I0312 18:14:10.194175 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="21fac822-b1df-42c3-8574-fa86e43d7ea4" containerName="route-controller-manager" Mar 12 18:14:10.194312 master-0 kubenswrapper[7337]: E0312 18:14:10.194183 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00755a4e-124c-4a51-b1c5-7c505b3637a8" containerName="cluster-version-operator" Mar 12 18:14:10.194312 master-0 kubenswrapper[7337]: I0312 18:14:10.194190 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="00755a4e-124c-4a51-b1c5-7c505b3637a8" containerName="cluster-version-operator" Mar 12 18:14:10.194312 master-0 kubenswrapper[7337]: I0312 18:14:10.194255 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="00755a4e-124c-4a51-b1c5-7c505b3637a8" containerName="cluster-version-operator" Mar 12 18:14:10.194312 master-0 kubenswrapper[7337]: I0312 18:14:10.194267 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="56345785-8643-4f59-ab92-d4eb40d25312" containerName="installer" Mar 12 18:14:10.194312 master-0 kubenswrapper[7337]: I0312 18:14:10.194275 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="21fac822-b1df-42c3-8574-fa86e43d7ea4" containerName="route-controller-manager" Mar 12 18:14:10.195103 master-0 kubenswrapper[7337]: I0312 18:14:10.194469 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="600d037c-0703-43c9-8f01-c8da82b114fd" containerName="controller-manager" Mar 12 18:14:10.195103 master-0 kubenswrapper[7337]: I0312 18:14:10.194801 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:14:10.197962 master-0 kubenswrapper[7337]: I0312 18:14:10.197739 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 12 18:14:10.197962 master-0 kubenswrapper[7337]: I0312 18:14:10.197929 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 12 18:14:10.202495 master-0 kubenswrapper[7337]: I0312 18:14:10.199309 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 12 18:14:10.213548 master-0 kubenswrapper[7337]: I0312 18:14:10.213487 7337 scope.go:117] "RemoveContainer" containerID="c1a8fe3c9ec9293190da1abf5d84165878cc28e2ff9a4187ebb4b5e5ee9ed66b" Mar 12 18:14:10.240995 master-0 kubenswrapper[7337]: I0312 18:14:10.240947 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4048e453-a983-4708-89b6-a81af0067e29-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:14:10.241828 master-0 kubenswrapper[7337]: I0312 18:14:10.241001 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4048e453-a983-4708-89b6-a81af0067e29-serving-cert\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:14:10.241828 master-0 kubenswrapper[7337]: I0312 18:14:10.241023 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/4048e453-a983-4708-89b6-a81af0067e29-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:14:10.241828 master-0 kubenswrapper[7337]: I0312 18:14:10.241090 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4048e453-a983-4708-89b6-a81af0067e29-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:14:10.241828 master-0 kubenswrapper[7337]: I0312 18:14:10.241210 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4048e453-a983-4708-89b6-a81af0067e29-service-ca\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:14:10.326782 master-0 kubenswrapper[7337]: I0312 18:14:10.322083 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 12 18:14:10.346060 master-0 kubenswrapper[7337]: I0312 18:14:10.346031 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4048e453-a983-4708-89b6-a81af0067e29-serving-cert\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:14:10.346142 master-0 kubenswrapper[7337]: I0312 18:14:10.346071 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/4048e453-a983-4708-89b6-a81af0067e29-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:14:10.346142 master-0 kubenswrapper[7337]: I0312 18:14:10.346097 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4048e453-a983-4708-89b6-a81af0067e29-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:14:10.346142 master-0 kubenswrapper[7337]: I0312 18:14:10.346119 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4048e453-a983-4708-89b6-a81af0067e29-service-ca\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:14:10.346228 master-0 kubenswrapper[7337]: I0312 18:14:10.346153 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4048e453-a983-4708-89b6-a81af0067e29-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:14:10.346228 master-0 kubenswrapper[7337]: I0312 18:14:10.346210 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4048e453-a983-4708-89b6-a81af0067e29-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " 
pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:14:10.346688 master-0 kubenswrapper[7337]: I0312 18:14:10.346651 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4048e453-a983-4708-89b6-a81af0067e29-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:14:10.347546 master-0 kubenswrapper[7337]: I0312 18:14:10.347524 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4048e453-a983-4708-89b6-a81af0067e29-service-ca\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:14:10.355120 master-0 kubenswrapper[7337]: I0312 18:14:10.355083 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4048e453-a983-4708-89b6-a81af0067e29-serving-cert\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:14:10.374706 master-0 kubenswrapper[7337]: I0312 18:14:10.374653 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4048e453-a983-4708-89b6-a81af0067e29-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:14:10.445595 master-0 kubenswrapper[7337]: I0312 18:14:10.444013 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] 
Mar 12 18:14:10.445595 master-0 kubenswrapper[7337]: I0312 18:14:10.445378 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"] Mar 12 18:14:10.509087 master-0 kubenswrapper[7337]: I0312 18:14:10.509055 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 12 18:14:10.554149 master-0 kubenswrapper[7337]: W0312 18:14:10.551956 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod38785e6e_3052_405c_8874_4f295985def5.slice/crio-d2719f778fae8a1410c74e71ed0769412f583c56fba7c4dc342221e161dce0bd WatchSource:0}: Error finding container d2719f778fae8a1410c74e71ed0769412f583c56fba7c4dc342221e161dce0bd: Status 404 returned error can't find the container with id d2719f778fae8a1410c74e71ed0769412f583c56fba7c4dc342221e161dce0bd Mar 12 18:14:10.601279 master-0 kubenswrapper[7337]: I0312 18:14:10.601219 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:14:10.768616 master-0 kubenswrapper[7337]: I0312 18:14:10.768576 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" Mar 12 18:14:10.775529 master-0 kubenswrapper[7337]: I0312 18:14:10.773312 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" Mar 12 18:14:11.070300 master-0 kubenswrapper[7337]: I0312 18:14:11.070245 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" event={"ID":"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27","Type":"ContainerStarted","Data":"8cd8d9c414efb9a1afe112e22dac0c2d6d3f331000e1f66770596c52c2644c01"} Mar 12 18:14:11.071289 master-0 kubenswrapper[7337]: I0312 18:14:11.071265 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" Mar 12 18:14:11.074092 master-0 kubenswrapper[7337]: I0312 18:14:11.074044 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-z4sc9" event={"ID":"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f","Type":"ContainerStarted","Data":"633a386e76a823076ab467422478d08ac351422fa35ea9ef2eadb9dad46eae33"} Mar 12 18:14:11.074146 master-0 kubenswrapper[7337]: I0312 18:14:11.074091 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-z4sc9" event={"ID":"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f","Type":"ContainerStarted","Data":"d9028e487d2717cec5fbddf51229e128bba45ce7530fe3ba0bfee5ffd865ed82"} Mar 12 18:14:11.075530 master-0 kubenswrapper[7337]: I0312 18:14:11.075442 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" Mar 12 18:14:11.075920 master-0 
kubenswrapper[7337]: I0312 18:14:11.075882 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" event={"ID":"51eb717b-d11f-4bc3-8df6-deb51d5889f3","Type":"ContainerStarted","Data":"33470f162304f6a1c732da622d08f9a2cb10dfebe7eb3e1cc79d0a55f3c66c95"} Mar 12 18:14:11.076063 master-0 kubenswrapper[7337]: I0312 18:14:11.076025 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:14:11.077207 master-0 kubenswrapper[7337]: I0312 18:14:11.077172 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"f2d3635b-61ed-4f81-8735-47be74319c67","Type":"ContainerStarted","Data":"a93de5cae20a0c2be516e05565af87002244d2ae7747d73ec6d4ce23b90b3dea"} Mar 12 18:14:11.077207 master-0 kubenswrapper[7337]: I0312 18:14:11.077203 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"f2d3635b-61ed-4f81-8735-47be74319c67","Type":"ContainerStarted","Data":"5d883da860f9aa6a7aa5626f2e240823e0e89b289e407a0285fc8b0a5e547f31"} Mar 12 18:14:11.077403 master-0 kubenswrapper[7337]: I0312 18:14:11.077314 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-3-master-0" podUID="f2d3635b-61ed-4f81-8735-47be74319c67" containerName="installer" containerID="cri-o://a93de5cae20a0c2be516e05565af87002244d2ae7747d73ec6d4ce23b90b3dea" gracePeriod=30 Mar 12 18:14:11.087111 master-0 kubenswrapper[7337]: I0312 18:14:11.087072 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" event={"ID":"875bdfaa-b0a4-4412-a477-c962844e7057","Type":"ContainerStarted","Data":"4e982f58af65b335ca3753572817b82f818c04cb442b9fd1ca79a8530ee48175"} Mar 12 18:14:11.087111 master-0 
kubenswrapper[7337]: I0312 18:14:11.087118 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" event={"ID":"875bdfaa-b0a4-4412-a477-c962844e7057","Type":"ContainerStarted","Data":"017d55d2922541b7f8cf5506cf963835c47653756128a5be4b04a566d5989973"} Mar 12 18:14:11.089887 master-0 kubenswrapper[7337]: I0312 18:14:11.089735 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" event={"ID":"4048e453-a983-4708-89b6-a81af0067e29","Type":"ContainerStarted","Data":"570936a0a36edb0fda6b55c99e7f566dfd145b7b28da0dcae1b91148af7c1a36"} Mar 12 18:14:11.089887 master-0 kubenswrapper[7337]: I0312 18:14:11.089791 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" event={"ID":"4048e453-a983-4708-89b6-a81af0067e29","Type":"ContainerStarted","Data":"49ed17fafdb495990cffcb60e09d22b57348e9bbf59679c7126d84628d0f24f1"} Mar 12 18:14:11.107348 master-0 kubenswrapper[7337]: I0312 18:14:11.107292 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg" event={"ID":"fb529297-b3de-4167-a91e-0a63725b3b0f","Type":"ContainerStarted","Data":"81a2ffe73dc94d42d0d0d238f88887bd148c25d4cd10443967e58bd472ed7cfd"} Mar 12 18:14:11.109372 master-0 kubenswrapper[7337]: I0312 18:14:11.109119 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" event={"ID":"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64","Type":"ContainerStarted","Data":"91ac1142b73c6d3658240c3848ad3ec4d35a6a2c1e366a3eec630ba38825ae3c"} Mar 12 18:14:11.110270 master-0 kubenswrapper[7337]: I0312 18:14:11.109966 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:14:11.111068 master-0 kubenswrapper[7337]: I0312 
18:14:11.111030 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"38785e6e-3052-405c-8874-4f295985def5","Type":"ContainerStarted","Data":"ad09860af65a7f4806ecc5c16545e1e14574d76310388c1e9bda798b177013f0"} Mar 12 18:14:11.111135 master-0 kubenswrapper[7337]: I0312 18:14:11.111079 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"38785e6e-3052-405c-8874-4f295985def5","Type":"ContainerStarted","Data":"d2719f778fae8a1410c74e71ed0769412f583c56fba7c4dc342221e161dce0bd"} Mar 12 18:14:11.116876 master-0 kubenswrapper[7337]: I0312 18:14:11.116849 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:14:11.126553 master-0 kubenswrapper[7337]: I0312 18:14:11.126497 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" event={"ID":"e94d098b-fbcc-4e85-b8ad-42f3a21c822c","Type":"ContainerStarted","Data":"9bdcd14d07d14a4e373b5ae51f798eb8c612258e7c0d9bcf7e3f1b06a06095b3"} Mar 12 18:14:11.130987 master-0 kubenswrapper[7337]: I0312 18:14:11.130945 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"8cd18201-afdc-4229-972e-ab01adb2a7f3","Type":"ContainerStarted","Data":"ddcae4325b25bb06e6c7df16759097b61b22ac82cbe47445944983995281e38f"} Mar 12 18:14:11.130987 master-0 kubenswrapper[7337]: I0312 18:14:11.130987 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"8cd18201-afdc-4229-972e-ab01adb2a7f3","Type":"ContainerStarted","Data":"be0c2d6cb5987b5b04f809bb69ec34be4077cf368f8ec9a517894818769dcd9b"} Mar 12 18:14:11.143398 master-0 kubenswrapper[7337]: I0312 18:14:11.142666 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" event={"ID":"47850839-bb4b-41e9-ac31-f1cabbb4926d","Type":"ContainerStarted","Data":"23c2256a50a2f91587bb555e618132804718a8b7d4ab45b6c532ae36a08f8fbd"} Mar 12 18:14:11.143398 master-0 kubenswrapper[7337]: I0312 18:14:11.142709 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:14:11.153565 master-0 kubenswrapper[7337]: I0312 18:14:11.153405 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:14:11.265543 master-0 kubenswrapper[7337]: I0312 18:14:11.265402 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" podStartSLOduration=1.265379316 podStartE2EDuration="1.265379316s" podCreationTimestamp="2026-03-12 18:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:14:11.209958783 +0000 UTC m=+51.678559740" watchObservedRunningTime="2026-03-12 18:14:11.265379316 +0000 UTC m=+51.733980293" Mar 12 18:14:11.265998 master-0 kubenswrapper[7337]: I0312 18:14:11.265625 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=2.265620272 podStartE2EDuration="2.265620272s" podCreationTimestamp="2026-03-12 18:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:14:11.265092268 +0000 UTC m=+51.733693215" watchObservedRunningTime="2026-03-12 18:14:11.265620272 +0000 UTC m=+51.734221219" Mar 12 18:14:11.378208 master-0 kubenswrapper[7337]: I0312 18:14:11.374890 7337 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=5.374855646 podStartE2EDuration="5.374855646s" podCreationTimestamp="2026-03-12 18:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:14:11.374134658 +0000 UTC m=+51.842735625" watchObservedRunningTime="2026-03-12 18:14:11.374855646 +0000 UTC m=+51.843456593" Mar 12 18:14:11.414534 master-0 kubenswrapper[7337]: I0312 18:14:11.414200 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=5.414181022 podStartE2EDuration="5.414181022s" podCreationTimestamp="2026-03-12 18:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:14:11.4121563 +0000 UTC m=+51.880757257" watchObservedRunningTime="2026-03-12 18:14:11.414181022 +0000 UTC m=+51.882781969" Mar 12 18:14:11.582253 master-0 kubenswrapper[7337]: I0312 18:14:11.580674 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_f2d3635b-61ed-4f81-8735-47be74319c67/installer/0.log" Mar 12 18:14:11.582253 master-0 kubenswrapper[7337]: I0312 18:14:11.580753 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 18:14:11.582253 master-0 kubenswrapper[7337]: I0312 18:14:11.581058 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4s28n"] Mar 12 18:14:11.582253 master-0 kubenswrapper[7337]: E0312 18:14:11.581262 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d3635b-61ed-4f81-8735-47be74319c67" containerName="installer" Mar 12 18:14:11.582253 master-0 kubenswrapper[7337]: I0312 18:14:11.581287 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d3635b-61ed-4f81-8735-47be74319c67" containerName="installer" Mar 12 18:14:11.582253 master-0 kubenswrapper[7337]: I0312 18:14:11.581404 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2d3635b-61ed-4f81-8735-47be74319c67" containerName="installer" Mar 12 18:14:11.582253 master-0 kubenswrapper[7337]: I0312 18:14:11.582065 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4s28n" Mar 12 18:14:11.605537 master-0 kubenswrapper[7337]: I0312 18:14:11.604370 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4s28n"] Mar 12 18:14:11.673661 master-0 kubenswrapper[7337]: I0312 18:14:11.673554 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6q2f\" (UniqueName: \"kubernetes.io/projected/9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a-kube-api-access-b6q2f\") pod \"redhat-operators-4s28n\" (UID: \"9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a\") " pod="openshift-marketplace/redhat-operators-4s28n" Mar 12 18:14:11.673661 master-0 kubenswrapper[7337]: I0312 18:14:11.673601 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a-catalog-content\") pod 
\"redhat-operators-4s28n\" (UID: \"9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a\") " pod="openshift-marketplace/redhat-operators-4s28n" Mar 12 18:14:11.673661 master-0 kubenswrapper[7337]: I0312 18:14:11.673632 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a-utilities\") pod \"redhat-operators-4s28n\" (UID: \"9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a\") " pod="openshift-marketplace/redhat-operators-4s28n" Mar 12 18:14:11.732865 master-0 kubenswrapper[7337]: I0312 18:14:11.732815 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00755a4e-124c-4a51-b1c5-7c505b3637a8" path="/var/lib/kubelet/pods/00755a4e-124c-4a51-b1c5-7c505b3637a8/volumes" Mar 12 18:14:11.735552 master-0 kubenswrapper[7337]: I0312 18:14:11.733410 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21fac822-b1df-42c3-8574-fa86e43d7ea4" path="/var/lib/kubelet/pods/21fac822-b1df-42c3-8574-fa86e43d7ea4/volumes" Mar 12 18:14:11.735552 master-0 kubenswrapper[7337]: I0312 18:14:11.733852 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56345785-8643-4f59-ab92-d4eb40d25312" path="/var/lib/kubelet/pods/56345785-8643-4f59-ab92-d4eb40d25312/volumes" Mar 12 18:14:11.735552 master-0 kubenswrapper[7337]: I0312 18:14:11.734783 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="600d037c-0703-43c9-8f01-c8da82b114fd" path="/var/lib/kubelet/pods/600d037c-0703-43c9-8f01-c8da82b114fd/volumes" Mar 12 18:14:11.775923 master-0 kubenswrapper[7337]: I0312 18:14:11.775873 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2d3635b-61ed-4f81-8735-47be74319c67-kubelet-dir\") pod \"f2d3635b-61ed-4f81-8735-47be74319c67\" (UID: \"f2d3635b-61ed-4f81-8735-47be74319c67\") " Mar 12 18:14:11.776115 master-0 kubenswrapper[7337]: 
I0312 18:14:11.775989 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2d3635b-61ed-4f81-8735-47be74319c67-kube-api-access\") pod \"f2d3635b-61ed-4f81-8735-47be74319c67\" (UID: \"f2d3635b-61ed-4f81-8735-47be74319c67\") " Mar 12 18:14:11.776115 master-0 kubenswrapper[7337]: I0312 18:14:11.776046 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2d3635b-61ed-4f81-8735-47be74319c67-var-lock\") pod \"f2d3635b-61ed-4f81-8735-47be74319c67\" (UID: \"f2d3635b-61ed-4f81-8735-47be74319c67\") " Mar 12 18:14:11.776249 master-0 kubenswrapper[7337]: I0312 18:14:11.776223 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2d3635b-61ed-4f81-8735-47be74319c67-var-lock" (OuterVolumeSpecName: "var-lock") pod "f2d3635b-61ed-4f81-8735-47be74319c67" (UID: "f2d3635b-61ed-4f81-8735-47be74319c67"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:14:11.776290 master-0 kubenswrapper[7337]: I0312 18:14:11.776210 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2d3635b-61ed-4f81-8735-47be74319c67-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f2d3635b-61ed-4f81-8735-47be74319c67" (UID: "f2d3635b-61ed-4f81-8735-47be74319c67"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:14:11.776357 master-0 kubenswrapper[7337]: I0312 18:14:11.776323 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6q2f\" (UniqueName: \"kubernetes.io/projected/9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a-kube-api-access-b6q2f\") pod \"redhat-operators-4s28n\" (UID: \"9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a\") " pod="openshift-marketplace/redhat-operators-4s28n" Mar 12 18:14:11.776691 master-0 kubenswrapper[7337]: I0312 18:14:11.776664 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a-catalog-content\") pod \"redhat-operators-4s28n\" (UID: \"9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a\") " pod="openshift-marketplace/redhat-operators-4s28n" Mar 12 18:14:11.776795 master-0 kubenswrapper[7337]: I0312 18:14:11.776782 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a-utilities\") pod \"redhat-operators-4s28n\" (UID: \"9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a\") " pod="openshift-marketplace/redhat-operators-4s28n" Mar 12 18:14:11.776920 master-0 kubenswrapper[7337]: I0312 18:14:11.776909 7337 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2d3635b-61ed-4f81-8735-47be74319c67-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:11.776988 master-0 kubenswrapper[7337]: I0312 18:14:11.776979 7337 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2d3635b-61ed-4f81-8735-47be74319c67-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:11.777236 master-0 kubenswrapper[7337]: I0312 18:14:11.777202 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a-catalog-content\") pod \"redhat-operators-4s28n\" (UID: \"9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a\") " pod="openshift-marketplace/redhat-operators-4s28n" Mar 12 18:14:11.777338 master-0 kubenswrapper[7337]: I0312 18:14:11.777299 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a-utilities\") pod \"redhat-operators-4s28n\" (UID: \"9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a\") " pod="openshift-marketplace/redhat-operators-4s28n" Mar 12 18:14:11.778860 master-0 kubenswrapper[7337]: I0312 18:14:11.778826 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2d3635b-61ed-4f81-8735-47be74319c67-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f2d3635b-61ed-4f81-8735-47be74319c67" (UID: "f2d3635b-61ed-4f81-8735-47be74319c67"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:14:11.883861 master-0 kubenswrapper[7337]: I0312 18:14:11.880390 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2d3635b-61ed-4f81-8735-47be74319c67-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:12.146376 master-0 kubenswrapper[7337]: I0312 18:14:12.146312 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2vwn7"] Mar 12 18:14:12.148673 master-0 kubenswrapper[7337]: I0312 18:14:12.148632 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2vwn7" Mar 12 18:14:12.166959 master-0 kubenswrapper[7337]: I0312 18:14:12.166915 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_56a4489e-252b-44a7-8310-3b699b2af7d6/installer/0.log" Mar 12 18:14:12.167111 master-0 kubenswrapper[7337]: I0312 18:14:12.166965 7337 generic.go:334] "Generic (PLEG): container finished" podID="56a4489e-252b-44a7-8310-3b699b2af7d6" containerID="53ddad5a8c9cbca89afea7f839486d18707671bc74821fb0560b014cdd65817f" exitCode=1 Mar 12 18:14:12.167111 master-0 kubenswrapper[7337]: I0312 18:14:12.167018 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"56a4489e-252b-44a7-8310-3b699b2af7d6","Type":"ContainerDied","Data":"53ddad5a8c9cbca89afea7f839486d18707671bc74821fb0560b014cdd65817f"} Mar 12 18:14:12.172893 master-0 kubenswrapper[7337]: I0312 18:14:12.172845 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_f2d3635b-61ed-4f81-8735-47be74319c67/installer/0.log" Mar 12 18:14:12.173026 master-0 kubenswrapper[7337]: I0312 18:14:12.172907 7337 generic.go:334] "Generic (PLEG): container finished" podID="f2d3635b-61ed-4f81-8735-47be74319c67" containerID="a93de5cae20a0c2be516e05565af87002244d2ae7747d73ec6d4ce23b90b3dea" exitCode=2 Mar 12 18:14:12.173026 master-0 kubenswrapper[7337]: I0312 18:14:12.172991 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 18:14:12.173116 master-0 kubenswrapper[7337]: I0312 18:14:12.173044 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"f2d3635b-61ed-4f81-8735-47be74319c67","Type":"ContainerDied","Data":"a93de5cae20a0c2be516e05565af87002244d2ae7747d73ec6d4ce23b90b3dea"} Mar 12 18:14:12.173116 master-0 kubenswrapper[7337]: I0312 18:14:12.173110 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"f2d3635b-61ed-4f81-8735-47be74319c67","Type":"ContainerDied","Data":"5d883da860f9aa6a7aa5626f2e240823e0e89b289e407a0285fc8b0a5e547f31"} Mar 12 18:14:12.173204 master-0 kubenswrapper[7337]: I0312 18:14:12.173129 7337 scope.go:117] "RemoveContainer" containerID="a93de5cae20a0c2be516e05565af87002244d2ae7747d73ec6d4ce23b90b3dea" Mar 12 18:14:12.204843 master-0 kubenswrapper[7337]: I0312 18:14:12.204745 7337 scope.go:117] "RemoveContainer" containerID="a93de5cae20a0c2be516e05565af87002244d2ae7747d73ec6d4ce23b90b3dea" Mar 12 18:14:12.205607 master-0 kubenswrapper[7337]: E0312 18:14:12.205449 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a93de5cae20a0c2be516e05565af87002244d2ae7747d73ec6d4ce23b90b3dea\": container with ID starting with a93de5cae20a0c2be516e05565af87002244d2ae7747d73ec6d4ce23b90b3dea not found: ID does not exist" containerID="a93de5cae20a0c2be516e05565af87002244d2ae7747d73ec6d4ce23b90b3dea" Mar 12 18:14:12.205607 master-0 kubenswrapper[7337]: I0312 18:14:12.205488 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a93de5cae20a0c2be516e05565af87002244d2ae7747d73ec6d4ce23b90b3dea"} err="failed to get container status \"a93de5cae20a0c2be516e05565af87002244d2ae7747d73ec6d4ce23b90b3dea\": rpc error: code = NotFound desc = could not find 
container \"a93de5cae20a0c2be516e05565af87002244d2ae7747d73ec6d4ce23b90b3dea\": container with ID starting with a93de5cae20a0c2be516e05565af87002244d2ae7747d73ec6d4ce23b90b3dea not found: ID does not exist" Mar 12 18:14:12.295994 master-0 kubenswrapper[7337]: I0312 18:14:12.295939 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f516dab9-06d1-4bea-96b9-8f3e14543bbd-utilities\") pod \"certified-operators-2vwn7\" (UID: \"f516dab9-06d1-4bea-96b9-8f3e14543bbd\") " pod="openshift-marketplace/certified-operators-2vwn7" Mar 12 18:14:12.296430 master-0 kubenswrapper[7337]: I0312 18:14:12.296044 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvd6n\" (UniqueName: \"kubernetes.io/projected/f516dab9-06d1-4bea-96b9-8f3e14543bbd-kube-api-access-qvd6n\") pod \"certified-operators-2vwn7\" (UID: \"f516dab9-06d1-4bea-96b9-8f3e14543bbd\") " pod="openshift-marketplace/certified-operators-2vwn7" Mar 12 18:14:12.296430 master-0 kubenswrapper[7337]: I0312 18:14:12.296079 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f516dab9-06d1-4bea-96b9-8f3e14543bbd-catalog-content\") pod \"certified-operators-2vwn7\" (UID: \"f516dab9-06d1-4bea-96b9-8f3e14543bbd\") " pod="openshift-marketplace/certified-operators-2vwn7" Mar 12 18:14:12.365549 master-0 kubenswrapper[7337]: I0312 18:14:12.362458 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2vwn7"] Mar 12 18:14:12.365935 master-0 kubenswrapper[7337]: I0312 18:14:12.365903 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6q2f\" (UniqueName: \"kubernetes.io/projected/9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a-kube-api-access-b6q2f\") pod \"redhat-operators-4s28n\" (UID: 
\"9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a\") " pod="openshift-marketplace/redhat-operators-4s28n" Mar 12 18:14:12.380595 master-0 kubenswrapper[7337]: I0312 18:14:12.368325 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b55d98459-sr4hk"] Mar 12 18:14:12.380595 master-0 kubenswrapper[7337]: I0312 18:14:12.369124 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd"] Mar 12 18:14:12.380595 master-0 kubenswrapper[7337]: I0312 18:14:12.369566 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" Mar 12 18:14:12.380595 master-0 kubenswrapper[7337]: I0312 18:14:12.369978 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:14:12.387591 master-0 kubenswrapper[7337]: I0312 18:14:12.384738 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 18:14:12.387591 master-0 kubenswrapper[7337]: I0312 18:14:12.384929 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 18:14:12.387591 master-0 kubenswrapper[7337]: I0312 18:14:12.385025 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 18:14:12.387591 master-0 kubenswrapper[7337]: I0312 18:14:12.385040 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 18:14:12.387591 master-0 kubenswrapper[7337]: I0312 18:14:12.385113 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 18:14:12.387591 master-0 kubenswrapper[7337]: I0312 18:14:12.385192 7337 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 18:14:12.387591 master-0 kubenswrapper[7337]: I0312 18:14:12.385311 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 18:14:12.387591 master-0 kubenswrapper[7337]: I0312 18:14:12.385323 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 18:14:12.387591 master-0 kubenswrapper[7337]: I0312 18:14:12.385396 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 18:14:12.388799 master-0 kubenswrapper[7337]: I0312 18:14:12.388777 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 18:14:12.390926 master-0 kubenswrapper[7337]: I0312 18:14:12.390416 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 18:14:12.397090 master-0 kubenswrapper[7337]: I0312 18:14:12.397058 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f516dab9-06d1-4bea-96b9-8f3e14543bbd-utilities\") pod \"certified-operators-2vwn7\" (UID: \"f516dab9-06d1-4bea-96b9-8f3e14543bbd\") " pod="openshift-marketplace/certified-operators-2vwn7" Mar 12 18:14:12.397215 master-0 kubenswrapper[7337]: I0312 18:14:12.397103 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvd6n\" (UniqueName: \"kubernetes.io/projected/f516dab9-06d1-4bea-96b9-8f3e14543bbd-kube-api-access-qvd6n\") pod \"certified-operators-2vwn7\" (UID: \"f516dab9-06d1-4bea-96b9-8f3e14543bbd\") " pod="openshift-marketplace/certified-operators-2vwn7" Mar 12 18:14:12.397215 master-0 kubenswrapper[7337]: I0312 18:14:12.397127 
7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f516dab9-06d1-4bea-96b9-8f3e14543bbd-catalog-content\") pod \"certified-operators-2vwn7\" (UID: \"f516dab9-06d1-4bea-96b9-8f3e14543bbd\") " pod="openshift-marketplace/certified-operators-2vwn7" Mar 12 18:14:12.397622 master-0 kubenswrapper[7337]: I0312 18:14:12.397557 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f516dab9-06d1-4bea-96b9-8f3e14543bbd-catalog-content\") pod \"certified-operators-2vwn7\" (UID: \"f516dab9-06d1-4bea-96b9-8f3e14543bbd\") " pod="openshift-marketplace/certified-operators-2vwn7" Mar 12 18:14:12.397622 master-0 kubenswrapper[7337]: I0312 18:14:12.397591 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f516dab9-06d1-4bea-96b9-8f3e14543bbd-utilities\") pod \"certified-operators-2vwn7\" (UID: \"f516dab9-06d1-4bea-96b9-8f3e14543bbd\") " pod="openshift-marketplace/certified-operators-2vwn7" Mar 12 18:14:12.498645 master-0 kubenswrapper[7337]: I0312 18:14:12.498547 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lm82\" (UniqueName: \"kubernetes.io/projected/e27d2693-1a06-473e-a126-614b939bae33-kube-api-access-6lm82\") pod \"route-controller-manager-79884f6cc-tpdsd\" (UID: \"e27d2693-1a06-473e-a126-614b939bae33\") " pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" Mar 12 18:14:12.498832 master-0 kubenswrapper[7337]: I0312 18:14:12.498775 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30c5dc4b-f1c8-4773-b961-985740fcc503-config\") pod \"controller-manager-5b55d98459-sr4hk\" (UID: \"30c5dc4b-f1c8-4773-b961-985740fcc503\") " 
pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:14:12.498832 master-0 kubenswrapper[7337]: I0312 18:14:12.498801 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flnvn\" (UniqueName: \"kubernetes.io/projected/30c5dc4b-f1c8-4773-b961-985740fcc503-kube-api-access-flnvn\") pod \"controller-manager-5b55d98459-sr4hk\" (UID: \"30c5dc4b-f1c8-4773-b961-985740fcc503\") " pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:14:12.498832 master-0 kubenswrapper[7337]: I0312 18:14:12.498819 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e27d2693-1a06-473e-a126-614b939bae33-serving-cert\") pod \"route-controller-manager-79884f6cc-tpdsd\" (UID: \"e27d2693-1a06-473e-a126-614b939bae33\") " pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" Mar 12 18:14:12.498918 master-0 kubenswrapper[7337]: I0312 18:14:12.498848 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27d2693-1a06-473e-a126-614b939bae33-config\") pod \"route-controller-manager-79884f6cc-tpdsd\" (UID: \"e27d2693-1a06-473e-a126-614b939bae33\") " pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" Mar 12 18:14:12.498918 master-0 kubenswrapper[7337]: I0312 18:14:12.498881 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30c5dc4b-f1c8-4773-b961-985740fcc503-serving-cert\") pod \"controller-manager-5b55d98459-sr4hk\" (UID: \"30c5dc4b-f1c8-4773-b961-985740fcc503\") " pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:14:12.498918 master-0 kubenswrapper[7337]: I0312 18:14:12.498904 7337 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e27d2693-1a06-473e-a126-614b939bae33-client-ca\") pod \"route-controller-manager-79884f6cc-tpdsd\" (UID: \"e27d2693-1a06-473e-a126-614b939bae33\") " pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" Mar 12 18:14:12.499000 master-0 kubenswrapper[7337]: I0312 18:14:12.498921 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30c5dc4b-f1c8-4773-b961-985740fcc503-client-ca\") pod \"controller-manager-5b55d98459-sr4hk\" (UID: \"30c5dc4b-f1c8-4773-b961-985740fcc503\") " pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:14:12.499000 master-0 kubenswrapper[7337]: I0312 18:14:12.498937 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30c5dc4b-f1c8-4773-b961-985740fcc503-proxy-ca-bundles\") pod \"controller-manager-5b55d98459-sr4hk\" (UID: \"30c5dc4b-f1c8-4773-b961-985740fcc503\") " pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:14:12.509665 master-0 kubenswrapper[7337]: I0312 18:14:12.509612 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4s28n" Mar 12 18:14:12.600399 master-0 kubenswrapper[7337]: I0312 18:14:12.600342 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lm82\" (UniqueName: \"kubernetes.io/projected/e27d2693-1a06-473e-a126-614b939bae33-kube-api-access-6lm82\") pod \"route-controller-manager-79884f6cc-tpdsd\" (UID: \"e27d2693-1a06-473e-a126-614b939bae33\") " pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" Mar 12 18:14:12.600399 master-0 kubenswrapper[7337]: I0312 18:14:12.600402 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30c5dc4b-f1c8-4773-b961-985740fcc503-config\") pod \"controller-manager-5b55d98459-sr4hk\" (UID: \"30c5dc4b-f1c8-4773-b961-985740fcc503\") " pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:14:12.600826 master-0 kubenswrapper[7337]: I0312 18:14:12.600791 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flnvn\" (UniqueName: \"kubernetes.io/projected/30c5dc4b-f1c8-4773-b961-985740fcc503-kube-api-access-flnvn\") pod \"controller-manager-5b55d98459-sr4hk\" (UID: \"30c5dc4b-f1c8-4773-b961-985740fcc503\") " pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:14:12.600875 master-0 kubenswrapper[7337]: I0312 18:14:12.600836 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e27d2693-1a06-473e-a126-614b939bae33-serving-cert\") pod \"route-controller-manager-79884f6cc-tpdsd\" (UID: \"e27d2693-1a06-473e-a126-614b939bae33\") " pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" Mar 12 18:14:12.600875 master-0 kubenswrapper[7337]: I0312 18:14:12.600857 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27d2693-1a06-473e-a126-614b939bae33-config\") pod \"route-controller-manager-79884f6cc-tpdsd\" (UID: \"e27d2693-1a06-473e-a126-614b939bae33\") " pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" Mar 12 18:14:12.600951 master-0 kubenswrapper[7337]: I0312 18:14:12.600903 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30c5dc4b-f1c8-4773-b961-985740fcc503-serving-cert\") pod \"controller-manager-5b55d98459-sr4hk\" (UID: \"30c5dc4b-f1c8-4773-b961-985740fcc503\") " pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:14:12.600951 master-0 kubenswrapper[7337]: I0312 18:14:12.600931 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e27d2693-1a06-473e-a126-614b939bae33-client-ca\") pod \"route-controller-manager-79884f6cc-tpdsd\" (UID: \"e27d2693-1a06-473e-a126-614b939bae33\") " pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" Mar 12 18:14:12.600951 master-0 kubenswrapper[7337]: I0312 18:14:12.600947 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30c5dc4b-f1c8-4773-b961-985740fcc503-client-ca\") pod \"controller-manager-5b55d98459-sr4hk\" (UID: \"30c5dc4b-f1c8-4773-b961-985740fcc503\") " pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:14:12.601056 master-0 kubenswrapper[7337]: I0312 18:14:12.600965 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30c5dc4b-f1c8-4773-b961-985740fcc503-proxy-ca-bundles\") pod \"controller-manager-5b55d98459-sr4hk\" (UID: \"30c5dc4b-f1c8-4773-b961-985740fcc503\") " 
pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:14:12.602022 master-0 kubenswrapper[7337]: I0312 18:14:12.601660 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30c5dc4b-f1c8-4773-b961-985740fcc503-config\") pod \"controller-manager-5b55d98459-sr4hk\" (UID: \"30c5dc4b-f1c8-4773-b961-985740fcc503\") " pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:14:12.602488 master-0 kubenswrapper[7337]: I0312 18:14:12.602401 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30c5dc4b-f1c8-4773-b961-985740fcc503-proxy-ca-bundles\") pod \"controller-manager-5b55d98459-sr4hk\" (UID: \"30c5dc4b-f1c8-4773-b961-985740fcc503\") " pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:14:12.602743 master-0 kubenswrapper[7337]: I0312 18:14:12.602587 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27d2693-1a06-473e-a126-614b939bae33-config\") pod \"route-controller-manager-79884f6cc-tpdsd\" (UID: \"e27d2693-1a06-473e-a126-614b939bae33\") " pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" Mar 12 18:14:12.602853 master-0 kubenswrapper[7337]: I0312 18:14:12.602802 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e27d2693-1a06-473e-a126-614b939bae33-client-ca\") pod \"route-controller-manager-79884f6cc-tpdsd\" (UID: \"e27d2693-1a06-473e-a126-614b939bae33\") " pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" Mar 12 18:14:12.603362 master-0 kubenswrapper[7337]: I0312 18:14:12.603024 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/30c5dc4b-f1c8-4773-b961-985740fcc503-client-ca\") pod \"controller-manager-5b55d98459-sr4hk\" (UID: \"30c5dc4b-f1c8-4773-b961-985740fcc503\") " pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:14:12.606351 master-0 kubenswrapper[7337]: I0312 18:14:12.606312 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e27d2693-1a06-473e-a126-614b939bae33-serving-cert\") pod \"route-controller-manager-79884f6cc-tpdsd\" (UID: \"e27d2693-1a06-473e-a126-614b939bae33\") " pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" Mar 12 18:14:12.618277 master-0 kubenswrapper[7337]: I0312 18:14:12.618206 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30c5dc4b-f1c8-4773-b961-985740fcc503-serving-cert\") pod \"controller-manager-5b55d98459-sr4hk\" (UID: \"30c5dc4b-f1c8-4773-b961-985740fcc503\") " pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:14:12.927697 master-0 kubenswrapper[7337]: I0312 18:14:12.926650 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd"] Mar 12 18:14:12.932843 master-0 kubenswrapper[7337]: I0312 18:14:12.932337 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b55d98459-sr4hk"] Mar 12 18:14:12.951573 master-0 kubenswrapper[7337]: I0312 18:14:12.948673 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 12 18:14:12.951573 master-0 kubenswrapper[7337]: I0312 18:14:12.951491 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 18:14:12.961505 master-0 kubenswrapper[7337]: I0312 18:14:12.961432 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvd6n\" (UniqueName: \"kubernetes.io/projected/f516dab9-06d1-4bea-96b9-8f3e14543bbd-kube-api-access-qvd6n\") pod \"certified-operators-2vwn7\" (UID: \"f516dab9-06d1-4bea-96b9-8f3e14543bbd\") " pod="openshift-marketplace/certified-operators-2vwn7" Mar 12 18:14:12.966606 master-0 kubenswrapper[7337]: I0312 18:14:12.966268 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lm82\" (UniqueName: \"kubernetes.io/projected/e27d2693-1a06-473e-a126-614b939bae33-kube-api-access-6lm82\") pod \"route-controller-manager-79884f6cc-tpdsd\" (UID: \"e27d2693-1a06-473e-a126-614b939bae33\") " pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" Mar 12 18:14:12.966606 master-0 kubenswrapper[7337]: I0312 18:14:12.966339 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4s28n"] Mar 12 18:14:12.971294 master-0 kubenswrapper[7337]: I0312 18:14:12.971092 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 12 18:14:12.988392 master-0 kubenswrapper[7337]: I0312 18:14:12.987358 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flnvn\" (UniqueName: \"kubernetes.io/projected/30c5dc4b-f1c8-4773-b961-985740fcc503-kube-api-access-flnvn\") pod \"controller-manager-5b55d98459-sr4hk\" (UID: \"30c5dc4b-f1c8-4773-b961-985740fcc503\") " pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:14:13.009674 master-0 kubenswrapper[7337]: I0312 18:14:13.009276 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/7542f3f1-23fe-41df-99b9-4324c75d35b7-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"7542f3f1-23fe-41df-99b9-4324c75d35b7\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 18:14:13.009674 master-0 kubenswrapper[7337]: I0312 18:14:13.009325 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7542f3f1-23fe-41df-99b9-4324c75d35b7-var-lock\") pod \"installer-4-master-0\" (UID: \"7542f3f1-23fe-41df-99b9-4324c75d35b7\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 18:14:13.009674 master-0 kubenswrapper[7337]: I0312 18:14:13.009374 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7542f3f1-23fe-41df-99b9-4324c75d35b7-kube-api-access\") pod \"installer-4-master-0\" (UID: \"7542f3f1-23fe-41df-99b9-4324c75d35b7\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 18:14:13.022777 master-0 kubenswrapper[7337]: I0312 18:14:13.018883 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" Mar 12 18:14:13.026246 master-0 kubenswrapper[7337]: I0312 18:14:13.026151 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:14:13.087397 master-0 kubenswrapper[7337]: I0312 18:14:13.087035 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 12 18:14:13.089914 master-0 kubenswrapper[7337]: I0312 18:14:13.088427 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2vwn7" Mar 12 18:14:13.094521 master-0 kubenswrapper[7337]: I0312 18:14:13.094457 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 12 18:14:13.110531 master-0 kubenswrapper[7337]: I0312 18:14:13.110335 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7542f3f1-23fe-41df-99b9-4324c75d35b7-kube-api-access\") pod \"installer-4-master-0\" (UID: \"7542f3f1-23fe-41df-99b9-4324c75d35b7\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 18:14:13.110531 master-0 kubenswrapper[7337]: I0312 18:14:13.110376 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7542f3f1-23fe-41df-99b9-4324c75d35b7-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"7542f3f1-23fe-41df-99b9-4324c75d35b7\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 18:14:13.110531 master-0 kubenswrapper[7337]: I0312 18:14:13.110405 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7542f3f1-23fe-41df-99b9-4324c75d35b7-var-lock\") pod \"installer-4-master-0\" (UID: \"7542f3f1-23fe-41df-99b9-4324c75d35b7\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 18:14:13.110531 master-0 kubenswrapper[7337]: I0312 18:14:13.110476 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7542f3f1-23fe-41df-99b9-4324c75d35b7-var-lock\") pod \"installer-4-master-0\" (UID: \"7542f3f1-23fe-41df-99b9-4324c75d35b7\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 18:14:13.110777 master-0 kubenswrapper[7337]: I0312 18:14:13.110763 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/7542f3f1-23fe-41df-99b9-4324c75d35b7-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"7542f3f1-23fe-41df-99b9-4324c75d35b7\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 18:14:13.145567 master-0 kubenswrapper[7337]: I0312 18:14:13.138452 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7542f3f1-23fe-41df-99b9-4324c75d35b7-kube-api-access\") pod \"installer-4-master-0\" (UID: \"7542f3f1-23fe-41df-99b9-4324c75d35b7\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 18:14:13.178623 master-0 kubenswrapper[7337]: I0312 18:14:13.172191 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-67j2w"] Mar 12 18:14:13.178623 master-0 kubenswrapper[7337]: I0312 18:14:13.173035 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-67j2w" Mar 12 18:14:13.234536 master-0 kubenswrapper[7337]: I0312 18:14:13.225573 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s28n" event={"ID":"9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a","Type":"ContainerStarted","Data":"585e569ac901da0beb62f12e0509996b671a526fd344836c25da805ed3a58520"} Mar 12 18:14:13.234536 master-0 kubenswrapper[7337]: I0312 18:14:13.229607 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-67j2w"] Mar 12 18:14:13.234536 master-0 kubenswrapper[7337]: I0312 18:14:13.229942 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2331fc4b-e67b-4496-8cae-15cd11cf3030-catalog-content\") pod \"community-operators-67j2w\" (UID: \"2331fc4b-e67b-4496-8cae-15cd11cf3030\") " pod="openshift-marketplace/community-operators-67j2w" Mar 12 18:14:13.234536 master-0 
kubenswrapper[7337]: I0312 18:14:13.230000 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhjpk\" (UniqueName: \"kubernetes.io/projected/2331fc4b-e67b-4496-8cae-15cd11cf3030-kube-api-access-nhjpk\") pod \"community-operators-67j2w\" (UID: \"2331fc4b-e67b-4496-8cae-15cd11cf3030\") " pod="openshift-marketplace/community-operators-67j2w" Mar 12 18:14:13.234536 master-0 kubenswrapper[7337]: I0312 18:14:13.230031 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2331fc4b-e67b-4496-8cae-15cd11cf3030-utilities\") pod \"community-operators-67j2w\" (UID: \"2331fc4b-e67b-4496-8cae-15cd11cf3030\") " pod="openshift-marketplace/community-operators-67j2w" Mar 12 18:14:13.363339 master-0 kubenswrapper[7337]: I0312 18:14:13.361477 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2331fc4b-e67b-4496-8cae-15cd11cf3030-catalog-content\") pod \"community-operators-67j2w\" (UID: \"2331fc4b-e67b-4496-8cae-15cd11cf3030\") " pod="openshift-marketplace/community-operators-67j2w" Mar 12 18:14:13.363339 master-0 kubenswrapper[7337]: I0312 18:14:13.362322 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhjpk\" (UniqueName: \"kubernetes.io/projected/2331fc4b-e67b-4496-8cae-15cd11cf3030-kube-api-access-nhjpk\") pod \"community-operators-67j2w\" (UID: \"2331fc4b-e67b-4496-8cae-15cd11cf3030\") " pod="openshift-marketplace/community-operators-67j2w" Mar 12 18:14:13.363339 master-0 kubenswrapper[7337]: I0312 18:14:13.362395 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2331fc4b-e67b-4496-8cae-15cd11cf3030-utilities\") pod \"community-operators-67j2w\" (UID: \"2331fc4b-e67b-4496-8cae-15cd11cf3030\") " 
pod="openshift-marketplace/community-operators-67j2w" Mar 12 18:14:13.363938 master-0 kubenswrapper[7337]: I0312 18:14:13.363656 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2331fc4b-e67b-4496-8cae-15cd11cf3030-utilities\") pod \"community-operators-67j2w\" (UID: \"2331fc4b-e67b-4496-8cae-15cd11cf3030\") " pod="openshift-marketplace/community-operators-67j2w" Mar 12 18:14:13.364499 master-0 kubenswrapper[7337]: I0312 18:14:13.364461 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2331fc4b-e67b-4496-8cae-15cd11cf3030-catalog-content\") pod \"community-operators-67j2w\" (UID: \"2331fc4b-e67b-4496-8cae-15cd11cf3030\") " pod="openshift-marketplace/community-operators-67j2w" Mar 12 18:14:13.413329 master-0 kubenswrapper[7337]: I0312 18:14:13.411791 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 18:14:13.519348 master-0 kubenswrapper[7337]: I0312 18:14:13.518935 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_56a4489e-252b-44a7-8310-3b699b2af7d6/installer/0.log" Mar 12 18:14:13.519348 master-0 kubenswrapper[7337]: I0312 18:14:13.519017 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 12 18:14:13.595261 master-0 kubenswrapper[7337]: I0312 18:14:13.595023 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhjpk\" (UniqueName: \"kubernetes.io/projected/2331fc4b-e67b-4496-8cae-15cd11cf3030-kube-api-access-nhjpk\") pod \"community-operators-67j2w\" (UID: \"2331fc4b-e67b-4496-8cae-15cd11cf3030\") " pod="openshift-marketplace/community-operators-67j2w" Mar 12 18:14:13.680570 master-0 kubenswrapper[7337]: I0312 18:14:13.678253 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56a4489e-252b-44a7-8310-3b699b2af7d6-kube-api-access\") pod \"56a4489e-252b-44a7-8310-3b699b2af7d6\" (UID: \"56a4489e-252b-44a7-8310-3b699b2af7d6\") " Mar 12 18:14:13.680570 master-0 kubenswrapper[7337]: I0312 18:14:13.678347 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56a4489e-252b-44a7-8310-3b699b2af7d6-var-lock\") pod \"56a4489e-252b-44a7-8310-3b699b2af7d6\" (UID: \"56a4489e-252b-44a7-8310-3b699b2af7d6\") " Mar 12 18:14:13.680570 master-0 kubenswrapper[7337]: I0312 18:14:13.678372 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56a4489e-252b-44a7-8310-3b699b2af7d6-kubelet-dir\") pod \"56a4489e-252b-44a7-8310-3b699b2af7d6\" (UID: \"56a4489e-252b-44a7-8310-3b699b2af7d6\") " Mar 12 18:14:13.680570 master-0 kubenswrapper[7337]: I0312 18:14:13.678569 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56a4489e-252b-44a7-8310-3b699b2af7d6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "56a4489e-252b-44a7-8310-3b699b2af7d6" (UID: "56a4489e-252b-44a7-8310-3b699b2af7d6"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:14:13.680570 master-0 kubenswrapper[7337]: I0312 18:14:13.680306 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56a4489e-252b-44a7-8310-3b699b2af7d6-var-lock" (OuterVolumeSpecName: "var-lock") pod "56a4489e-252b-44a7-8310-3b699b2af7d6" (UID: "56a4489e-252b-44a7-8310-3b699b2af7d6"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:14:13.691431 master-0 kubenswrapper[7337]: I0312 18:14:13.690588 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56a4489e-252b-44a7-8310-3b699b2af7d6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "56a4489e-252b-44a7-8310-3b699b2af7d6" (UID: "56a4489e-252b-44a7-8310-3b699b2af7d6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:14:13.699840 master-0 kubenswrapper[7337]: I0312 18:14:13.699743 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd"] Mar 12 18:14:13.743605 master-0 kubenswrapper[7337]: I0312 18:14:13.743495 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2d3635b-61ed-4f81-8735-47be74319c67" path="/var/lib/kubelet/pods/f2d3635b-61ed-4f81-8735-47be74319c67/volumes" Mar 12 18:14:13.765971 master-0 kubenswrapper[7337]: I0312 18:14:13.765926 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b55d98459-sr4hk"] Mar 12 18:14:13.776475 master-0 kubenswrapper[7337]: I0312 18:14:13.776435 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-67j2w" Mar 12 18:14:13.779058 master-0 kubenswrapper[7337]: I0312 18:14:13.779030 7337 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56a4489e-252b-44a7-8310-3b699b2af7d6-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:13.779165 master-0 kubenswrapper[7337]: I0312 18:14:13.779060 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56a4489e-252b-44a7-8310-3b699b2af7d6-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:13.779165 master-0 kubenswrapper[7337]: I0312 18:14:13.779075 7337 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56a4489e-252b-44a7-8310-3b699b2af7d6-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:13.911433 master-0 kubenswrapper[7337]: I0312 18:14:13.911364 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2vwn7"] Mar 12 18:14:13.974254 master-0 kubenswrapper[7337]: I0312 18:14:13.970481 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 12 18:14:14.193676 master-0 kubenswrapper[7337]: I0312 18:14:14.192961 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-67j2w"] Mar 12 18:14:14.234260 master-0 kubenswrapper[7337]: I0312 18:14:14.234131 7337 generic.go:334] "Generic (PLEG): container finished" podID="f516dab9-06d1-4bea-96b9-8f3e14543bbd" containerID="8d16f3acc4f022fe83bca592108b4093b85b763774386a13c0e05aeaad9a6250" exitCode=0 Mar 12 18:14:14.234260 master-0 kubenswrapper[7337]: I0312 18:14:14.234199 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2vwn7" 
event={"ID":"f516dab9-06d1-4bea-96b9-8f3e14543bbd","Type":"ContainerDied","Data":"8d16f3acc4f022fe83bca592108b4093b85b763774386a13c0e05aeaad9a6250"} Mar 12 18:14:14.234260 master-0 kubenswrapper[7337]: I0312 18:14:14.234225 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2vwn7" event={"ID":"f516dab9-06d1-4bea-96b9-8f3e14543bbd","Type":"ContainerStarted","Data":"4de2af16396bc69bdc5784e33e403629152d1e0770da205647afdf23ab5a2699"} Mar 12 18:14:14.244543 master-0 kubenswrapper[7337]: I0312 18:14:14.240824 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" event={"ID":"30c5dc4b-f1c8-4773-b961-985740fcc503","Type":"ContainerStarted","Data":"f0410fcdb7f021e073b091992c982ea0c6dd9257aa500e76a08b26054e3f730d"} Mar 12 18:14:14.244543 master-0 kubenswrapper[7337]: I0312 18:14:14.240866 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" event={"ID":"30c5dc4b-f1c8-4773-b961-985740fcc503","Type":"ContainerStarted","Data":"25f85e056d06f9c263ff08ea4e6565ed3acaba0d2d09deb206fc0df16bc25d83"} Mar 12 18:14:14.244543 master-0 kubenswrapper[7337]: I0312 18:14:14.241025 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:14:14.244543 master-0 kubenswrapper[7337]: I0312 18:14:14.243577 7337 generic.go:334] "Generic (PLEG): container finished" podID="9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a" containerID="24a42c905bc5668eb7d21dc5a1cb3b30cf1599f79aa0daff8b89099b0e103199" exitCode=0 Mar 12 18:14:14.244543 master-0 kubenswrapper[7337]: I0312 18:14:14.243623 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s28n" 
event={"ID":"9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a","Type":"ContainerDied","Data":"24a42c905bc5668eb7d21dc5a1cb3b30cf1599f79aa0daff8b89099b0e103199"} Mar 12 18:14:14.248906 master-0 kubenswrapper[7337]: I0312 18:14:14.247207 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" event={"ID":"e27d2693-1a06-473e-a126-614b939bae33","Type":"ContainerStarted","Data":"a3dc2b7095b6c8ddb045125ce99fe767799ec6ab7d744f0b94e5707b821e152b"} Mar 12 18:14:14.248906 master-0 kubenswrapper[7337]: I0312 18:14:14.247240 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" event={"ID":"e27d2693-1a06-473e-a126-614b939bae33","Type":"ContainerStarted","Data":"ecd8c1c23b06ddd0380989b80e906934916e1b0c7ecc136590e1c93fd774ae5b"} Mar 12 18:14:14.248906 master-0 kubenswrapper[7337]: I0312 18:14:14.247857 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" Mar 12 18:14:14.248906 master-0 kubenswrapper[7337]: I0312 18:14:14.248040 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:14:14.255841 master-0 kubenswrapper[7337]: I0312 18:14:14.255775 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_56a4489e-252b-44a7-8310-3b699b2af7d6/installer/0.log" Mar 12 18:14:14.255928 master-0 kubenswrapper[7337]: I0312 18:14:14.255856 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"56a4489e-252b-44a7-8310-3b699b2af7d6","Type":"ContainerDied","Data":"43f219edc6bd711e95467d2fb5b26294bc0d574e573848834698d8c0e26127fa"} Mar 12 18:14:14.255928 master-0 kubenswrapper[7337]: I0312 18:14:14.255901 7337 scope.go:117] 
"RemoveContainer" containerID="53ddad5a8c9cbca89afea7f839486d18707671bc74821fb0560b014cdd65817f" Mar 12 18:14:14.256023 master-0 kubenswrapper[7337]: I0312 18:14:14.256004 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 12 18:14:14.327349 master-0 kubenswrapper[7337]: I0312 18:14:14.327029 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" podStartSLOduration=9.327008311 podStartE2EDuration="9.327008311s" podCreationTimestamp="2026-03-12 18:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:14:14.32378434 +0000 UTC m=+54.792385287" watchObservedRunningTime="2026-03-12 18:14:14.327008311 +0000 UTC m=+54.795609258" Mar 12 18:14:14.332477 master-0 kubenswrapper[7337]: I0312 18:14:14.332311 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" Mar 12 18:14:14.390631 master-0 kubenswrapper[7337]: I0312 18:14:14.390008 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" podStartSLOduration=10.389992905 podStartE2EDuration="10.389992905s" podCreationTimestamp="2026-03-12 18:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:14:14.361908064 +0000 UTC m=+54.830509021" watchObservedRunningTime="2026-03-12 18:14:14.389992905 +0000 UTC m=+54.858593852" Mar 12 18:14:14.390631 master-0 kubenswrapper[7337]: I0312 18:14:14.390136 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 12 18:14:14.421549 master-0 kubenswrapper[7337]: I0312 
18:14:14.420548 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 12 18:14:14.561406 master-0 kubenswrapper[7337]: I0312 18:14:14.561361 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ttgsx"] Mar 12 18:14:14.561666 master-0 kubenswrapper[7337]: E0312 18:14:14.561612 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56a4489e-252b-44a7-8310-3b699b2af7d6" containerName="installer" Mar 12 18:14:14.561666 master-0 kubenswrapper[7337]: I0312 18:14:14.561632 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="56a4489e-252b-44a7-8310-3b699b2af7d6" containerName="installer" Mar 12 18:14:14.561801 master-0 kubenswrapper[7337]: I0312 18:14:14.561755 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="56a4489e-252b-44a7-8310-3b699b2af7d6" containerName="installer" Mar 12 18:14:14.562587 master-0 kubenswrapper[7337]: I0312 18:14:14.562567 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttgsx" Mar 12 18:14:14.586046 master-0 kubenswrapper[7337]: I0312 18:14:14.586022 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttgsx"] Mar 12 18:14:14.602098 master-0 kubenswrapper[7337]: I0312 18:14:14.601979 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh6b2\" (UniqueName: \"kubernetes.io/projected/c4d20441-ec1f-4571-b590-989f2bdd4082-kube-api-access-xh6b2\") pod \"redhat-marketplace-ttgsx\" (UID: \"c4d20441-ec1f-4571-b590-989f2bdd4082\") " pod="openshift-marketplace/redhat-marketplace-ttgsx" Mar 12 18:14:14.602098 master-0 kubenswrapper[7337]: I0312 18:14:14.602040 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4d20441-ec1f-4571-b590-989f2bdd4082-catalog-content\") pod \"redhat-marketplace-ttgsx\" (UID: \"c4d20441-ec1f-4571-b590-989f2bdd4082\") " pod="openshift-marketplace/redhat-marketplace-ttgsx" Mar 12 18:14:14.602357 master-0 kubenswrapper[7337]: I0312 18:14:14.602170 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4d20441-ec1f-4571-b590-989f2bdd4082-utilities\") pod \"redhat-marketplace-ttgsx\" (UID: \"c4d20441-ec1f-4571-b590-989f2bdd4082\") " pod="openshift-marketplace/redhat-marketplace-ttgsx" Mar 12 18:14:14.704220 master-0 kubenswrapper[7337]: I0312 18:14:14.703840 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4d20441-ec1f-4571-b590-989f2bdd4082-utilities\") pod \"redhat-marketplace-ttgsx\" (UID: \"c4d20441-ec1f-4571-b590-989f2bdd4082\") " pod="openshift-marketplace/redhat-marketplace-ttgsx" Mar 12 18:14:14.704220 master-0 kubenswrapper[7337]: I0312 
18:14:14.703897 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh6b2\" (UniqueName: \"kubernetes.io/projected/c4d20441-ec1f-4571-b590-989f2bdd4082-kube-api-access-xh6b2\") pod \"redhat-marketplace-ttgsx\" (UID: \"c4d20441-ec1f-4571-b590-989f2bdd4082\") " pod="openshift-marketplace/redhat-marketplace-ttgsx" Mar 12 18:14:14.704220 master-0 kubenswrapper[7337]: I0312 18:14:14.703920 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4d20441-ec1f-4571-b590-989f2bdd4082-catalog-content\") pod \"redhat-marketplace-ttgsx\" (UID: \"c4d20441-ec1f-4571-b590-989f2bdd4082\") " pod="openshift-marketplace/redhat-marketplace-ttgsx" Mar 12 18:14:14.704455 master-0 kubenswrapper[7337]: I0312 18:14:14.704343 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4d20441-ec1f-4571-b590-989f2bdd4082-catalog-content\") pod \"redhat-marketplace-ttgsx\" (UID: \"c4d20441-ec1f-4571-b590-989f2bdd4082\") " pod="openshift-marketplace/redhat-marketplace-ttgsx" Mar 12 18:14:14.704455 master-0 kubenswrapper[7337]: I0312 18:14:14.704423 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4d20441-ec1f-4571-b590-989f2bdd4082-utilities\") pod \"redhat-marketplace-ttgsx\" (UID: \"c4d20441-ec1f-4571-b590-989f2bdd4082\") " pod="openshift-marketplace/redhat-marketplace-ttgsx" Mar 12 18:14:14.727967 master-0 kubenswrapper[7337]: I0312 18:14:14.727920 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh6b2\" (UniqueName: \"kubernetes.io/projected/c4d20441-ec1f-4571-b590-989f2bdd4082-kube-api-access-xh6b2\") pod \"redhat-marketplace-ttgsx\" (UID: \"c4d20441-ec1f-4571-b590-989f2bdd4082\") " pod="openshift-marketplace/redhat-marketplace-ttgsx" Mar 12 18:14:14.896622 master-0 
kubenswrapper[7337]: I0312 18:14:14.892810 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttgsx" Mar 12 18:14:15.729109 master-0 kubenswrapper[7337]: I0312 18:14:15.729058 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56a4489e-252b-44a7-8310-3b699b2af7d6" path="/var/lib/kubelet/pods/56a4489e-252b-44a7-8310-3b699b2af7d6/volumes" Mar 12 18:14:16.154091 master-0 kubenswrapper[7337]: W0312 18:14:16.154050 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7542f3f1_23fe_41df_99b9_4324c75d35b7.slice/crio-b3d2e6992e795fa35374f60292962d9511ac22996078698ddd0c5f16bcc8772c WatchSource:0}: Error finding container b3d2e6992e795fa35374f60292962d9511ac22996078698ddd0c5f16bcc8772c: Status 404 returned error can't find the container with id b3d2e6992e795fa35374f60292962d9511ac22996078698ddd0c5f16bcc8772c Mar 12 18:14:16.285277 master-0 kubenswrapper[7337]: I0312 18:14:16.285023 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-67j2w" event={"ID":"2331fc4b-e67b-4496-8cae-15cd11cf3030","Type":"ContainerStarted","Data":"7a43ad09f77b7a8aa69873fac95df23196a6defbd82a520740d6f8230dd00f99"} Mar 12 18:14:16.289868 master-0 kubenswrapper[7337]: I0312 18:14:16.289670 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"7542f3f1-23fe-41df-99b9-4324c75d35b7","Type":"ContainerStarted","Data":"b3d2e6992e795fa35374f60292962d9511ac22996078698ddd0c5f16bcc8772c"} Mar 12 18:14:17.117808 master-0 kubenswrapper[7337]: I0312 18:14:17.117748 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttgsx"] Mar 12 18:14:17.126153 master-0 kubenswrapper[7337]: W0312 18:14:17.126112 7337 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4d20441_ec1f_4571_b590_989f2bdd4082.slice/crio-6358f2755396ee336fcf40ab974ee22dbd1b6e3333c3183dc2a8cde56949aaa6 WatchSource:0}: Error finding container 6358f2755396ee336fcf40ab974ee22dbd1b6e3333c3183dc2a8cde56949aaa6: Status 404 returned error can't find the container with id 6358f2755396ee336fcf40ab974ee22dbd1b6e3333c3183dc2a8cde56949aaa6 Mar 12 18:14:17.305217 master-0 kubenswrapper[7337]: I0312 18:14:17.304693 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2vwn7"] Mar 12 18:14:17.753893 master-0 kubenswrapper[7337]: I0312 18:14:17.752469 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6jhwp"] Mar 12 18:14:17.753893 master-0 kubenswrapper[7337]: I0312 18:14:17.753340 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6jhwp" Mar 12 18:14:17.754044 master-0 kubenswrapper[7337]: I0312 18:14:17.754015 7337 generic.go:334] "Generic (PLEG): container finished" podID="fb529297-b3de-4167-a91e-0a63725b3b0f" containerID="f77291c1df7f378588657b046f600e7b89800e859e666660a704c3c70a31f3c7" exitCode=0 Mar 12 18:14:17.754084 master-0 kubenswrapper[7337]: I0312 18:14:17.754067 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg" event={"ID":"fb529297-b3de-4167-a91e-0a63725b3b0f","Type":"ContainerDied","Data":"f77291c1df7f378588657b046f600e7b89800e859e666660a704c3c70a31f3c7"} Mar 12 18:14:17.756037 master-0 kubenswrapper[7337]: I0312 18:14:17.755868 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-b88ct" Mar 12 18:14:17.756998 master-0 kubenswrapper[7337]: I0312 18:14:17.756523 7337 generic.go:334] "Generic (PLEG): container finished" podID="2331fc4b-e67b-4496-8cae-15cd11cf3030" 
containerID="a521c0a0426e21cb131251658526486bf3748aec5ae19ea452db84c768f630d0" exitCode=0 Mar 12 18:14:17.756998 master-0 kubenswrapper[7337]: I0312 18:14:17.756616 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-67j2w" event={"ID":"2331fc4b-e67b-4496-8cae-15cd11cf3030","Type":"ContainerDied","Data":"a521c0a0426e21cb131251658526486bf3748aec5ae19ea452db84c768f630d0"} Mar 12 18:14:17.759302 master-0 kubenswrapper[7337]: I0312 18:14:17.758995 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"7542f3f1-23fe-41df-99b9-4324c75d35b7","Type":"ContainerStarted","Data":"e3aea0a79706e5d2ced89ea30c6dab8e3469fe22291b915ce855f44fa68a87b6"} Mar 12 18:14:17.768626 master-0 kubenswrapper[7337]: I0312 18:14:17.768507 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6jhwp"] Mar 12 18:14:17.770757 master-0 kubenswrapper[7337]: I0312 18:14:17.770271 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttgsx" event={"ID":"c4d20441-ec1f-4571-b590-989f2bdd4082","Type":"ContainerStarted","Data":"1582d8faa27008c19af24da731144e2addf1cd4e41c10a2ce5d99f3806afefa4"} Mar 12 18:14:17.770757 master-0 kubenswrapper[7337]: I0312 18:14:17.770295 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttgsx" event={"ID":"c4d20441-ec1f-4571-b590-989f2bdd4082","Type":"ContainerStarted","Data":"6358f2755396ee336fcf40ab974ee22dbd1b6e3333c3183dc2a8cde56949aaa6"} Mar 12 18:14:17.845530 master-0 kubenswrapper[7337]: I0312 18:14:17.842858 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmvnh\" (UniqueName: \"kubernetes.io/projected/b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c-kube-api-access-xmvnh\") pod \"certified-operators-6jhwp\" (UID: \"b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c\") " 
pod="openshift-marketplace/certified-operators-6jhwp" Mar 12 18:14:17.845530 master-0 kubenswrapper[7337]: I0312 18:14:17.842988 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c-utilities\") pod \"certified-operators-6jhwp\" (UID: \"b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c\") " pod="openshift-marketplace/certified-operators-6jhwp" Mar 12 18:14:17.845530 master-0 kubenswrapper[7337]: I0312 18:14:17.843165 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c-catalog-content\") pod \"certified-operators-6jhwp\" (UID: \"b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c\") " pod="openshift-marketplace/certified-operators-6jhwp" Mar 12 18:14:17.944079 master-0 kubenswrapper[7337]: I0312 18:14:17.944038 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c-utilities\") pod \"certified-operators-6jhwp\" (UID: \"b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c\") " pod="openshift-marketplace/certified-operators-6jhwp" Mar 12 18:14:17.944332 master-0 kubenswrapper[7337]: I0312 18:14:17.944316 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c-catalog-content\") pod \"certified-operators-6jhwp\" (UID: \"b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c\") " pod="openshift-marketplace/certified-operators-6jhwp" Mar 12 18:14:17.944413 master-0 kubenswrapper[7337]: I0312 18:14:17.944400 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmvnh\" (UniqueName: \"kubernetes.io/projected/b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c-kube-api-access-xmvnh\") pod 
\"certified-operators-6jhwp\" (UID: \"b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c\") " pod="openshift-marketplace/certified-operators-6jhwp" Mar 12 18:14:17.944585 master-0 kubenswrapper[7337]: I0312 18:14:17.944550 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c-utilities\") pod \"certified-operators-6jhwp\" (UID: \"b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c\") " pod="openshift-marketplace/certified-operators-6jhwp" Mar 12 18:14:17.944845 master-0 kubenswrapper[7337]: I0312 18:14:17.944787 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c-catalog-content\") pod \"certified-operators-6jhwp\" (UID: \"b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c\") " pod="openshift-marketplace/certified-operators-6jhwp" Mar 12 18:14:18.090349 master-0 kubenswrapper[7337]: I0312 18:14:18.088402 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=6.088386087 podStartE2EDuration="6.088386087s" podCreationTimestamp="2026-03-12 18:14:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:14:18.045898322 +0000 UTC m=+58.514499269" watchObservedRunningTime="2026-03-12 18:14:18.088386087 +0000 UTC m=+58.556987034" Mar 12 18:14:18.095788 master-0 kubenswrapper[7337]: I0312 18:14:18.095757 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmvnh\" (UniqueName: \"kubernetes.io/projected/b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c-kube-api-access-xmvnh\") pod \"certified-operators-6jhwp\" (UID: \"b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c\") " pod="openshift-marketplace/certified-operators-6jhwp" Mar 12 18:14:18.162901 master-0 kubenswrapper[7337]: I0312 18:14:18.162873 
7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6jhwp" Mar 12 18:14:18.780607 master-0 kubenswrapper[7337]: I0312 18:14:18.780547 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg" event={"ID":"fb529297-b3de-4167-a91e-0a63725b3b0f","Type":"ContainerStarted","Data":"43f1864b6754cfd171ad7b7c7d28684a36e016d539c300797d2226a62bc2e96f"} Mar 12 18:14:18.783199 master-0 kubenswrapper[7337]: I0312 18:14:18.783151 7337 generic.go:334] "Generic (PLEG): container finished" podID="c4d20441-ec1f-4571-b590-989f2bdd4082" containerID="1582d8faa27008c19af24da731144e2addf1cd4e41c10a2ce5d99f3806afefa4" exitCode=0 Mar 12 18:14:18.783303 master-0 kubenswrapper[7337]: I0312 18:14:18.783208 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttgsx" event={"ID":"c4d20441-ec1f-4571-b590-989f2bdd4082","Type":"ContainerDied","Data":"1582d8faa27008c19af24da731144e2addf1cd4e41c10a2ce5d99f3806afefa4"} Mar 12 18:14:19.297424 master-0 kubenswrapper[7337]: I0312 18:14:19.296738 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-67j2w"] Mar 12 18:14:19.297424 master-0 kubenswrapper[7337]: I0312 18:14:19.296784 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6jhwp"] Mar 12 18:14:19.303967 master-0 kubenswrapper[7337]: W0312 18:14:19.303916 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb86295a3_1fc8_495f_b1b2_5cc4a3c3ab2c.slice/crio-cdac8f08c2f48b7e4861a00ec2d8e5264134cf6ddac6c83f56497027e5816cb7 WatchSource:0}: Error finding container cdac8f08c2f48b7e4861a00ec2d8e5264134cf6ddac6c83f56497027e5816cb7: Status 404 returned error can't find the container with id cdac8f08c2f48b7e4861a00ec2d8e5264134cf6ddac6c83f56497027e5816cb7 Mar 12 
18:14:19.789570 master-0 kubenswrapper[7337]: I0312 18:14:19.789527 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jhwp" event={"ID":"b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c","Type":"ContainerStarted","Data":"cdac8f08c2f48b7e4861a00ec2d8e5264134cf6ddac6c83f56497027e5816cb7"} Mar 12 18:14:20.800950 master-0 kubenswrapper[7337]: I0312 18:14:20.800892 7337 generic.go:334] "Generic (PLEG): container finished" podID="b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c" containerID="5202e2258391267eb3b45c1a1e1d347281dc176d85f3c415101ad891dd72d792" exitCode=0 Mar 12 18:14:20.800950 master-0 kubenswrapper[7337]: I0312 18:14:20.800939 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jhwp" event={"ID":"b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c","Type":"ContainerDied","Data":"5202e2258391267eb3b45c1a1e1d347281dc176d85f3c415101ad891dd72d792"} Mar 12 18:14:21.761903 master-0 kubenswrapper[7337]: I0312 18:14:21.761607 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg" podStartSLOduration=8.977256907 podStartE2EDuration="14.76158667s" podCreationTimestamp="2026-03-12 18:14:07 +0000 UTC" firstStartedPulling="2026-03-12 18:14:10.478311236 +0000 UTC m=+50.946912183" lastFinishedPulling="2026-03-12 18:14:16.262640999 +0000 UTC m=+56.731241946" observedRunningTime="2026-03-12 18:14:21.739190124 +0000 UTC m=+62.207791081" watchObservedRunningTime="2026-03-12 18:14:21.76158667 +0000 UTC m=+62.230187627" Mar 12 18:14:21.762193 master-0 kubenswrapper[7337]: I0312 18:14:21.762028 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nmmwm"] Mar 12 18:14:21.765565 master-0 kubenswrapper[7337]: I0312 18:14:21.764016 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nmmwm" Mar 12 18:14:21.771095 master-0 kubenswrapper[7337]: I0312 18:14:21.770867 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-7k9rb" Mar 12 18:14:21.905599 master-0 kubenswrapper[7337]: I0312 18:14:21.905536 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4519000b-e475-4c26-a1c0-bf05cd9c242b-utilities\") pod \"community-operators-nmmwm\" (UID: \"4519000b-e475-4c26-a1c0-bf05cd9c242b\") " pod="openshift-marketplace/community-operators-nmmwm" Mar 12 18:14:21.905599 master-0 kubenswrapper[7337]: I0312 18:14:21.905585 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4519000b-e475-4c26-a1c0-bf05cd9c242b-catalog-content\") pod \"community-operators-nmmwm\" (UID: \"4519000b-e475-4c26-a1c0-bf05cd9c242b\") " pod="openshift-marketplace/community-operators-nmmwm" Mar 12 18:14:21.905599 master-0 kubenswrapper[7337]: I0312 18:14:21.905629 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x57x\" (UniqueName: \"kubernetes.io/projected/4519000b-e475-4c26-a1c0-bf05cd9c242b-kube-api-access-5x57x\") pod \"community-operators-nmmwm\" (UID: \"4519000b-e475-4c26-a1c0-bf05cd9c242b\") " pod="openshift-marketplace/community-operators-nmmwm" Mar 12 18:14:22.007386 master-0 kubenswrapper[7337]: I0312 18:14:22.007331 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4519000b-e475-4c26-a1c0-bf05cd9c242b-utilities\") pod \"community-operators-nmmwm\" (UID: \"4519000b-e475-4c26-a1c0-bf05cd9c242b\") " pod="openshift-marketplace/community-operators-nmmwm" Mar 12 18:14:22.007386 master-0 
kubenswrapper[7337]: I0312 18:14:22.007388 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4519000b-e475-4c26-a1c0-bf05cd9c242b-catalog-content\") pod \"community-operators-nmmwm\" (UID: \"4519000b-e475-4c26-a1c0-bf05cd9c242b\") " pod="openshift-marketplace/community-operators-nmmwm" Mar 12 18:14:22.007638 master-0 kubenswrapper[7337]: I0312 18:14:22.007440 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x57x\" (UniqueName: \"kubernetes.io/projected/4519000b-e475-4c26-a1c0-bf05cd9c242b-kube-api-access-5x57x\") pod \"community-operators-nmmwm\" (UID: \"4519000b-e475-4c26-a1c0-bf05cd9c242b\") " pod="openshift-marketplace/community-operators-nmmwm" Mar 12 18:14:22.008217 master-0 kubenswrapper[7337]: I0312 18:14:22.008187 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4519000b-e475-4c26-a1c0-bf05cd9c242b-utilities\") pod \"community-operators-nmmwm\" (UID: \"4519000b-e475-4c26-a1c0-bf05cd9c242b\") " pod="openshift-marketplace/community-operators-nmmwm" Mar 12 18:14:22.008664 master-0 kubenswrapper[7337]: I0312 18:14:22.008633 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4519000b-e475-4c26-a1c0-bf05cd9c242b-catalog-content\") pod \"community-operators-nmmwm\" (UID: \"4519000b-e475-4c26-a1c0-bf05cd9c242b\") " pod="openshift-marketplace/community-operators-nmmwm" Mar 12 18:14:22.331633 master-0 kubenswrapper[7337]: I0312 18:14:22.331583 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nmmwm"] Mar 12 18:14:22.332432 master-0 kubenswrapper[7337]: I0312 18:14:22.332395 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 12 18:14:22.333049 master-0 
kubenswrapper[7337]: I0312 18:14:22.332612 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-0" podUID="8cd18201-afdc-4229-972e-ab01adb2a7f3" containerName="installer" containerID="cri-o://ddcae4325b25bb06e6c7df16759097b61b22ac82cbe47445944983995281e38f" gracePeriod=30 Mar 12 18:14:22.634664 master-0 kubenswrapper[7337]: I0312 18:14:22.634528 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg" Mar 12 18:14:22.636368 master-0 kubenswrapper[7337]: I0312 18:14:22.635981 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg" Mar 12 18:14:22.976213 master-0 kubenswrapper[7337]: I0312 18:14:22.976033 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg" Mar 12 18:14:23.817455 master-0 kubenswrapper[7337]: I0312 18:14:23.817399 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg" Mar 12 18:14:25.419146 master-0 kubenswrapper[7337]: I0312 18:14:25.419100 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x57x\" (UniqueName: \"kubernetes.io/projected/4519000b-e475-4c26-a1c0-bf05cd9c242b-kube-api-access-5x57x\") pod \"community-operators-nmmwm\" (UID: \"4519000b-e475-4c26-a1c0-bf05cd9c242b\") " pod="openshift-marketplace/community-operators-nmmwm" Mar 12 18:14:25.687293 master-0 kubenswrapper[7337]: I0312 18:14:25.687197 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nmmwm" Mar 12 18:14:25.832825 master-0 kubenswrapper[7337]: I0312 18:14:25.832721 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" event={"ID":"b6d288e3-8e73-44d2-874d-64c6c98dd991","Type":"ContainerDied","Data":"419df3ddca2a5c92855e29992407a4f8d75d516e7e813a5cad7b23a3a032ee64"} Mar 12 18:14:25.832992 master-0 kubenswrapper[7337]: I0312 18:14:25.832570 7337 generic.go:334] "Generic (PLEG): container finished" podID="b6d288e3-8e73-44d2-874d-64c6c98dd991" containerID="419df3ddca2a5c92855e29992407a4f8d75d516e7e813a5cad7b23a3a032ee64" exitCode=0 Mar 12 18:14:25.835791 master-0 kubenswrapper[7337]: I0312 18:14:25.833625 7337 scope.go:117] "RemoveContainer" containerID="419df3ddca2a5c92855e29992407a4f8d75d516e7e813a5cad7b23a3a032ee64" Mar 12 18:14:26.016541 master-0 kubenswrapper[7337]: I0312 18:14:26.000681 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttgsx"] Mar 12 18:14:27.543842 master-0 kubenswrapper[7337]: I0312 18:14:27.541673 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ggkqg"] Mar 12 18:14:27.554659 master-0 kubenswrapper[7337]: I0312 18:14:27.554504 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nmmwm"] Mar 12 18:14:27.560713 master-0 kubenswrapper[7337]: I0312 18:14:27.560464 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 12 18:14:27.564644 master-0 kubenswrapper[7337]: I0312 18:14:27.564586 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:14:27.565825 master-0 kubenswrapper[7337]: I0312 18:14:27.565529 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 18:14:27.571361 master-0 kubenswrapper[7337]: I0312 18:14:27.569148 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-tr8hr" Mar 12 18:14:27.571361 master-0 kubenswrapper[7337]: I0312 18:14:27.569544 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-hhnmb" Mar 12 18:14:27.598769 master-0 kubenswrapper[7337]: I0312 18:14:27.597657 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ggkqg"] Mar 12 18:14:27.618546 master-0 kubenswrapper[7337]: I0312 18:14:27.613693 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 12 18:14:27.636490 master-0 kubenswrapper[7337]: I0312 18:14:27.636385 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ec8121ea-f6e9-4232-9837-78b278a8cf54-var-lock\") pod \"installer-2-master-0\" (UID: \"ec8121ea-f6e9-4232-9837-78b278a8cf54\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 18:14:27.636490 master-0 kubenswrapper[7337]: I0312 18:14:27.636454 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdc26\" (UniqueName: \"kubernetes.io/projected/0cc54e47-af53-448a-b1c9-043710890a32-kube-api-access-bdc26\") pod \"redhat-marketplace-ggkqg\" (UID: \"0cc54e47-af53-448a-b1c9-043710890a32\") " pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:14:27.636712 master-0 kubenswrapper[7337]: I0312 18:14:27.636501 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/ec8121ea-f6e9-4232-9837-78b278a8cf54-kube-api-access\") pod \"installer-2-master-0\" (UID: \"ec8121ea-f6e9-4232-9837-78b278a8cf54\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 18:14:27.636712 master-0 kubenswrapper[7337]: I0312 18:14:27.636601 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec8121ea-f6e9-4232-9837-78b278a8cf54-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"ec8121ea-f6e9-4232-9837-78b278a8cf54\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 18:14:27.636712 master-0 kubenswrapper[7337]: I0312 18:14:27.636632 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cc54e47-af53-448a-b1c9-043710890a32-catalog-content\") pod \"redhat-marketplace-ggkqg\" (UID: \"0cc54e47-af53-448a-b1c9-043710890a32\") " pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:14:27.636712 master-0 kubenswrapper[7337]: I0312 18:14:27.636648 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cc54e47-af53-448a-b1c9-043710890a32-utilities\") pod \"redhat-marketplace-ggkqg\" (UID: \"0cc54e47-af53-448a-b1c9-043710890a32\") " pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:14:27.680594 master-0 kubenswrapper[7337]: I0312 18:14:27.679447 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm"] Mar 12 18:14:27.680594 master-0 kubenswrapper[7337]: I0312 18:14:27.680487 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm" Mar 12 18:14:27.688827 master-0 kubenswrapper[7337]: I0312 18:14:27.685061 7337 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 12 18:14:27.688827 master-0 kubenswrapper[7337]: I0312 18:14:27.685112 7337 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 12 18:14:27.688827 master-0 kubenswrapper[7337]: E0312 18:14:27.685334 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" Mar 12 18:14:27.688827 master-0 kubenswrapper[7337]: I0312 18:14:27.685349 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" Mar 12 18:14:27.688827 master-0 kubenswrapper[7337]: E0312 18:14:27.685363 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" Mar 12 18:14:27.688827 master-0 kubenswrapper[7337]: I0312 18:14:27.685370 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" Mar 12 18:14:27.688827 master-0 kubenswrapper[7337]: I0312 18:14:27.685477 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" Mar 12 18:14:27.688827 master-0 kubenswrapper[7337]: I0312 18:14:27.685494 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" Mar 12 18:14:27.688827 master-0 kubenswrapper[7337]: I0312 18:14:27.687257 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 12 18:14:27.688827 master-0 kubenswrapper[7337]: I0312 18:14:27.687584 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" containerID="cri-o://b928346cccd3250ecd2b93ee414bcede71a1e719c241a242b6ebcdc2c000ed85" gracePeriod=30 Mar 12 18:14:27.688827 master-0 kubenswrapper[7337]: I0312 18:14:27.687714 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" containerID="cri-o://5528dd5a4e01f059b975dba420b1e42f7d38d29b15865763037c06d90dade8e7" gracePeriod=30 Mar 12 18:14:27.692894 master-0 kubenswrapper[7337]: I0312 18:14:27.691665 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 12 18:14:27.702112 master-0 kubenswrapper[7337]: I0312 18:14:27.701045 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 12 18:14:27.702112 master-0 kubenswrapper[7337]: I0312 18:14:27.701413 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-h5f5n" Mar 12 18:14:27.702112 master-0 kubenswrapper[7337]: I0312 18:14:27.701678 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 12 18:14:27.747369 master-0 kubenswrapper[7337]: I0312 18:14:27.735453 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm"] Mar 12 18:14:27.747369 master-0 kubenswrapper[7337]: I0312 18:14:27.738399 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/ec8121ea-f6e9-4232-9837-78b278a8cf54-var-lock\") pod \"installer-2-master-0\" (UID: \"ec8121ea-f6e9-4232-9837-78b278a8cf54\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 18:14:27.747369 master-0 kubenswrapper[7337]: I0312 18:14:27.738441 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdc26\" (UniqueName: \"kubernetes.io/projected/0cc54e47-af53-448a-b1c9-043710890a32-kube-api-access-bdc26\") pod \"redhat-marketplace-ggkqg\" (UID: \"0cc54e47-af53-448a-b1c9-043710890a32\") " pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:14:27.747369 master-0 kubenswrapper[7337]: I0312 18:14:27.738477 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec8121ea-f6e9-4232-9837-78b278a8cf54-kube-api-access\") pod \"installer-2-master-0\" (UID: \"ec8121ea-f6e9-4232-9837-78b278a8cf54\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 18:14:27.747369 master-0 kubenswrapper[7337]: I0312 18:14:27.738534 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec8121ea-f6e9-4232-9837-78b278a8cf54-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"ec8121ea-f6e9-4232-9837-78b278a8cf54\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 18:14:27.747369 master-0 kubenswrapper[7337]: I0312 18:14:27.738565 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cc54e47-af53-448a-b1c9-043710890a32-catalog-content\") pod \"redhat-marketplace-ggkqg\" (UID: \"0cc54e47-af53-448a-b1c9-043710890a32\") " pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:14:27.747369 master-0 kubenswrapper[7337]: I0312 18:14:27.738580 7337 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cc54e47-af53-448a-b1c9-043710890a32-utilities\") pod \"redhat-marketplace-ggkqg\" (UID: \"0cc54e47-af53-448a-b1c9-043710890a32\") " pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:14:27.747369 master-0 kubenswrapper[7337]: I0312 18:14:27.739048 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cc54e47-af53-448a-b1c9-043710890a32-utilities\") pod \"redhat-marketplace-ggkqg\" (UID: \"0cc54e47-af53-448a-b1c9-043710890a32\") " pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:14:27.747369 master-0 kubenswrapper[7337]: I0312 18:14:27.739839 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec8121ea-f6e9-4232-9837-78b278a8cf54-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"ec8121ea-f6e9-4232-9837-78b278a8cf54\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 18:14:27.747369 master-0 kubenswrapper[7337]: I0312 18:14:27.740193 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cc54e47-af53-448a-b1c9-043710890a32-catalog-content\") pod \"redhat-marketplace-ggkqg\" (UID: \"0cc54e47-af53-448a-b1c9-043710890a32\") " pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:14:27.747369 master-0 kubenswrapper[7337]: I0312 18:14:27.740235 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ec8121ea-f6e9-4232-9837-78b278a8cf54-var-lock\") pod \"installer-2-master-0\" (UID: \"ec8121ea-f6e9-4232-9837-78b278a8cf54\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 18:14:27.772549 master-0 kubenswrapper[7337]: I0312 18:14:27.763173 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-4s28n"] Mar 12 18:14:27.844640 master-0 kubenswrapper[7337]: I0312 18:14:27.844588 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:14:27.844640 master-0 kubenswrapper[7337]: I0312 18:14:27.844633 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:14:27.844794 master-0 kubenswrapper[7337]: I0312 18:14:27.844658 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:14:27.844794 master-0 kubenswrapper[7337]: I0312 18:14:27.844701 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:14:27.845001 master-0 kubenswrapper[7337]: I0312 18:14:27.844742 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:14:27.845001 master-0 
kubenswrapper[7337]: I0312 18:14:27.844921 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmsnk\" (UniqueName: \"kubernetes.io/projected/34cbf061-4c76-476e-bed9-0a133c744862-kube-api-access-gmsnk\") pod \"control-plane-machine-set-operator-6686554ddc-zd9gm\" (UID: \"34cbf061-4c76-476e-bed9-0a133c744862\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm" Mar 12 18:14:27.845087 master-0 kubenswrapper[7337]: I0312 18:14:27.845033 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:14:27.845134 master-0 kubenswrapper[7337]: I0312 18:14:27.845106 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/34cbf061-4c76-476e-bed9-0a133c744862-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-zd9gm\" (UID: \"34cbf061-4c76-476e-bed9-0a133c744862\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm" Mar 12 18:14:27.874626 master-0 kubenswrapper[7337]: I0312 18:14:27.874577 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" event={"ID":"b6d288e3-8e73-44d2-874d-64c6c98dd991","Type":"ContainerStarted","Data":"6c849a8a5d45f345d2ba87802bb0e5ab5b7b8e621fde5662c4d921f30c496726"} Mar 12 18:14:27.878649 master-0 kubenswrapper[7337]: I0312 18:14:27.878612 7337 generic.go:334] "Generic (PLEG): container finished" podID="e418d797-2c31-404b-9dc3-251399e42542" containerID="6b7528f0c5da1778fadc0415752a37a2983c5adfa27ce67313a93246b6745480" exitCode=0 Mar 12 
18:14:27.878728 master-0 kubenswrapper[7337]: I0312 18:14:27.878660 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"e418d797-2c31-404b-9dc3-251399e42542","Type":"ContainerDied","Data":"6b7528f0c5da1778fadc0415752a37a2983c5adfa27ce67313a93246b6745480"} Mar 12 18:14:27.880168 master-0 kubenswrapper[7337]: I0312 18:14:27.880145 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmmwm" event={"ID":"4519000b-e475-4c26-a1c0-bf05cd9c242b","Type":"ContainerStarted","Data":"c7f70b704680d5914ffad158cbccda7455cb9abad7ebd364fad668180fbeff37"} Mar 12 18:14:27.946019 master-0 kubenswrapper[7337]: I0312 18:14:27.945983 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:14:27.946186 master-0 kubenswrapper[7337]: I0312 18:14:27.946146 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:14:27.946254 master-0 kubenswrapper[7337]: I0312 18:14:27.946175 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmsnk\" (UniqueName: \"kubernetes.io/projected/34cbf061-4c76-476e-bed9-0a133c744862-kube-api-access-gmsnk\") pod \"control-plane-machine-set-operator-6686554ddc-zd9gm\" (UID: \"34cbf061-4c76-476e-bed9-0a133c744862\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm" Mar 12 18:14:27.946347 master-0 kubenswrapper[7337]: I0312 18:14:27.946335 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:14:27.946444 master-0 kubenswrapper[7337]: I0312 18:14:27.946428 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/34cbf061-4c76-476e-bed9-0a133c744862-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-zd9gm\" (UID: \"34cbf061-4c76-476e-bed9-0a133c744862\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm" Mar 12 18:14:27.946560 master-0 kubenswrapper[7337]: I0312 18:14:27.946448 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:14:27.946793 master-0 kubenswrapper[7337]: I0312 18:14:27.946779 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:14:27.946885 master-0 kubenswrapper[7337]: I0312 18:14:27.946873 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:14:27.946958 master-0 kubenswrapper[7337]: I0312 18:14:27.946947 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:14:27.947030 master-0 kubenswrapper[7337]: I0312 18:14:27.947015 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:14:27.947113 master-0 kubenswrapper[7337]: I0312 18:14:27.947088 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:14:27.947113 master-0 kubenswrapper[7337]: I0312 18:14:27.946834 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:14:27.947303 master-0 kubenswrapper[7337]: I0312 18:14:27.947284 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:14:27.947396 master-0 kubenswrapper[7337]: I0312 18:14:27.947383 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:14:27.949964 master-0 kubenswrapper[7337]: 
I0312 18:14:27.949719 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/34cbf061-4c76-476e-bed9-0a133c744862-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-zd9gm\" (UID: \"34cbf061-4c76-476e-bed9-0a133c744862\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm" Mar 12 18:14:28.886838 master-0 kubenswrapper[7337]: I0312 18:14:28.886792 7337 generic.go:334] "Generic (PLEG): container finished" podID="4519000b-e475-4c26-a1c0-bf05cd9c242b" containerID="4283fbeacecb2226a32d29824505e5362a9bc10b995fb624a688b75d67e46563" exitCode=0 Mar 12 18:14:28.887486 master-0 kubenswrapper[7337]: I0312 18:14:28.887463 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmmwm" event={"ID":"4519000b-e475-4c26-a1c0-bf05cd9c242b","Type":"ContainerDied","Data":"4283fbeacecb2226a32d29824505e5362a9bc10b995fb624a688b75d67e46563"} Mar 12 18:14:29.171959 master-0 kubenswrapper[7337]: I0312 18:14:29.171917 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 12 18:14:29.365143 master-0 kubenswrapper[7337]: I0312 18:14:29.365030 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e418d797-2c31-404b-9dc3-251399e42542-kube-api-access\") pod \"e418d797-2c31-404b-9dc3-251399e42542\" (UID: \"e418d797-2c31-404b-9dc3-251399e42542\") " Mar 12 18:14:29.365717 master-0 kubenswrapper[7337]: I0312 18:14:29.365420 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e418d797-2c31-404b-9dc3-251399e42542-var-lock\") pod \"e418d797-2c31-404b-9dc3-251399e42542\" (UID: \"e418d797-2c31-404b-9dc3-251399e42542\") " Mar 12 18:14:29.365717 master-0 kubenswrapper[7337]: I0312 18:14:29.365443 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e418d797-2c31-404b-9dc3-251399e42542-kubelet-dir\") pod \"e418d797-2c31-404b-9dc3-251399e42542\" (UID: \"e418d797-2c31-404b-9dc3-251399e42542\") " Mar 12 18:14:29.365717 master-0 kubenswrapper[7337]: I0312 18:14:29.365561 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e418d797-2c31-404b-9dc3-251399e42542-var-lock" (OuterVolumeSpecName: "var-lock") pod "e418d797-2c31-404b-9dc3-251399e42542" (UID: "e418d797-2c31-404b-9dc3-251399e42542"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:14:29.365717 master-0 kubenswrapper[7337]: I0312 18:14:29.365689 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e418d797-2c31-404b-9dc3-251399e42542-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e418d797-2c31-404b-9dc3-251399e42542" (UID: "e418d797-2c31-404b-9dc3-251399e42542"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:14:29.365935 master-0 kubenswrapper[7337]: I0312 18:14:29.365877 7337 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e418d797-2c31-404b-9dc3-251399e42542-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:29.365935 master-0 kubenswrapper[7337]: I0312 18:14:29.365897 7337 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e418d797-2c31-404b-9dc3-251399e42542-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:29.369417 master-0 kubenswrapper[7337]: I0312 18:14:29.369372 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e418d797-2c31-404b-9dc3-251399e42542-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e418d797-2c31-404b-9dc3-251399e42542" (UID: "e418d797-2c31-404b-9dc3-251399e42542"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:14:29.467261 master-0 kubenswrapper[7337]: I0312 18:14:29.467121 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e418d797-2c31-404b-9dc3-251399e42542-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:29.895104 master-0 kubenswrapper[7337]: I0312 18:14:29.895065 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"e418d797-2c31-404b-9dc3-251399e42542","Type":"ContainerDied","Data":"e39d12d9165077c1566bf86fdaa9d42c6abb87768cbd70c00423b7ab08d3f0d6"} Mar 12 18:14:29.895104 master-0 kubenswrapper[7337]: I0312 18:14:29.895098 7337 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e39d12d9165077c1566bf86fdaa9d42c6abb87768cbd70c00423b7ab08d3f0d6" Mar 12 18:14:29.895624 master-0 kubenswrapper[7337]: I0312 18:14:29.895171 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 12 18:14:40.853604 master-0 kubenswrapper[7337]: E0312 18:14:40.853547 7337 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 12 18:14:40.854112 master-0 kubenswrapper[7337]: I0312 18:14:40.853936 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 12 18:14:41.910295 master-0 kubenswrapper[7337]: E0312 18:14:41.910175 7337 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io master-0)" Mar 12 18:14:42.102826 master-0 kubenswrapper[7337]: E0312 18:14:42.102777 7337 request.go:1255] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body) Mar 12 18:14:42.103069 master-0 kubenswrapper[7337]: E0312 18:14:42.102941 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:14:32Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:14:32Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:14:32Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:14:32Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\
\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997}
,{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc
249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9\\\"],\\\"sizeBytes\\\":456575686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626\\\"],\\\"sizeBytes\\\":448828105},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\"],\\\"sizeBytes\\\":443271011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ceca1efee55b9fd5089428476bbc401fe73db7c0b0f5e16d4ad28ed0f0f9d43\\\"],\\\"sizeBytes\\\":438654375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ace4dcd008420277d915fe983b07bbb50fb3ab0673f28d0166424a75bc2137e7\\\"],\\\"sizeBytes\\\":411585608},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8f0fda36e9a2040dbe0537361dcd73658df4e669d846f8101a8f9f29f0be9a7\\\"],\\\"sizeBytes\\\":407347126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3\\\"],\\\"sizeBytes\\\":396521759}]}}\" for node \"master-0\": unexpected error when reading response body. Please retry. 
Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)" Mar 12 18:14:42.782062 master-0 kubenswrapper[7337]: I0312 18:14:42.781404 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:14:42.838682 master-0 kubenswrapper[7337]: I0312 18:14:42.838580 7337 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 12 18:14:42.956913 master-0 kubenswrapper[7337]: I0312 18:14:42.956825 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_8cd18201-afdc-4229-972e-ab01adb2a7f3/installer/0.log" Mar 12 18:14:42.957453 master-0 kubenswrapper[7337]: I0312 18:14:42.957395 7337 generic.go:334] "Generic (PLEG): container finished" podID="8cd18201-afdc-4229-972e-ab01adb2a7f3" containerID="ddcae4325b25bb06e6c7df16759097b61b22ac82cbe47445944983995281e38f" exitCode=1 Mar 12 18:14:42.957453 master-0 kubenswrapper[7337]: I0312 18:14:42.957425 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"8cd18201-afdc-4229-972e-ab01adb2a7f3","Type":"ContainerDied","Data":"ddcae4325b25bb06e6c7df16759097b61b22ac82cbe47445944983995281e38f"} Mar 12 18:14:43.198155 master-0 kubenswrapper[7337]: I0312 18:14:43.198119 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_8cd18201-afdc-4229-972e-ab01adb2a7f3/installer/0.log" Mar 12 18:14:43.198272 master-0 kubenswrapper[7337]: I0312 18:14:43.198192 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 12 18:14:43.272173 master-0 kubenswrapper[7337]: I0312 18:14:43.272064 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8cd18201-afdc-4229-972e-ab01adb2a7f3-kube-api-access\") pod \"8cd18201-afdc-4229-972e-ab01adb2a7f3\" (UID: \"8cd18201-afdc-4229-972e-ab01adb2a7f3\") " Mar 12 18:14:43.273034 master-0 kubenswrapper[7337]: I0312 18:14:43.272577 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8cd18201-afdc-4229-972e-ab01adb2a7f3-kubelet-dir\") pod \"8cd18201-afdc-4229-972e-ab01adb2a7f3\" (UID: \"8cd18201-afdc-4229-972e-ab01adb2a7f3\") " Mar 12 18:14:43.273034 master-0 kubenswrapper[7337]: I0312 18:14:43.272616 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8cd18201-afdc-4229-972e-ab01adb2a7f3-var-lock\") pod \"8cd18201-afdc-4229-972e-ab01adb2a7f3\" (UID: \"8cd18201-afdc-4229-972e-ab01adb2a7f3\") " Mar 12 18:14:43.273034 master-0 kubenswrapper[7337]: I0312 18:14:43.272676 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cd18201-afdc-4229-972e-ab01adb2a7f3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8cd18201-afdc-4229-972e-ab01adb2a7f3" (UID: "8cd18201-afdc-4229-972e-ab01adb2a7f3"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:14:43.273034 master-0 kubenswrapper[7337]: I0312 18:14:43.272872 7337 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8cd18201-afdc-4229-972e-ab01adb2a7f3-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:43.273034 master-0 kubenswrapper[7337]: I0312 18:14:43.272873 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cd18201-afdc-4229-972e-ab01adb2a7f3-var-lock" (OuterVolumeSpecName: "var-lock") pod "8cd18201-afdc-4229-972e-ab01adb2a7f3" (UID: "8cd18201-afdc-4229-972e-ab01adb2a7f3"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:14:43.274784 master-0 kubenswrapper[7337]: I0312 18:14:43.274738 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cd18201-afdc-4229-972e-ab01adb2a7f3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8cd18201-afdc-4229-972e-ab01adb2a7f3" (UID: "8cd18201-afdc-4229-972e-ab01adb2a7f3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:14:43.374048 master-0 kubenswrapper[7337]: I0312 18:14:43.374015 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8cd18201-afdc-4229-972e-ab01adb2a7f3-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:43.374048 master-0 kubenswrapper[7337]: I0312 18:14:43.374048 7337 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8cd18201-afdc-4229-972e-ab01adb2a7f3-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:43.975363 master-0 kubenswrapper[7337]: I0312 18:14:43.974670 7337 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="6541e49d2ae9b56107cb844297ee7f71d2445a28f09ad80d8048aba689f8fea0" exitCode=0 Mar 12 18:14:43.975363 master-0 kubenswrapper[7337]: I0312 18:14:43.974734 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"6541e49d2ae9b56107cb844297ee7f71d2445a28f09ad80d8048aba689f8fea0"} Mar 12 18:14:43.975363 master-0 kubenswrapper[7337]: I0312 18:14:43.974761 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"3ca81ffdae2f15697f3c488787228f9ca024d315fe2672358c6b274976669bab"} Mar 12 18:14:43.977149 master-0 kubenswrapper[7337]: I0312 18:14:43.977077 7337 generic.go:334] "Generic (PLEG): container finished" podID="4519000b-e475-4c26-a1c0-bf05cd9c242b" containerID="41524a86d9adbef4539f0c75d4ef6de3fe400da88552adc9ce67756160dee015" exitCode=0 Mar 12 18:14:43.977149 master-0 kubenswrapper[7337]: I0312 18:14:43.977126 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmmwm" 
event={"ID":"4519000b-e475-4c26-a1c0-bf05cd9c242b","Type":"ContainerDied","Data":"41524a86d9adbef4539f0c75d4ef6de3fe400da88552adc9ce67756160dee015"} Mar 12 18:14:43.981369 master-0 kubenswrapper[7337]: I0312 18:14:43.981024 7337 generic.go:334] "Generic (PLEG): container finished" podID="2331fc4b-e67b-4496-8cae-15cd11cf3030" containerID="fe8d123856b6023873f1164bbd199c7180cfdbcb864e6f1cc5e0a35e972c71d7" exitCode=0 Mar 12 18:14:43.981369 master-0 kubenswrapper[7337]: I0312 18:14:43.981097 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-67j2w" event={"ID":"2331fc4b-e67b-4496-8cae-15cd11cf3030","Type":"ContainerDied","Data":"fe8d123856b6023873f1164bbd199c7180cfdbcb864e6f1cc5e0a35e972c71d7"} Mar 12 18:14:43.983629 master-0 kubenswrapper[7337]: I0312 18:14:43.983380 7337 generic.go:334] "Generic (PLEG): container finished" podID="f516dab9-06d1-4bea-96b9-8f3e14543bbd" containerID="4a23cf392c1f48fb4a01accc9b6c7554887906db0aaba3e56bb271f2f78c68b7" exitCode=0 Mar 12 18:14:43.983629 master-0 kubenswrapper[7337]: I0312 18:14:43.983444 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2vwn7" event={"ID":"f516dab9-06d1-4bea-96b9-8f3e14543bbd","Type":"ContainerDied","Data":"4a23cf392c1f48fb4a01accc9b6c7554887906db0aaba3e56bb271f2f78c68b7"} Mar 12 18:14:43.986747 master-0 kubenswrapper[7337]: I0312 18:14:43.986658 7337 generic.go:334] "Generic (PLEG): container finished" podID="b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c" containerID="ccd65fc043716d0888637293f37bd7ecd513c907229d11d94e53b330518e6446" exitCode=0 Mar 12 18:14:43.986747 master-0 kubenswrapper[7337]: I0312 18:14:43.986716 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jhwp" event={"ID":"b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c","Type":"ContainerDied","Data":"ccd65fc043716d0888637293f37bd7ecd513c907229d11d94e53b330518e6446"} Mar 12 18:14:43.989041 master-0 
kubenswrapper[7337]: I0312 18:14:43.988935 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s28n" event={"ID":"9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a","Type":"ContainerStarted","Data":"9ba0bcd4c72dc8fb057b2371e60e63afb0cbf4071ff91aa73df8a681772fe246"} Mar 12 18:14:43.989206 master-0 kubenswrapper[7337]: I0312 18:14:43.989165 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4s28n" podUID="9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a" containerName="extract-content" containerID="cri-o://9ba0bcd4c72dc8fb057b2371e60e63afb0cbf4071ff91aa73df8a681772fe246" gracePeriod=2 Mar 12 18:14:43.991709 master-0 kubenswrapper[7337]: I0312 18:14:43.991676 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_8cd18201-afdc-4229-972e-ab01adb2a7f3/installer/0.log" Mar 12 18:14:43.991864 master-0 kubenswrapper[7337]: I0312 18:14:43.991796 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"8cd18201-afdc-4229-972e-ab01adb2a7f3","Type":"ContainerDied","Data":"be0c2d6cb5987b5b04f809bb69ec34be4077cf368f8ec9a517894818769dcd9b"} Mar 12 18:14:43.991864 master-0 kubenswrapper[7337]: I0312 18:14:43.991804 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 12 18:14:43.991948 master-0 kubenswrapper[7337]: I0312 18:14:43.991843 7337 scope.go:117] "RemoveContainer" containerID="ddcae4325b25bb06e6c7df16759097b61b22ac82cbe47445944983995281e38f" Mar 12 18:14:43.995644 master-0 kubenswrapper[7337]: I0312 18:14:43.994838 7337 generic.go:334] "Generic (PLEG): container finished" podID="c4d20441-ec1f-4571-b590-989f2bdd4082" containerID="13e9779caaa66e3b82a1adb472b0e7f8855e2a4fe36755b17d281fbf929cf9ea" exitCode=0 Mar 12 18:14:43.995644 master-0 kubenswrapper[7337]: I0312 18:14:43.994881 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttgsx" event={"ID":"c4d20441-ec1f-4571-b590-989f2bdd4082","Type":"ContainerDied","Data":"13e9779caaa66e3b82a1adb472b0e7f8855e2a4fe36755b17d281fbf929cf9ea"} Mar 12 18:14:43.998971 master-0 kubenswrapper[7337]: I0312 18:14:43.998915 7337 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="95d38f1431066968104bb51ab17f5c680fc28063e6ba5f01ad252c4fc619c1e1" exitCode=1 Mar 12 18:14:43.999671 master-0 kubenswrapper[7337]: I0312 18:14:43.999133 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"95d38f1431066968104bb51ab17f5c680fc28063e6ba5f01ad252c4fc619c1e1"} Mar 12 18:14:43.999671 master-0 kubenswrapper[7337]: I0312 18:14:43.999584 7337 scope.go:117] "RemoveContainer" containerID="95d38f1431066968104bb51ab17f5c680fc28063e6ba5f01ad252c4fc619c1e1" Mar 12 18:14:44.057420 master-0 kubenswrapper[7337]: I0312 18:14:44.057304 7337 scope.go:117] "RemoveContainer" containerID="d26315a3bde904b11ba5d9d409301a02b1633a540a8d5ec716c4b45d6b097f49" Mar 12 18:14:44.331544 master-0 kubenswrapper[7337]: I0312 18:14:44.331446 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2vwn7" Mar 12 18:14:44.372301 master-0 kubenswrapper[7337]: I0312 18:14:44.372266 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-67j2w" Mar 12 18:14:44.376902 master-0 kubenswrapper[7337]: I0312 18:14:44.376861 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttgsx" Mar 12 18:14:44.386049 master-0 kubenswrapper[7337]: I0312 18:14:44.386010 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvd6n\" (UniqueName: \"kubernetes.io/projected/f516dab9-06d1-4bea-96b9-8f3e14543bbd-kube-api-access-qvd6n\") pod \"f516dab9-06d1-4bea-96b9-8f3e14543bbd\" (UID: \"f516dab9-06d1-4bea-96b9-8f3e14543bbd\") " Mar 12 18:14:44.386187 master-0 kubenswrapper[7337]: I0312 18:14:44.386086 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f516dab9-06d1-4bea-96b9-8f3e14543bbd-catalog-content\") pod \"f516dab9-06d1-4bea-96b9-8f3e14543bbd\" (UID: \"f516dab9-06d1-4bea-96b9-8f3e14543bbd\") " Mar 12 18:14:44.386471 master-0 kubenswrapper[7337]: I0312 18:14:44.386243 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f516dab9-06d1-4bea-96b9-8f3e14543bbd-utilities\") pod \"f516dab9-06d1-4bea-96b9-8f3e14543bbd\" (UID: \"f516dab9-06d1-4bea-96b9-8f3e14543bbd\") " Mar 12 18:14:44.388178 master-0 kubenswrapper[7337]: I0312 18:14:44.386920 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f516dab9-06d1-4bea-96b9-8f3e14543bbd-utilities" (OuterVolumeSpecName: "utilities") pod "f516dab9-06d1-4bea-96b9-8f3e14543bbd" (UID: "f516dab9-06d1-4bea-96b9-8f3e14543bbd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:14:44.390188 master-0 kubenswrapper[7337]: I0312 18:14:44.388702 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f516dab9-06d1-4bea-96b9-8f3e14543bbd-kube-api-access-qvd6n" (OuterVolumeSpecName: "kube-api-access-qvd6n") pod "f516dab9-06d1-4bea-96b9-8f3e14543bbd" (UID: "f516dab9-06d1-4bea-96b9-8f3e14543bbd"). InnerVolumeSpecName "kube-api-access-qvd6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:14:44.528025 master-0 kubenswrapper[7337]: I0312 18:14:44.490996 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2331fc4b-e67b-4496-8cae-15cd11cf3030-catalog-content\") pod \"2331fc4b-e67b-4496-8cae-15cd11cf3030\" (UID: \"2331fc4b-e67b-4496-8cae-15cd11cf3030\") " Mar 12 18:14:44.528025 master-0 kubenswrapper[7337]: I0312 18:14:44.491140 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2331fc4b-e67b-4496-8cae-15cd11cf3030-utilities\") pod \"2331fc4b-e67b-4496-8cae-15cd11cf3030\" (UID: \"2331fc4b-e67b-4496-8cae-15cd11cf3030\") " Mar 12 18:14:44.528025 master-0 kubenswrapper[7337]: I0312 18:14:44.491199 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4d20441-ec1f-4571-b590-989f2bdd4082-catalog-content\") pod \"c4d20441-ec1f-4571-b590-989f2bdd4082\" (UID: \"c4d20441-ec1f-4571-b590-989f2bdd4082\") " Mar 12 18:14:44.528025 master-0 kubenswrapper[7337]: I0312 18:14:44.491242 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4d20441-ec1f-4571-b590-989f2bdd4082-utilities\") pod \"c4d20441-ec1f-4571-b590-989f2bdd4082\" (UID: \"c4d20441-ec1f-4571-b590-989f2bdd4082\") " Mar 12 
18:14:44.528025 master-0 kubenswrapper[7337]: I0312 18:14:44.491315 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh6b2\" (UniqueName: \"kubernetes.io/projected/c4d20441-ec1f-4571-b590-989f2bdd4082-kube-api-access-xh6b2\") pod \"c4d20441-ec1f-4571-b590-989f2bdd4082\" (UID: \"c4d20441-ec1f-4571-b590-989f2bdd4082\") " Mar 12 18:14:44.528025 master-0 kubenswrapper[7337]: I0312 18:14:44.491342 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhjpk\" (UniqueName: \"kubernetes.io/projected/2331fc4b-e67b-4496-8cae-15cd11cf3030-kube-api-access-nhjpk\") pod \"2331fc4b-e67b-4496-8cae-15cd11cf3030\" (UID: \"2331fc4b-e67b-4496-8cae-15cd11cf3030\") " Mar 12 18:14:44.528025 master-0 kubenswrapper[7337]: I0312 18:14:44.491759 7337 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f516dab9-06d1-4bea-96b9-8f3e14543bbd-utilities\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:44.528025 master-0 kubenswrapper[7337]: I0312 18:14:44.491778 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvd6n\" (UniqueName: \"kubernetes.io/projected/f516dab9-06d1-4bea-96b9-8f3e14543bbd-kube-api-access-qvd6n\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:44.528025 master-0 kubenswrapper[7337]: I0312 18:14:44.492311 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4d20441-ec1f-4571-b590-989f2bdd4082-utilities" (OuterVolumeSpecName: "utilities") pod "c4d20441-ec1f-4571-b590-989f2bdd4082" (UID: "c4d20441-ec1f-4571-b590-989f2bdd4082"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:14:44.528025 master-0 kubenswrapper[7337]: I0312 18:14:44.492367 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2331fc4b-e67b-4496-8cae-15cd11cf3030-utilities" (OuterVolumeSpecName: "utilities") pod "2331fc4b-e67b-4496-8cae-15cd11cf3030" (UID: "2331fc4b-e67b-4496-8cae-15cd11cf3030"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:14:44.529571 master-0 kubenswrapper[7337]: I0312 18:14:44.528956 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4d20441-ec1f-4571-b590-989f2bdd4082-kube-api-access-xh6b2" (OuterVolumeSpecName: "kube-api-access-xh6b2") pod "c4d20441-ec1f-4571-b590-989f2bdd4082" (UID: "c4d20441-ec1f-4571-b590-989f2bdd4082"). InnerVolumeSpecName "kube-api-access-xh6b2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:14:44.529571 master-0 kubenswrapper[7337]: I0312 18:14:44.529107 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2331fc4b-e67b-4496-8cae-15cd11cf3030-kube-api-access-nhjpk" (OuterVolumeSpecName: "kube-api-access-nhjpk") pod "2331fc4b-e67b-4496-8cae-15cd11cf3030" (UID: "2331fc4b-e67b-4496-8cae-15cd11cf3030"). InnerVolumeSpecName "kube-api-access-nhjpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:14:44.531300 master-0 kubenswrapper[7337]: I0312 18:14:44.531212 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4d20441-ec1f-4571-b590-989f2bdd4082-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4d20441-ec1f-4571-b590-989f2bdd4082" (UID: "c4d20441-ec1f-4571-b590-989f2bdd4082"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:14:44.597234 master-0 kubenswrapper[7337]: I0312 18:14:44.593005 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh6b2\" (UniqueName: \"kubernetes.io/projected/c4d20441-ec1f-4571-b590-989f2bdd4082-kube-api-access-xh6b2\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:44.597234 master-0 kubenswrapper[7337]: I0312 18:14:44.593039 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhjpk\" (UniqueName: \"kubernetes.io/projected/2331fc4b-e67b-4496-8cae-15cd11cf3030-kube-api-access-nhjpk\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:44.597234 master-0 kubenswrapper[7337]: I0312 18:14:44.593052 7337 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2331fc4b-e67b-4496-8cae-15cd11cf3030-utilities\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:44.597234 master-0 kubenswrapper[7337]: I0312 18:14:44.593061 7337 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4d20441-ec1f-4571-b590-989f2bdd4082-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:44.597234 master-0 kubenswrapper[7337]: I0312 18:14:44.593070 7337 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4d20441-ec1f-4571-b590-989f2bdd4082-utilities\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:44.620778 master-0 kubenswrapper[7337]: I0312 18:14:44.620744 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4s28n_9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a/extract-content/0.log" Mar 12 18:14:44.621177 master-0 kubenswrapper[7337]: I0312 18:14:44.621148 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4s28n" Mar 12 18:14:44.694220 master-0 kubenswrapper[7337]: I0312 18:14:44.694182 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6q2f\" (UniqueName: \"kubernetes.io/projected/9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a-kube-api-access-b6q2f\") pod \"9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a\" (UID: \"9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a\") " Mar 12 18:14:44.694412 master-0 kubenswrapper[7337]: I0312 18:14:44.694384 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a-utilities\") pod \"9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a\" (UID: \"9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a\") " Mar 12 18:14:44.694453 master-0 kubenswrapper[7337]: I0312 18:14:44.694434 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a-catalog-content\") pod \"9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a\" (UID: \"9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a\") " Mar 12 18:14:44.696393 master-0 kubenswrapper[7337]: I0312 18:14:44.696333 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a-utilities" (OuterVolumeSpecName: "utilities") pod "9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a" (UID: "9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:14:44.697673 master-0 kubenswrapper[7337]: I0312 18:14:44.697627 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a-kube-api-access-b6q2f" (OuterVolumeSpecName: "kube-api-access-b6q2f") pod "9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a" (UID: "9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a"). 
InnerVolumeSpecName "kube-api-access-b6q2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:14:44.795784 master-0 kubenswrapper[7337]: I0312 18:14:44.795700 7337 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a-utilities\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:44.795784 master-0 kubenswrapper[7337]: I0312 18:14:44.795781 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6q2f\" (UniqueName: \"kubernetes.io/projected/9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a-kube-api-access-b6q2f\") on node \"master-0\" DevicePath \"\"" Mar 12 18:14:45.007660 master-0 kubenswrapper[7337]: I0312 18:14:45.007573 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2vwn7" event={"ID":"f516dab9-06d1-4bea-96b9-8f3e14543bbd","Type":"ContainerDied","Data":"4de2af16396bc69bdc5784e33e403629152d1e0770da205647afdf23ab5a2699"} Mar 12 18:14:45.007660 master-0 kubenswrapper[7337]: I0312 18:14:45.007668 7337 scope.go:117] "RemoveContainer" containerID="4a23cf392c1f48fb4a01accc9b6c7554887906db0aaba3e56bb271f2f78c68b7" Mar 12 18:14:45.008466 master-0 kubenswrapper[7337]: I0312 18:14:45.007843 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2vwn7" Mar 12 18:14:45.011082 master-0 kubenswrapper[7337]: I0312 18:14:45.011033 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttgsx" event={"ID":"c4d20441-ec1f-4571-b590-989f2bdd4082","Type":"ContainerDied","Data":"6358f2755396ee336fcf40ab974ee22dbd1b6e3333c3183dc2a8cde56949aaa6"} Mar 12 18:14:45.011255 master-0 kubenswrapper[7337]: I0312 18:14:45.011219 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttgsx" Mar 12 18:14:45.018691 master-0 kubenswrapper[7337]: I0312 18:14:45.018658 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4s28n_9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a/extract-content/0.log" Mar 12 18:14:45.018980 master-0 kubenswrapper[7337]: I0312 18:14:45.018952 7337 generic.go:334] "Generic (PLEG): container finished" podID="9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a" containerID="9ba0bcd4c72dc8fb057b2371e60e63afb0cbf4071ff91aa73df8a681772fe246" exitCode=2 Mar 12 18:14:45.019049 master-0 kubenswrapper[7337]: I0312 18:14:45.019028 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s28n" event={"ID":"9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a","Type":"ContainerDied","Data":"9ba0bcd4c72dc8fb057b2371e60e63afb0cbf4071ff91aa73df8a681772fe246"} Mar 12 18:14:45.019082 master-0 kubenswrapper[7337]: I0312 18:14:45.019072 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s28n" event={"ID":"9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a","Type":"ContainerDied","Data":"585e569ac901da0beb62f12e0509996b671a526fd344836c25da805ed3a58520"} Mar 12 18:14:45.019183 master-0 kubenswrapper[7337]: I0312 18:14:45.019171 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4s28n" Mar 12 18:14:45.021212 master-0 kubenswrapper[7337]: I0312 18:14:45.021193 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"e36200a118a6b9e121bed30c8ddbc2aa00c9c281d2b93d4ca81dd8623b5aea84"} Mar 12 18:14:45.031069 master-0 kubenswrapper[7337]: I0312 18:14:45.030952 7337 scope.go:117] "RemoveContainer" containerID="8d16f3acc4f022fe83bca592108b4093b85b763774386a13c0e05aeaad9a6250" Mar 12 18:14:45.031382 master-0 kubenswrapper[7337]: I0312 18:14:45.031335 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-m6hsp_223a548b-a3ad-40dd-82de-e3dbb7f3e4fa/openshift-controller-manager-operator/0.log" Mar 12 18:14:45.031465 master-0 kubenswrapper[7337]: I0312 18:14:45.031422 7337 generic.go:334] "Generic (PLEG): container finished" podID="223a548b-a3ad-40dd-82de-e3dbb7f3e4fa" containerID="05c0afcccf4bf3051eac46ea2747146033d8dbf283902873560ad4999c7825f8" exitCode=1 Mar 12 18:14:45.031630 master-0 kubenswrapper[7337]: I0312 18:14:45.031581 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp" event={"ID":"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa","Type":"ContainerDied","Data":"05c0afcccf4bf3051eac46ea2747146033d8dbf283902873560ad4999c7825f8"} Mar 12 18:14:45.032340 master-0 kubenswrapper[7337]: I0312 18:14:45.032216 7337 scope.go:117] "RemoveContainer" containerID="05c0afcccf4bf3051eac46ea2747146033d8dbf283902873560ad4999c7825f8" Mar 12 18:14:45.035428 master-0 kubenswrapper[7337]: I0312 18:14:45.035073 7337 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" 
containerID="7905195025f5bcb8de89213d12f95430c8990ba81078908548bc95e6c97e2325" exitCode=1 Mar 12 18:14:45.035428 master-0 kubenswrapper[7337]: I0312 18:14:45.035150 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerDied","Data":"7905195025f5bcb8de89213d12f95430c8990ba81078908548bc95e6c97e2325"} Mar 12 18:14:45.035945 master-0 kubenswrapper[7337]: I0312 18:14:45.035836 7337 scope.go:117] "RemoveContainer" containerID="7905195025f5bcb8de89213d12f95430c8990ba81078908548bc95e6c97e2325" Mar 12 18:14:45.041209 master-0 kubenswrapper[7337]: I0312 18:14:45.041134 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-67j2w" event={"ID":"2331fc4b-e67b-4496-8cae-15cd11cf3030","Type":"ContainerDied","Data":"7a43ad09f77b7a8aa69873fac95df23196a6defbd82a520740d6f8230dd00f99"} Mar 12 18:14:45.041209 master-0 kubenswrapper[7337]: I0312 18:14:45.041193 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-67j2w"
Mar 12 18:14:45.050020 master-0 kubenswrapper[7337]: I0312 18:14:45.049826 7337 scope.go:117] "RemoveContainer" containerID="13e9779caaa66e3b82a1adb472b0e7f8855e2a4fe36755b17d281fbf929cf9ea"
Mar 12 18:14:45.066843 master-0 kubenswrapper[7337]: I0312 18:14:45.066799 7337 scope.go:117] "RemoveContainer" containerID="1582d8faa27008c19af24da731144e2addf1cd4e41c10a2ce5d99f3806afefa4"
Mar 12 18:14:45.086833 master-0 kubenswrapper[7337]: I0312 18:14:45.086793 7337 scope.go:117] "RemoveContainer" containerID="9ba0bcd4c72dc8fb057b2371e60e63afb0cbf4071ff91aa73df8a681772fe246"
Mar 12 18:14:45.114899 master-0 kubenswrapper[7337]: I0312 18:14:45.114850 7337 scope.go:117] "RemoveContainer" containerID="24a42c905bc5668eb7d21dc5a1cb3b30cf1599f79aa0daff8b89099b0e103199"
Mar 12 18:14:45.158722 master-0 kubenswrapper[7337]: I0312 18:14:45.158577 7337 scope.go:117] "RemoveContainer" containerID="9ba0bcd4c72dc8fb057b2371e60e63afb0cbf4071ff91aa73df8a681772fe246"
Mar 12 18:14:45.159159 master-0 kubenswrapper[7337]: E0312 18:14:45.159114 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ba0bcd4c72dc8fb057b2371e60e63afb0cbf4071ff91aa73df8a681772fe246\": container with ID starting with 9ba0bcd4c72dc8fb057b2371e60e63afb0cbf4071ff91aa73df8a681772fe246 not found: ID does not exist" containerID="9ba0bcd4c72dc8fb057b2371e60e63afb0cbf4071ff91aa73df8a681772fe246"
Mar 12 18:14:45.159221 master-0 kubenswrapper[7337]: I0312 18:14:45.159156 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ba0bcd4c72dc8fb057b2371e60e63afb0cbf4071ff91aa73df8a681772fe246"} err="failed to get container status \"9ba0bcd4c72dc8fb057b2371e60e63afb0cbf4071ff91aa73df8a681772fe246\": rpc error: code = NotFound desc = could not find container \"9ba0bcd4c72dc8fb057b2371e60e63afb0cbf4071ff91aa73df8a681772fe246\": container with ID starting with 9ba0bcd4c72dc8fb057b2371e60e63afb0cbf4071ff91aa73df8a681772fe246 not found: ID does not exist"
Mar 12 18:14:45.159221 master-0 kubenswrapper[7337]: I0312 18:14:45.159192 7337 scope.go:117] "RemoveContainer" containerID="24a42c905bc5668eb7d21dc5a1cb3b30cf1599f79aa0daff8b89099b0e103199"
Mar 12 18:14:45.159550 master-0 kubenswrapper[7337]: E0312 18:14:45.159497 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24a42c905bc5668eb7d21dc5a1cb3b30cf1599f79aa0daff8b89099b0e103199\": container with ID starting with 24a42c905bc5668eb7d21dc5a1cb3b30cf1599f79aa0daff8b89099b0e103199 not found: ID does not exist" containerID="24a42c905bc5668eb7d21dc5a1cb3b30cf1599f79aa0daff8b89099b0e103199"
Mar 12 18:14:45.159775 master-0 kubenswrapper[7337]: I0312 18:14:45.159733 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24a42c905bc5668eb7d21dc5a1cb3b30cf1599f79aa0daff8b89099b0e103199"} err="failed to get container status \"24a42c905bc5668eb7d21dc5a1cb3b30cf1599f79aa0daff8b89099b0e103199\": rpc error: code = NotFound desc = could not find container \"24a42c905bc5668eb7d21dc5a1cb3b30cf1599f79aa0daff8b89099b0e103199\": container with ID starting with 24a42c905bc5668eb7d21dc5a1cb3b30cf1599f79aa0daff8b89099b0e103199 not found: ID does not exist"
Mar 12 18:14:45.159775 master-0 kubenswrapper[7337]: I0312 18:14:45.159765 7337 scope.go:117] "RemoveContainer" containerID="fe8d123856b6023873f1164bbd199c7180cfdbcb864e6f1cc5e0a35e972c71d7"
Mar 12 18:14:45.214723 master-0 kubenswrapper[7337]: I0312 18:14:45.203679 7337 scope.go:117] "RemoveContainer" containerID="a521c0a0426e21cb131251658526486bf3748aec5ae19ea452db84c768f630d0"
Mar 12 18:14:45.557753 master-0 kubenswrapper[7337]: I0312 18:14:45.557673 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f516dab9-06d1-4bea-96b9-8f3e14543bbd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f516dab9-06d1-4bea-96b9-8f3e14543bbd" (UID: "f516dab9-06d1-4bea-96b9-8f3e14543bbd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 18:14:45.608062 master-0 kubenswrapper[7337]: I0312 18:14:45.608007 7337 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f516dab9-06d1-4bea-96b9-8f3e14543bbd-catalog-content\") on node \"master-0\" DevicePath \"\""
Mar 12 18:14:45.892927 master-0 kubenswrapper[7337]: I0312 18:14:45.892615 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2331fc4b-e67b-4496-8cae-15cd11cf3030-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2331fc4b-e67b-4496-8cae-15cd11cf3030" (UID: "2331fc4b-e67b-4496-8cae-15cd11cf3030"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 18:14:45.913128 master-0 kubenswrapper[7337]: I0312 18:14:45.913044 7337 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2331fc4b-e67b-4496-8cae-15cd11cf3030-catalog-content\") on node \"master-0\" DevicePath \"\""
Mar 12 18:14:46.048852 master-0 kubenswrapper[7337]: I0312 18:14:46.048810 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"ab77ac8c9287ab57ea2a467e8e1f1dee411c2f94f5642f785b9b7baa2542752b"}
Mar 12 18:14:46.056806 master-0 kubenswrapper[7337]: I0312 18:14:46.056781 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-m6hsp_223a548b-a3ad-40dd-82de-e3dbb7f3e4fa/openshift-controller-manager-operator/0.log"
Mar 12 18:14:46.057025 master-0 kubenswrapper[7337]: I0312 18:14:46.056836 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp" event={"ID":"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa","Type":"ContainerStarted","Data":"3e81068034bf9c9fbfc0dcacd5d8ed6f99d4b966db54edfeaa5ae37af6e0a1a5"}
Mar 12 18:14:46.839242 master-0 kubenswrapper[7337]: I0312 18:14:46.839192 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a" (UID: "9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 18:14:46.924871 master-0 kubenswrapper[7337]: I0312 18:14:46.924797 7337 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a-catalog-content\") on node \"master-0\" DevicePath \"\""
Mar 12 18:14:50.746955 master-0 kubenswrapper[7337]: I0312 18:14:50.746899 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:14:51.911225 master-0 kubenswrapper[7337]: E0312 18:14:51.911039 7337 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:14:52.104562 master-0 kubenswrapper[7337]: E0312 18:14:52.103799 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:14:52.838397 master-0 kubenswrapper[7337]: I0312 18:14:52.838310 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 18:14:53.747857 master-0 kubenswrapper[7337]: I0312 18:14:53.747782 7337 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:14:54.095968 master-0 kubenswrapper[7337]: I0312 18:14:54.095898 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6jhwp" event={"ID":"b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c","Type":"ContainerStarted","Data":"0cd9358fdbbb9eecb3911428cb6d2c080f63b7c3a8411d6ce24be9cec10f1f9b"}
Mar 12 18:14:54.097707 master-0 kubenswrapper[7337]: I0312 18:14:54.097668 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmmwm" event={"ID":"4519000b-e475-4c26-a1c0-bf05cd9c242b","Type":"ContainerStarted","Data":"af460f32b152de65e9fcce9ef6ac7f65117d787bdb27713f388bdc1892d24da9"}
Mar 12 18:14:55.104279 master-0 kubenswrapper[7337]: I0312 18:14:55.104172 7337 generic.go:334] "Generic (PLEG): container finished" podID="354f29997baa583b6238f7de9108ee10" containerID="5528dd5a4e01f059b975dba420b1e42f7d38d29b15865763037c06d90dade8e7" exitCode=0
Mar 12 18:14:55.688622 master-0 kubenswrapper[7337]: I0312 18:14:55.688551 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nmmwm"
Mar 12 18:14:55.688622 master-0 kubenswrapper[7337]: I0312 18:14:55.688610 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nmmwm"
Mar 12 18:14:55.738943 master-0 kubenswrapper[7337]: I0312 18:14:55.738890 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nmmwm"
Mar 12 18:14:56.111364 master-0 kubenswrapper[7337]: I0312 18:14:56.111263 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_38785e6e-3052-405c-8874-4f295985def5/installer/0.log"
Mar 12 18:14:56.111364 master-0 kubenswrapper[7337]: I0312 18:14:56.111317 7337 generic.go:334] "Generic (PLEG): container finished" podID="38785e6e-3052-405c-8874-4f295985def5" containerID="ad09860af65a7f4806ecc5c16545e1e14574d76310388c1e9bda798b177013f0" exitCode=1
Mar 12 18:14:56.112255 master-0 kubenswrapper[7337]: I0312 18:14:56.112054 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"38785e6e-3052-405c-8874-4f295985def5","Type":"ContainerDied","Data":"ad09860af65a7f4806ecc5c16545e1e14574d76310388c1e9bda798b177013f0"}
Mar 12 18:14:56.985920 master-0 kubenswrapper[7337]: E0312 18:14:56.985796 7337 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Mar 12 18:14:57.403105 master-0 kubenswrapper[7337]: I0312 18:14:57.403043 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_38785e6e-3052-405c-8874-4f295985def5/installer/0.log"
Mar 12 18:14:57.403105 master-0 kubenswrapper[7337]: I0312 18:14:57.403109 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 18:14:57.458659 master-0 kubenswrapper[7337]: I0312 18:14:57.458585 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38785e6e-3052-405c-8874-4f295985def5-kubelet-dir\") pod \"38785e6e-3052-405c-8874-4f295985def5\" (UID: \"38785e6e-3052-405c-8874-4f295985def5\") "
Mar 12 18:14:57.458880 master-0 kubenswrapper[7337]: I0312 18:14:57.458702 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38785e6e-3052-405c-8874-4f295985def5-kube-api-access\") pod \"38785e6e-3052-405c-8874-4f295985def5\" (UID: \"38785e6e-3052-405c-8874-4f295985def5\") "
Mar 12 18:14:57.458880 master-0 kubenswrapper[7337]: I0312 18:14:57.458735 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/38785e6e-3052-405c-8874-4f295985def5-var-lock\") pod \"38785e6e-3052-405c-8874-4f295985def5\" (UID: \"38785e6e-3052-405c-8874-4f295985def5\") "
Mar 12 18:14:57.458880 master-0 kubenswrapper[7337]: I0312 18:14:57.458756 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38785e6e-3052-405c-8874-4f295985def5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "38785e6e-3052-405c-8874-4f295985def5" (UID: "38785e6e-3052-405c-8874-4f295985def5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:14:57.459008 master-0 kubenswrapper[7337]: I0312 18:14:57.458912 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38785e6e-3052-405c-8874-4f295985def5-var-lock" (OuterVolumeSpecName: "var-lock") pod "38785e6e-3052-405c-8874-4f295985def5" (UID: "38785e6e-3052-405c-8874-4f295985def5"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:14:57.459008 master-0 kubenswrapper[7337]: I0312 18:14:57.458966 7337 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38785e6e-3052-405c-8874-4f295985def5-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 18:14:57.462059 master-0 kubenswrapper[7337]: I0312 18:14:57.462015 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38785e6e-3052-405c-8874-4f295985def5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "38785e6e-3052-405c-8874-4f295985def5" (UID: "38785e6e-3052-405c-8874-4f295985def5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:14:57.559821 master-0 kubenswrapper[7337]: I0312 18:14:57.559761 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38785e6e-3052-405c-8874-4f295985def5-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 12 18:14:57.559821 master-0 kubenswrapper[7337]: I0312 18:14:57.559811 7337 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/38785e6e-3052-405c-8874-4f295985def5-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 18:14:57.790905 master-0 kubenswrapper[7337]: I0312 18:14:57.790852 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_354f29997baa583b6238f7de9108ee10/etcdctl/0.log"
Mar 12 18:14:57.791099 master-0 kubenswrapper[7337]: I0312 18:14:57.790982 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 18:14:57.862613 master-0 kubenswrapper[7337]: I0312 18:14:57.862290 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"354f29997baa583b6238f7de9108ee10\" (UID: \"354f29997baa583b6238f7de9108ee10\") "
Mar 12 18:14:57.862613 master-0 kubenswrapper[7337]: I0312 18:14:57.862397 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"354f29997baa583b6238f7de9108ee10\" (UID: \"354f29997baa583b6238f7de9108ee10\") "
Mar 12 18:14:57.862613 master-0 kubenswrapper[7337]: I0312 18:14:57.862464 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs" (OuterVolumeSpecName: "certs") pod "354f29997baa583b6238f7de9108ee10" (UID: "354f29997baa583b6238f7de9108ee10"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:14:57.862613 master-0 kubenswrapper[7337]: I0312 18:14:57.862593 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir" (OuterVolumeSpecName: "data-dir") pod "354f29997baa583b6238f7de9108ee10" (UID: "354f29997baa583b6238f7de9108ee10"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:14:57.863111 master-0 kubenswrapper[7337]: I0312 18:14:57.863073 7337 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") on node \"master-0\" DevicePath \"\""
Mar 12 18:14:57.863111 master-0 kubenswrapper[7337]: I0312 18:14:57.863100 7337 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 18:14:58.127234 master-0 kubenswrapper[7337]: I0312 18:14:58.127167 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_38785e6e-3052-405c-8874-4f295985def5/installer/0.log"
Mar 12 18:14:58.127717 master-0 kubenswrapper[7337]: I0312 18:14:58.127329 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"38785e6e-3052-405c-8874-4f295985def5","Type":"ContainerDied","Data":"d2719f778fae8a1410c74e71ed0769412f583c56fba7c4dc342221e161dce0bd"}
Mar 12 18:14:58.127717 master-0 kubenswrapper[7337]: I0312 18:14:58.127370 7337 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2719f778fae8a1410c74e71ed0769412f583c56fba7c4dc342221e161dce0bd"
Mar 12 18:14:58.127717 master-0 kubenswrapper[7337]: I0312 18:14:58.127414 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 18:14:58.130769 master-0 kubenswrapper[7337]: I0312 18:14:58.130707 7337 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="842896c0759c15623b7ea51af59e9868a9fd1b13d33758d9aef5d9eaaeb4b330" exitCode=0
Mar 12 18:14:58.130920 master-0 kubenswrapper[7337]: I0312 18:14:58.130826 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"842896c0759c15623b7ea51af59e9868a9fd1b13d33758d9aef5d9eaaeb4b330"}
Mar 12 18:14:58.138358 master-0 kubenswrapper[7337]: I0312 18:14:58.134678 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_354f29997baa583b6238f7de9108ee10/etcdctl/0.log"
Mar 12 18:14:58.138358 master-0 kubenswrapper[7337]: I0312 18:14:58.134751 7337 generic.go:334] "Generic (PLEG): container finished" podID="354f29997baa583b6238f7de9108ee10" containerID="b928346cccd3250ecd2b93ee414bcede71a1e719c241a242b6ebcdc2c000ed85" exitCode=137
Mar 12 18:14:58.138358 master-0 kubenswrapper[7337]: I0312 18:14:58.134816 7337 scope.go:117] "RemoveContainer" containerID="5528dd5a4e01f059b975dba420b1e42f7d38d29b15865763037c06d90dade8e7"
Mar 12 18:14:58.138358 master-0 kubenswrapper[7337]: I0312 18:14:58.134997 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 18:14:58.164221 master-0 kubenswrapper[7337]: I0312 18:14:58.164139 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6jhwp"
Mar 12 18:14:58.164445 master-0 kubenswrapper[7337]: I0312 18:14:58.164233 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6jhwp"
Mar 12 18:14:58.174036 master-0 kubenswrapper[7337]: I0312 18:14:58.173950 7337 scope.go:117] "RemoveContainer" containerID="b928346cccd3250ecd2b93ee414bcede71a1e719c241a242b6ebcdc2c000ed85"
Mar 12 18:14:58.219784 master-0 kubenswrapper[7337]: I0312 18:14:58.219697 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6jhwp"
Mar 12 18:14:58.248317 master-0 kubenswrapper[7337]: I0312 18:14:58.248255 7337 scope.go:117] "RemoveContainer" containerID="5528dd5a4e01f059b975dba420b1e42f7d38d29b15865763037c06d90dade8e7"
Mar 12 18:14:58.249087 master-0 kubenswrapper[7337]: E0312 18:14:58.249015 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5528dd5a4e01f059b975dba420b1e42f7d38d29b15865763037c06d90dade8e7\": container with ID starting with 5528dd5a4e01f059b975dba420b1e42f7d38d29b15865763037c06d90dade8e7 not found: ID does not exist" containerID="5528dd5a4e01f059b975dba420b1e42f7d38d29b15865763037c06d90dade8e7"
Mar 12 18:14:58.249242 master-0 kubenswrapper[7337]: I0312 18:14:58.249080 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5528dd5a4e01f059b975dba420b1e42f7d38d29b15865763037c06d90dade8e7"} err="failed to get container status \"5528dd5a4e01f059b975dba420b1e42f7d38d29b15865763037c06d90dade8e7\": rpc error: code = NotFound desc = could not find container \"5528dd5a4e01f059b975dba420b1e42f7d38d29b15865763037c06d90dade8e7\": container with ID starting with 5528dd5a4e01f059b975dba420b1e42f7d38d29b15865763037c06d90dade8e7 not found: ID does not exist"
Mar 12 18:14:58.249242 master-0 kubenswrapper[7337]: I0312 18:14:58.249132 7337 scope.go:117] "RemoveContainer" containerID="b928346cccd3250ecd2b93ee414bcede71a1e719c241a242b6ebcdc2c000ed85"
Mar 12 18:14:58.249834 master-0 kubenswrapper[7337]: E0312 18:14:58.249748 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b928346cccd3250ecd2b93ee414bcede71a1e719c241a242b6ebcdc2c000ed85\": container with ID starting with b928346cccd3250ecd2b93ee414bcede71a1e719c241a242b6ebcdc2c000ed85 not found: ID does not exist" containerID="b928346cccd3250ecd2b93ee414bcede71a1e719c241a242b6ebcdc2c000ed85"
Mar 12 18:14:58.249956 master-0 kubenswrapper[7337]: I0312 18:14:58.249856 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b928346cccd3250ecd2b93ee414bcede71a1e719c241a242b6ebcdc2c000ed85"} err="failed to get container status \"b928346cccd3250ecd2b93ee414bcede71a1e719c241a242b6ebcdc2c000ed85\": rpc error: code = NotFound desc = could not find container \"b928346cccd3250ecd2b93ee414bcede71a1e719c241a242b6ebcdc2c000ed85\": container with ID starting with b928346cccd3250ecd2b93ee414bcede71a1e719c241a242b6ebcdc2c000ed85 not found: ID does not exist"
Mar 12 18:14:59.145391 master-0 kubenswrapper[7337]: I0312 18:14:59.145304 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-vksss_b6d288e3-8e73-44d2-874d-64c6c98dd991/network-operator/1.log"
Mar 12 18:14:59.146187 master-0 kubenswrapper[7337]: I0312 18:14:59.145833 7337 generic.go:334] "Generic (PLEG): container finished" podID="b6d288e3-8e73-44d2-874d-64c6c98dd991" containerID="6c849a8a5d45f345d2ba87802bb0e5ab5b7b8e621fde5662c4d921f30c496726" exitCode=255
Mar 12 18:14:59.146187 master-0 kubenswrapper[7337]: I0312 18:14:59.145885 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" event={"ID":"b6d288e3-8e73-44d2-874d-64c6c98dd991","Type":"ContainerDied","Data":"6c849a8a5d45f345d2ba87802bb0e5ab5b7b8e621fde5662c4d921f30c496726"}
Mar 12 18:14:59.146187 master-0 kubenswrapper[7337]: I0312 18:14:59.145977 7337 scope.go:117] "RemoveContainer" containerID="419df3ddca2a5c92855e29992407a4f8d75d516e7e813a5cad7b23a3a032ee64"
Mar 12 18:14:59.146771 master-0 kubenswrapper[7337]: I0312 18:14:59.146716 7337 scope.go:117] "RemoveContainer" containerID="6c849a8a5d45f345d2ba87802bb0e5ab5b7b8e621fde5662c4d921f30c496726"
Mar 12 18:14:59.147073 master-0 kubenswrapper[7337]: E0312 18:14:59.147025 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=network-operator pod=network-operator-7c649bf6d4-vksss_openshift-network-operator(b6d288e3-8e73-44d2-874d-64c6c98dd991)\"" pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" podUID="b6d288e3-8e73-44d2-874d-64c6c98dd991"
Mar 12 18:14:59.214550 master-0 kubenswrapper[7337]: I0312 18:14:59.214466 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6jhwp"
Mar 12 18:14:59.733916 master-0 kubenswrapper[7337]: I0312 18:14:59.733724 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354f29997baa583b6238f7de9108ee10" path="/var/lib/kubelet/pods/354f29997baa583b6238f7de9108ee10/volumes"
Mar 12 18:14:59.734465 master-0 kubenswrapper[7337]: I0312 18:14:59.734405 7337 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 12 18:15:00.157217 master-0 kubenswrapper[7337]: I0312 18:15:00.157112 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-vksss_b6d288e3-8e73-44d2-874d-64c6c98dd991/network-operator/1.log"
Mar 12 18:15:01.696001 master-0 kubenswrapper[7337]: I0312 18:15:01.695894 7337 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/installer-2-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8121ea-f6e9-4232-9837-78b278a8cf54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T18:14:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T18:14:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T18:14:27Z\\\",\\\"message\\\":\\\"containers with unready status: [installer]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T18:14:27Z\\\",\\\"message\\\":\\\"containers with unready status: [installer]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T18:14:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"installer\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/\\\",\\\"name\\\":\\\"kubelet-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lock\\\",\\\"name\\\":\\\"var-lock\\\"}]}],\\\"hostIP\\\":\\\"192.168.32.10\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.32.10\\\"}],\\\"startTime\\\":\\\"2026-03-12T18:14:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"installer-2-master-0\": Timeout: request did not complete within requested timeout - context deadline exceeded"
Mar 12 18:15:01.723502 master-0 kubenswrapper[7337]: E0312 18:15:01.723330 7337 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c2aab1cc6f0f0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:14:27.687706864 +0000 UTC m=+68.156307831,LastTimestamp:2026-03-12 18:14:27.687706864 +0000 UTC m=+68.156307831,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:15:01.742925 master-0 kubenswrapper[7337]: E0312 18:15:01.742894 7337 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 12 18:15:01.743142 master-0 kubenswrapper[7337]: E0312 18:15:01.743130 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ec8121ea-f6e9-4232-9837-78b278a8cf54-kube-api-access podName:ec8121ea-f6e9-4232-9837-78b278a8cf54 nodeName:}" failed. No retries permitted until 2026-03-12 18:15:02.243112589 +0000 UTC m=+102.711713536 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ec8121ea-f6e9-4232-9837-78b278a8cf54-kube-api-access") pod "installer-2-master-0" (UID: "ec8121ea-f6e9-4232-9837-78b278a8cf54") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 12 18:15:01.754544 master-0 kubenswrapper[7337]: E0312 18:15:01.754464 7337 projected.go:194] Error preparing data for projected volume kube-api-access-bdc26 for pod openshift-marketplace/redhat-marketplace-ggkqg: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 12 18:15:01.754816 master-0 kubenswrapper[7337]: E0312 18:15:01.754625 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0cc54e47-af53-448a-b1c9-043710890a32-kube-api-access-bdc26 podName:0cc54e47-af53-448a-b1c9-043710890a32 nodeName:}" failed. No retries permitted until 2026-03-12 18:15:02.254591839 +0000 UTC m=+102.723192826 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bdc26" (UniqueName: "kubernetes.io/projected/0cc54e47-af53-448a-b1c9-043710890a32-kube-api-access-bdc26") pod "redhat-marketplace-ggkqg" (UID: "0cc54e47-af53-448a-b1c9-043710890a32") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 12 18:15:01.911389 master-0 kubenswrapper[7337]: E0312 18:15:01.911321 7337 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:15:01.949571 master-0 kubenswrapper[7337]: E0312 18:15:01.949429 7337 projected.go:194] Error preparing data for projected volume kube-api-access-gmsnk for pod openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 12 18:15:01.949571 master-0 kubenswrapper[7337]: E0312 18:15:01.949535 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34cbf061-4c76-476e-bed9-0a133c744862-kube-api-access-gmsnk podName:34cbf061-4c76-476e-bed9-0a133c744862 nodeName:}" failed. No retries permitted until 2026-03-12 18:15:02.449486914 +0000 UTC m=+102.918087861 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gmsnk" (UniqueName: "kubernetes.io/projected/34cbf061-4c76-476e-bed9-0a133c744862-kube-api-access-gmsnk") pod "control-plane-machine-set-operator-6686554ddc-zd9gm" (UID: "34cbf061-4c76-476e-bed9-0a133c744862") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 12 18:15:02.104496 master-0 kubenswrapper[7337]: E0312 18:15:02.104426 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:15:02.167992 master-0 kubenswrapper[7337]: I0312 18:15:02.167921 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_7542f3f1-23fe-41df-99b9-4324c75d35b7/installer/0.log"
Mar 12 18:15:02.167992 master-0 kubenswrapper[7337]: I0312 18:15:02.167969 7337 generic.go:334] "Generic (PLEG): container finished" podID="7542f3f1-23fe-41df-99b9-4324c75d35b7" containerID="e3aea0a79706e5d2ced89ea30c6dab8e3469fe22291b915ce855f44fa68a87b6" exitCode=1
Mar 12 18:15:02.316925 master-0 kubenswrapper[7337]: I0312 18:15:02.316828 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdc26\" (UniqueName: \"kubernetes.io/projected/0cc54e47-af53-448a-b1c9-043710890a32-kube-api-access-bdc26\") pod \"redhat-marketplace-ggkqg\" (UID: \"0cc54e47-af53-448a-b1c9-043710890a32\") " pod="openshift-marketplace/redhat-marketplace-ggkqg"
Mar 12 18:15:02.316925 master-0 kubenswrapper[7337]: I0312 18:15:02.316900 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec8121ea-f6e9-4232-9837-78b278a8cf54-kube-api-access\") pod \"installer-2-master-0\" (UID: \"ec8121ea-f6e9-4232-9837-78b278a8cf54\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 12 18:15:02.519963 master-0 kubenswrapper[7337]: I0312 18:15:02.519894 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmsnk\" (UniqueName: \"kubernetes.io/projected/34cbf061-4c76-476e-bed9-0a133c744862-kube-api-access-gmsnk\") pod \"control-plane-machine-set-operator-6686554ddc-zd9gm\" (UID: \"34cbf061-4c76-476e-bed9-0a133c744862\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm"
Mar 12 18:15:03.177109 master-0 kubenswrapper[7337]: I0312 18:15:03.176899 7337 generic.go:334] "Generic (PLEG): container finished" podID="055f5c67-f512-4510-99c5-e194944b0599" containerID="fce4a972222f063110d34772de7116adb2483b3e9c195060fc1414ecf2cd9f6c" exitCode=0
Mar 12 18:15:03.747406 master-0 kubenswrapper[7337]: I0312 18:15:03.747294 7337 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:15:11.139550 master-0 kubenswrapper[7337]: E0312 18:15:11.139484 7337 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Mar 12 18:15:11.912236 master-0 kubenswrapper[7337]: E0312 18:15:11.911866 7337 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:15:12.104805 master-0 kubenswrapper[7337]: E0312 18:15:12.104738 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:15:12.234758 master-0 kubenswrapper[7337]: I0312 18:15:12.234679 7337 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="6396a294b799f6df8bf8204dbbb2ddb526c388ee810b3a7185e4216792bec332" exitCode=0
Mar 12 18:15:13.243281 master-0 kubenswrapper[7337]: I0312 18:15:13.243213 7337 generic.go:334] "Generic (PLEG): container finished" podID="e697746f-fb9e-4d10-ab61-33c68e62cc0d" containerID="2a197e2fe83ed2e384dda0d8770ef6e8d98b56d89ae78066b100f526847a5d4c" exitCode=0
Mar 12 18:15:13.748322 master-0 kubenswrapper[7337]: I0312 18:15:13.748231 7337 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:15:14.659610 master-0 kubenswrapper[7337]: I0312 18:15:14.659500 7337 patch_prober.go:28] interesting pod/etcd-operator-5884b9cd56-bfq7b container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body=
Mar 12 18:15:14.660243 master-0 kubenswrapper[7337]: I0312 18:15:14.659661 7337 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" podUID="e697746f-fb9e-4d10-ab61-33c68e62cc0d" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection
refused" Mar 12 18:15:18.283315 master-0 kubenswrapper[7337]: I0312 18:15:18.283261 7337 generic.go:334] "Generic (PLEG): container finished" podID="a1e2340b-ebca-40de-b1e0-8133999cd860" containerID="fff98590531dfb71359f592b09852a158d9cf8cc7fff20e92644173e6e6819dc" exitCode=0 Mar 12 18:15:18.285086 master-0 kubenswrapper[7337]: I0312 18:15:18.285037 7337 generic.go:334] "Generic (PLEG): container finished" podID="236f2886-bb69-49a7-9471-36454fd1cbd3" containerID="6ae7a934b8aa2f254b8b82bbc367d7391db11d303ac3c55852c1da10c3f95301" exitCode=0 Mar 12 18:15:21.912387 master-0 kubenswrapper[7337]: E0312 18:15:21.912278 7337 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 18:15:21.912387 master-0 kubenswrapper[7337]: I0312 18:15:21.912376 7337 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 12 18:15:22.106095 master-0 kubenswrapper[7337]: E0312 18:15:22.106038 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes master-0)" Mar 12 18:15:22.106095 master-0 kubenswrapper[7337]: E0312 18:15:22.106072 7337 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 18:15:22.982479 master-0 kubenswrapper[7337]: E0312 18:15:22.982356 7337 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/53c3e9b4b8cca5096418831f916cb1cb0c57b6fb8aae97e2652c5c5a1bbcab4a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/53c3e9b4b8cca5096418831f916cb1cb0c57b6fb8aae97e2652c5c5a1bbcab4a/diff: no such 
file or directory, extraDiskErr: Mar 12 18:15:23.316976 master-0 kubenswrapper[7337]: I0312 18:15:23.316919 7337 generic.go:334] "Generic (PLEG): container finished" podID="ab926874-9722-4e65-9084-27b2f9915450" containerID="f47fabdc4bdd8a3562bf6c4bb328b7b2603314ba7c3e007528769af4852f929f" exitCode=0 Mar 12 18:15:27.343644 master-0 kubenswrapper[7337]: I0312 18:15:27.343406 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-hqrqt_8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe/approver/0.log" Mar 12 18:15:27.344412 master-0 kubenswrapper[7337]: I0312 18:15:27.344264 7337 generic.go:334] "Generic (PLEG): container finished" podID="8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe" containerID="0e62c9f4417a5a9e30eb23f06a18c4ab2b7d089c3e060926866187529335e3de" exitCode=1 Mar 12 18:15:28.349868 master-0 kubenswrapper[7337]: I0312 18:15:28.349801 7337 generic.go:334] "Generic (PLEG): container finished" podID="d4ae1240-e04e-48e9-88df-9f1a53508da7" containerID="336f9bff957643e2b1614f5b9ab58d3286fac81af162d3e42ef2ab143bd1a53e" exitCode=0 Mar 12 18:15:31.912852 master-0 kubenswrapper[7337]: E0312 18:15:31.912718 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Mar 12 18:15:33.379501 master-0 kubenswrapper[7337]: I0312 18:15:33.379398 7337 generic.go:334] "Generic (PLEG): container finished" podID="e720e1d0-5a6d-4b76-8b25-5963e24950f5" containerID="6cbf8532a0aab6166e00e40dafe24b7c97f2d79bb9206285a901edb45142b490" exitCode=0 Mar 12 18:15:33.737538 master-0 kubenswrapper[7337]: E0312 18:15:33.737389 7337 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" 
Mar 12 18:15:33.738061 master-0 kubenswrapper[7337]: E0312 18:15:33.738032 7337 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.016s" Mar 12 18:15:33.738238 master-0 kubenswrapper[7337]: I0312 18:15:33.738218 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nmmwm" Mar 12 18:15:33.748397 master-0 kubenswrapper[7337]: I0312 18:15:33.748331 7337 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 12 18:15:35.725522 master-0 kubenswrapper[7337]: E0312 18:15:35.725407 7337 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{community-operators-nmmwm.189c2aab295cd16c openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-nmmwm,UID:4519000b-e475-4c26-a1c0-bf05cd9c242b,APIVersion:v1,ResourceVersion:8630,FieldPath:spec.initContainers{extract-utilities},},Reason:Created,Message:Created container: extract-utilities,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:14:27.898855788 +0000 UTC m=+68.367456735,LastTimestamp:2026-03-12 18:14:27.898855788 +0000 UTC m=+68.367456735,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:15:36.319774 master-0 kubenswrapper[7337]: E0312 18:15:36.319717 7337 projected.go:194] Error preparing data for projected volume kube-api-access-bdc26 for pod openshift-marketplace/redhat-marketplace-ggkqg: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:15:36.319966 master-0 kubenswrapper[7337]: E0312 18:15:36.319799 7337 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0cc54e47-af53-448a-b1c9-043710890a32-kube-api-access-bdc26 podName:0cc54e47-af53-448a-b1c9-043710890a32 nodeName:}" failed. No retries permitted until 2026-03-12 18:15:37.319781418 +0000 UTC m=+137.788382365 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-bdc26" (UniqueName: "kubernetes.io/projected/0cc54e47-af53-448a-b1c9-043710890a32-kube-api-access-bdc26") pod "redhat-marketplace-ggkqg" (UID: "0cc54e47-af53-448a-b1c9-043710890a32") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:15:36.319966 master-0 kubenswrapper[7337]: E0312 18:15:36.319807 7337 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:15:36.319966 master-0 kubenswrapper[7337]: E0312 18:15:36.319937 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ec8121ea-f6e9-4232-9837-78b278a8cf54-kube-api-access podName:ec8121ea-f6e9-4232-9837-78b278a8cf54 nodeName:}" failed. No retries permitted until 2026-03-12 18:15:37.319898361 +0000 UTC m=+137.788499388 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ec8121ea-f6e9-4232-9837-78b278a8cf54-kube-api-access") pod "installer-2-master-0" (UID: "ec8121ea-f6e9-4232-9837-78b278a8cf54") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:15:36.523324 master-0 kubenswrapper[7337]: E0312 18:15:36.523256 7337 projected.go:194] Error preparing data for projected volume kube-api-access-gmsnk for pod openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:15:36.523583 master-0 kubenswrapper[7337]: E0312 18:15:36.523366 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34cbf061-4c76-476e-bed9-0a133c744862-kube-api-access-gmsnk podName:34cbf061-4c76-476e-bed9-0a133c744862 nodeName:}" failed. No retries permitted until 2026-03-12 18:15:37.523337641 +0000 UTC m=+137.991938618 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gmsnk" (UniqueName: "kubernetes.io/projected/34cbf061-4c76-476e-bed9-0a133c744862-kube-api-access-gmsnk") pod "control-plane-machine-set-operator-6686554ddc-zd9gm" (UID: "34cbf061-4c76-476e-bed9-0a133c744862") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:15:37.359845 master-0 kubenswrapper[7337]: I0312 18:15:37.359744 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdc26\" (UniqueName: \"kubernetes.io/projected/0cc54e47-af53-448a-b1c9-043710890a32-kube-api-access-bdc26\") pod \"redhat-marketplace-ggkqg\" (UID: \"0cc54e47-af53-448a-b1c9-043710890a32\") " pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:15:37.359845 master-0 kubenswrapper[7337]: I0312 18:15:37.359848 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec8121ea-f6e9-4232-9837-78b278a8cf54-kube-api-access\") pod \"installer-2-master-0\" (UID: \"ec8121ea-f6e9-4232-9837-78b278a8cf54\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 18:15:37.562749 master-0 kubenswrapper[7337]: I0312 18:15:37.562676 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmsnk\" (UniqueName: \"kubernetes.io/projected/34cbf061-4c76-476e-bed9-0a133c744862-kube-api-access-gmsnk\") pod \"control-plane-machine-set-operator-6686554ddc-zd9gm\" (UID: \"34cbf061-4c76-476e-bed9-0a133c744862\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm" Mar 12 18:15:42.113413 master-0 kubenswrapper[7337]: E0312 18:15:42.113302 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 12 18:15:42.490605 master-0 kubenswrapper[7337]: E0312 18:15:42.490295 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:15:32Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:15:32Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:15:32Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:15:32Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:24fdf6755aec2ff108ceee2e24eee87c6953140e4325a59c8d1ddbf1dca41828\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ea25f58b1e485b176739a042e02cb509306918451cb4ee862117f0d0892ea2c1\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1739033560},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1fce8b5c6b0206ecb4ddc7de47062bed853b88d4e34415e9e5a2a6bc99cf6aad\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:8bd0ffcb6caac4a5d03346b5f7cdfaf2f6f9f9d0a30deff8f216e6cb63b0ee75\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1282704097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:8
98c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:896dc34712ba5eb2d9daa6e77a55cb67501435f8f108cc4e5eda3ece5212c2b0\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ef4e5c2f6262e5a68f785f148c9da79aefa72e2e20a365276bc996658ce6689c\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221806801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93
beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\
\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9\\\"],\\\"sizeBytes\\\":456575686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626\\\"],\\\"sizeBytes\\\":448828105},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\"],\\\"sizeBytes\\\":443271011}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 18:15:45.447285 master-0 kubenswrapper[7337]: I0312 18:15:45.447165 7337 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="e36200a118a6b9e121bed30c8ddbc2aa00c9c281d2b93d4ca81dd8623b5aea84" exitCode=1 Mar 12 18:15:52.491407 master-0 kubenswrapper[7337]: E0312 18:15:52.491341 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 18:15:52.514931 master-0 kubenswrapper[7337]: E0312 18:15:52.514857 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 12 18:16:01.697664 master-0 kubenswrapper[7337]: I0312 18:16:01.697599 7337 status_manager.go:851] "Failed to get status for pod" podUID="b6d288e3-8e73-44d2-874d-64c6c98dd991" pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods network-operator-7c649bf6d4-vksss)" Mar 12 18:16:02.491749 master-0 kubenswrapper[7337]: E0312 18:16:02.491688 7337 kubelet_node_status.go:585] "Error updating node status, will 
retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 18:16:03.316168 master-0 kubenswrapper[7337]: E0312 18:16:03.316091 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 12 18:16:07.750997 master-0 kubenswrapper[7337]: E0312 18:16:07.750934 7337 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 12 18:16:07.751603 master-0 kubenswrapper[7337]: E0312 18:16:07.751140 7337 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.013s" Mar 12 18:16:07.759018 master-0 kubenswrapper[7337]: I0312 18:16:07.758979 7337 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 12 18:16:09.729050 master-0 kubenswrapper[7337]: E0312 18:16:09.728859 7337 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{community-operators-nmmwm.189c2aab2a4596b4 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-nmmwm,UID:4519000b-e475-4c26-a1c0-bf05cd9c242b,APIVersion:v1,ResourceVersion:8630,FieldPath:spec.initContainers{extract-utilities},},Reason:Started,Message:Started container extract-utilities,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:14:27.914110644 +0000 UTC 
m=+68.382711591,LastTimestamp:2026-03-12 18:14:27.914110644 +0000 UTC m=+68.382711591,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:16:11.361868 master-0 kubenswrapper[7337]: E0312 18:16:11.361825 7337 projected.go:194] Error preparing data for projected volume kube-api-access-bdc26 for pod openshift-marketplace/redhat-marketplace-ggkqg: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:16:11.362597 master-0 kubenswrapper[7337]: E0312 18:16:11.361919 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0cc54e47-af53-448a-b1c9-043710890a32-kube-api-access-bdc26 podName:0cc54e47-af53-448a-b1c9-043710890a32 nodeName:}" failed. No retries permitted until 2026-03-12 18:16:13.361895282 +0000 UTC m=+173.830496229 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bdc26" (UniqueName: "kubernetes.io/projected/0cc54e47-af53-448a-b1c9-043710890a32-kube-api-access-bdc26") pod "redhat-marketplace-ggkqg" (UID: "0cc54e47-af53-448a-b1c9-043710890a32") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:16:11.375615 master-0 kubenswrapper[7337]: E0312 18:16:11.375561 7337 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:16:11.375745 master-0 kubenswrapper[7337]: E0312 18:16:11.375646 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ec8121ea-f6e9-4232-9837-78b278a8cf54-kube-api-access podName:ec8121ea-f6e9-4232-9837-78b278a8cf54 nodeName:}" failed. 
No retries permitted until 2026-03-12 18:16:13.375624209 +0000 UTC m=+173.844225196 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ec8121ea-f6e9-4232-9837-78b278a8cf54-kube-api-access") pod "installer-2-master-0" (UID: "ec8121ea-f6e9-4232-9837-78b278a8cf54") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:16:11.565834 master-0 kubenswrapper[7337]: E0312 18:16:11.565772 7337 projected.go:194] Error preparing data for projected volume kube-api-access-gmsnk for pod openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:16:11.566315 master-0 kubenswrapper[7337]: E0312 18:16:11.566277 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34cbf061-4c76-476e-bed9-0a133c744862-kube-api-access-gmsnk podName:34cbf061-4c76-476e-bed9-0a133c744862 nodeName:}" failed. No retries permitted until 2026-03-12 18:16:13.566227834 +0000 UTC m=+174.034828901 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gmsnk" (UniqueName: "kubernetes.io/projected/34cbf061-4c76-476e-bed9-0a133c744862-kube-api-access-gmsnk") pod "control-plane-machine-set-operator-6686554ddc-zd9gm" (UID: "34cbf061-4c76-476e-bed9-0a133c744862") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:16:12.492584 master-0 kubenswrapper[7337]: E0312 18:16:12.492428 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 18:16:13.383635 master-0 kubenswrapper[7337]: I0312 18:16:13.383570 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdc26\" (UniqueName: \"kubernetes.io/projected/0cc54e47-af53-448a-b1c9-043710890a32-kube-api-access-bdc26\") pod \"redhat-marketplace-ggkqg\" (UID: \"0cc54e47-af53-448a-b1c9-043710890a32\") " pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:16:13.383635 master-0 kubenswrapper[7337]: I0312 18:16:13.383620 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec8121ea-f6e9-4232-9837-78b278a8cf54-kube-api-access\") pod \"installer-2-master-0\" (UID: \"ec8121ea-f6e9-4232-9837-78b278a8cf54\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 18:16:13.587010 master-0 kubenswrapper[7337]: I0312 18:16:13.586301 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmsnk\" (UniqueName: \"kubernetes.io/projected/34cbf061-4c76-476e-bed9-0a133c744862-kube-api-access-gmsnk\") pod \"control-plane-machine-set-operator-6686554ddc-zd9gm\" (UID: \"34cbf061-4c76-476e-bed9-0a133c744862\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm"
Mar 12 18:16:14.658530 master-0 kubenswrapper[7337]: I0312 18:16:14.658474 7337 patch_prober.go:28] interesting pod/etcd-operator-5884b9cd56-bfq7b container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body=
Mar 12 18:16:14.658954 master-0 kubenswrapper[7337]: I0312 18:16:14.658559 7337 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" podUID="e697746f-fb9e-4d10-ab61-33c68e62cc0d" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused"
Mar 12 18:16:14.918084 master-0 kubenswrapper[7337]: E0312 18:16:14.917964 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Mar 12 18:16:16.183857 master-0 kubenswrapper[7337]: I0312 18:16:15.523123 7337 generic.go:334] "Generic (PLEG): container finished" podID="4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64" containerID="91ac1142b73c6d3658240c3848ad3ec4d35a6a2c1e366a3eec630ba38825ae3c" exitCode=0
Mar 12 18:16:16.529929 master-0 kubenswrapper[7337]: I0312 18:16:16.529737 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-m6hsp_223a548b-a3ad-40dd-82de-e3dbb7f3e4fa/openshift-controller-manager-operator/1.log"
Mar 12 18:16:16.530718 master-0 kubenswrapper[7337]: I0312 18:16:16.530675 7337 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-m6hsp_223a548b-a3ad-40dd-82de-e3dbb7f3e4fa/openshift-controller-manager-operator/0.log"
Mar 12 18:16:16.530837 master-0 kubenswrapper[7337]: I0312 18:16:16.530722 7337 generic.go:334] "Generic (PLEG): container finished" podID="223a548b-a3ad-40dd-82de-e3dbb7f3e4fa" containerID="3e81068034bf9c9fbfc0dcacd5d8ed6f99d4b966db54edfeaa5ae37af6e0a1a5" exitCode=255
Mar 12 18:16:18.545814 master-0 kubenswrapper[7337]: I0312 18:16:18.545736 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4527l_d94dc349-c5cb-4f12-8e48-867030af4981/ingress-operator/0.log"
Mar 12 18:16:18.545814 master-0 kubenswrapper[7337]: I0312 18:16:18.545808 7337 generic.go:334] "Generic (PLEG): container finished" podID="d94dc349-c5cb-4f12-8e48-867030af4981" containerID="fd2b6be186aaa869f9c5743426ef2bc5d49bada1c5fa7a307e7f55efa78a7bbf" exitCode=1
Mar 12 18:16:22.493709 master-0 kubenswrapper[7337]: E0312 18:16:22.493629 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:16:22.493709 master-0 kubenswrapper[7337]: E0312 18:16:22.493690 7337 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 12 18:16:22.770561 master-0 kubenswrapper[7337]: I0312 18:16:22.770177 7337 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-clkx5 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.8:8080/healthz\": dial tcp 10.128.0.8:8080: connect: connection refused" start-of-body=
Mar 12 18:16:22.770561 master-0 kubenswrapper[7337]: I0312 18:16:22.770253 7337 prober.go:107] "Probe failed"
probeType="Liveness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" podUID="4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.8:8080/healthz\": dial tcp 10.128.0.8:8080: connect: connection refused"
Mar 12 18:16:22.770561 master-0 kubenswrapper[7337]: I0312 18:16:22.770361 7337 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-clkx5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.8:8080/healthz\": dial tcp 10.128.0.8:8080: connect: connection refused" start-of-body=
Mar 12 18:16:22.770561 master-0 kubenswrapper[7337]: I0312 18:16:22.770440 7337 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" podUID="4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.8:8080/healthz\": dial tcp 10.128.0.8:8080: connect: connection refused"
Mar 12 18:16:28.120030 master-0 kubenswrapper[7337]: E0312 18:16:28.119867 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Mar 12 18:16:30.638634 master-0 kubenswrapper[7337]: E0312 18:16:30.638535 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="ec8121ea-f6e9-4232-9837-78b278a8cf54"
Mar 12 18:16:30.701390 master-0 kubenswrapper[7337]: E0312 18:16:30.701261 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-bdc26], unattached
volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-marketplace/redhat-marketplace-ggkqg" podUID="0cc54e47-af53-448a-b1c9-043710890a32"
Mar 12 18:16:30.767179 master-0 kubenswrapper[7337]: E0312 18:16:30.767050 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-gmsnk], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm" podUID="34cbf061-4c76-476e-bed9-0a133c744862"
Mar 12 18:16:31.628909 master-0 kubenswrapper[7337]: I0312 18:16:31.628810 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm"
Mar 12 18:16:31.629344 master-0 kubenswrapper[7337]: I0312 18:16:31.628910 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 12 18:16:31.629344 master-0 kubenswrapper[7337]: I0312 18:16:31.628949 7337 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ggkqg"
Mar 12 18:16:32.770210 master-0 kubenswrapper[7337]: I0312 18:16:32.770123 7337 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-clkx5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.8:8080/healthz\": dial tcp 10.128.0.8:8080: connect: connection refused" start-of-body=
Mar 12 18:16:32.771213 master-0 kubenswrapper[7337]: I0312 18:16:32.770249 7337 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" podUID="4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.8:8080/healthz\": dial tcp 10.128.0.8:8080: connect: connection refused"
Mar 12 18:16:32.771213 master-0 kubenswrapper[7337]: I0312 18:16:32.770465 7337 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-clkx5 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.8:8080/healthz\": dial tcp 10.128.0.8:8080: connect: connection refused" start-of-body=
Mar 12 18:16:32.771213 master-0 kubenswrapper[7337]: I0312 18:16:32.770597 7337 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" podUID="4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.8:8080/healthz\": dial tcp 10.128.0.8:8080: connect: connection refused"
Mar 12 18:16:33.651070 master-0 kubenswrapper[7337]: I0312 18:16:33.650901 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-2ltx9_bce831df-c604-4608-a24e-b14d62c5287a/snapshot-controller/0.log"
Mar 12 18:16:33.651070 master-0 kubenswrapper[7337]: I0312 18:16:33.650986 7337 generic.go:334]
"Generic (PLEG): container finished" podID="bce831df-c604-4608-a24e-b14d62c5287a" containerID="e722a80d2945b3420fc5caeacc6065896e4511dd6a2605aa37b3ca081a3ef049" exitCode=1
Mar 12 18:16:35.664802 master-0 kubenswrapper[7337]: I0312 18:16:35.664725 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-9nzsn_b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652/manager/0.log"
Mar 12 18:16:35.664802 master-0 kubenswrapper[7337]: I0312 18:16:35.664773 7337 generic.go:334] "Generic (PLEG): container finished" podID="b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652" containerID="a15650ff0279cc1eb053cd0564e886ecaf1299636ec1285faa1562a29a442c43" exitCode=1
Mar 12 18:16:35.666641 master-0 kubenswrapper[7337]: I0312 18:16:35.666608 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-mb6tc_d1b3859c-20a1-4a1c-8508-86ed843768f5/manager/0.log"
Mar 12 18:16:35.667307 master-0 kubenswrapper[7337]: I0312 18:16:35.667232 7337 generic.go:334] "Generic (PLEG): container finished" podID="d1b3859c-20a1-4a1c-8508-86ed843768f5" containerID="736a8404a1683d56f8dbc8f71de47cc325d858c0409febcb5d511b27a322ce13" exitCode=1
Mar 12 18:16:40.666225 master-0 kubenswrapper[7337]: I0312 18:16:40.666124 7337 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-mb6tc container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused" start-of-body=
Mar 12 18:16:40.666993 master-0 kubenswrapper[7337]: I0312 18:16:40.666256 7337 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" podUID="d1b3859c-20a1-4a1c-8508-86ed843768f5" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused"
Mar 12 18:16:41.354627 master-0 kubenswrapper[7337]: I0312 18:16:41.354431 7337 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-9nzsn container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body=
Mar 12 18:16:41.354627 master-0 kubenswrapper[7337]: I0312 18:16:41.354601 7337 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" podUID="b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused"
Mar 12 18:16:41.762626 master-0 kubenswrapper[7337]: E0312 18:16:41.762421 7337 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 18:16:41.763759 master-0 kubenswrapper[7337]: E0312 18:16:41.762715 7337 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.012s"
Mar 12 18:16:41.763759 master-0 kubenswrapper[7337]: I0312 18:16:41.763595 7337 scope.go:117] "RemoveContainer" containerID="a15650ff0279cc1eb053cd0564e886ecaf1299636ec1285faa1562a29a442c43"
Mar 12 18:16:41.766438 master-0 kubenswrapper[7337]: I0312 18:16:41.765300 7337 scope.go:117] "RemoveContainer" containerID="e36200a118a6b9e121bed30c8ddbc2aa00c9c281d2b93d4ca81dd8623b5aea84"
Mar 12 18:16:41.766438 master-0 kubenswrapper[7337]: I0312 18:16:41.765474 7337 scope.go:117] "RemoveContainer" containerID="fce4a972222f063110d34772de7116adb2483b3e9c195060fc1414ecf2cd9f6c"
Mar 12 18:16:41.766438 master-0 kubenswrapper[7337]: I0312 18:16:41.765788 7337 scope.go:117] "RemoveContainer"
containerID="fd2b6be186aaa869f9c5743426ef2bc5d49bada1c5fa7a307e7f55efa78a7bbf"
Mar 12 18:16:41.778183 master-0 kubenswrapper[7337]: I0312 18:16:41.777997 7337 scope.go:117] "RemoveContainer" containerID="91ac1142b73c6d3658240c3848ad3ec4d35a6a2c1e366a3eec630ba38825ae3c"
Mar 12 18:16:41.794465 master-0 kubenswrapper[7337]: I0312 18:16:41.793831 7337 scope.go:117] "RemoveContainer" containerID="e722a80d2945b3420fc5caeacc6065896e4511dd6a2605aa37b3ca081a3ef049"
Mar 12 18:16:41.796368 master-0 kubenswrapper[7337]: I0312 18:16:41.796258 7337 scope.go:117] "RemoveContainer" containerID="fff98590531dfb71359f592b09852a158d9cf8cc7fff20e92644173e6e6819dc"
Mar 12 18:16:41.797872 master-0 kubenswrapper[7337]: I0312 18:16:41.797823 7337 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 12 18:16:41.798743 master-0 kubenswrapper[7337]: I0312 18:16:41.798718 7337 scope.go:117] "RemoveContainer" containerID="6cbf8532a0aab6166e00e40dafe24b7c97f2d79bb9206285a901edb45142b490"
Mar 12 18:16:41.799314 master-0 kubenswrapper[7337]: I0312 18:16:41.799232 7337 scope.go:117] "RemoveContainer" containerID="736a8404a1683d56f8dbc8f71de47cc325d858c0409febcb5d511b27a322ce13"
Mar 12 18:16:41.799655 master-0 kubenswrapper[7337]: I0312 18:16:41.799635 7337 scope.go:117] "RemoveContainer" containerID="3e81068034bf9c9fbfc0dcacd5d8ed6f99d4b966db54edfeaa5ae37af6e0a1a5"
Mar 12 18:16:41.800852 master-0 kubenswrapper[7337]: I0312 18:16:41.800834 7337 scope.go:117] "RemoveContainer" containerID="2a197e2fe83ed2e384dda0d8770ef6e8d98b56d89ae78066b100f526847a5d4c"
Mar 12 18:16:41.801210 master-0 kubenswrapper[7337]: I0312 18:16:41.801194 7337 scope.go:117] "RemoveContainer" containerID="336f9bff957643e2b1614f5b9ab58d3286fac81af162d3e42ef2ab143bd1a53e"
Mar 12 18:16:41.801377 master-0 kubenswrapper[7337]: I0312 18:16:41.801341 7337 scope.go:117] "RemoveContainer" containerID="6c849a8a5d45f345d2ba87802bb0e5ab5b7b8e621fde5662c4d921f30c496726"
Mar 12
18:16:41.801572 master-0 kubenswrapper[7337]: I0312 18:16:41.801547 7337 scope.go:117] "RemoveContainer" containerID="0e62c9f4417a5a9e30eb23f06a18c4ab2b7d089c3e060926866187529335e3de"
Mar 12 18:16:41.801639 master-0 kubenswrapper[7337]: I0312 18:16:41.801614 7337 scope.go:117] "RemoveContainer" containerID="f47fabdc4bdd8a3562bf6c4bb328b7b2603314ba7c3e007528769af4852f929f"
Mar 12 18:16:41.802224 master-0 kubenswrapper[7337]: I0312 18:16:41.802153 7337 scope.go:117] "RemoveContainer" containerID="6ae7a934b8aa2f254b8b82bbc367d7391db11d303ac3c55852c1da10c3f95301"
Mar 12 18:16:42.716208 master-0 kubenswrapper[7337]: I0312 18:16:42.716128 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-vksss_b6d288e3-8e73-44d2-874d-64c6c98dd991/network-operator/1.log"
Mar 12 18:16:42.719655 master-0 kubenswrapper[7337]: I0312 18:16:42.719597 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-hqrqt_8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe/approver/0.log"
Mar 12 18:16:42.726966 master-0 kubenswrapper[7337]: I0312 18:16:42.726909 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-9nzsn_b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652/manager/0.log"
Mar 12 18:16:42.738891 master-0 kubenswrapper[7337]: I0312 18:16:42.738824 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-mb6tc_d1b3859c-20a1-4a1c-8508-86ed843768f5/manager/0.log"
Mar 12 18:16:42.747266 master-0 kubenswrapper[7337]: I0312 18:16:42.747215 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-2ltx9_bce831df-c604-4608-a24e-b14d62c5287a/snapshot-controller/0.log"
Mar 12 18:16:42.750153 master-0 kubenswrapper[7337]: I0312 18:16:42.750096 7337 log.go:25] "Finished parsing
log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-m6hsp_223a548b-a3ad-40dd-82de-e3dbb7f3e4fa/openshift-controller-manager-operator/1.log"
Mar 12 18:16:42.751059 master-0 kubenswrapper[7337]: I0312 18:16:42.750999 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-m6hsp_223a548b-a3ad-40dd-82de-e3dbb7f3e4fa/openshift-controller-manager-operator/0.log"
Mar 12 18:16:42.760364 master-0 kubenswrapper[7337]: I0312 18:16:42.760294 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4527l_d94dc349-c5cb-4f12-8e48-867030af4981/ingress-operator/0.log"
Mar 12 18:16:42.832681 master-0 kubenswrapper[7337]: E0312 18:16:42.832400 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:16:32Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:16:32Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:16:32Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:16:32Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:24fdf6755aec2ff108ceee2e24eee87c6953140e4325a59c8d1ddbf1dca41828\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ea25f58b1e485b176739a042e02cb509306918451cb4ee862117f0d0892ea2c1\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1739033560},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9
843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1fce8b5c6b0206ecb4ddc7de47062bed853b88d4e34415e9e5a2a6bc99cf6aad\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:8bd0ffcb6caac4a5d03346b5f7cdfaf2f6f9f9d0a30deff8f216e6cb63b0ee75\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1282704097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:896dc34712ba5eb2d9daa6e77a55cb67501435f8f108cc4e5eda3ece5212c2b0\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ef4e5c2f6262e5a68f785f148c9da79aefa72e2e20a365276bc996658ce6689c\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221806801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21
ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe03210
6efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9\\\"],\\\"sizeBytes\\\":456575686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626\\\"],\\\"sizeBytes\\\":448828105},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\"],\\\"sizeBytes\\\":443271011}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:16:43.110237 master-0 kubenswrapper[7337]: I0312 18:16:43.110168 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_7542f3f1-23fe-41df-99b9-4324c75d35b7/installer/0.log"
Mar 12 18:16:43.110501 master-0 kubenswrapper[7337]: I0312 18:16:43.110304 7337 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 12 18:16:43.233689 master-0 kubenswrapper[7337]: I0312 18:16:43.233610 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7542f3f1-23fe-41df-99b9-4324c75d35b7-var-lock\") pod \"7542f3f1-23fe-41df-99b9-4324c75d35b7\" (UID: \"7542f3f1-23fe-41df-99b9-4324c75d35b7\") "
Mar 12 18:16:43.233909 master-0 kubenswrapper[7337]: I0312 18:16:43.233723 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7542f3f1-23fe-41df-99b9-4324c75d35b7-kube-api-access\") pod \"7542f3f1-23fe-41df-99b9-4324c75d35b7\" (UID: \"7542f3f1-23fe-41df-99b9-4324c75d35b7\") "
Mar 12 18:16:43.233909 master-0 kubenswrapper[7337]: I0312 18:16:43.233767 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7542f3f1-23fe-41df-99b9-4324c75d35b7-var-lock" (OuterVolumeSpecName: "var-lock") pod "7542f3f1-23fe-41df-99b9-4324c75d35b7" (UID: "7542f3f1-23fe-41df-99b9-4324c75d35b7"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:16:43.233909 master-0 kubenswrapper[7337]: I0312 18:16:43.233786 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7542f3f1-23fe-41df-99b9-4324c75d35b7-kubelet-dir\") pod \"7542f3f1-23fe-41df-99b9-4324c75d35b7\" (UID: \"7542f3f1-23fe-41df-99b9-4324c75d35b7\") "
Mar 12 18:16:43.233909 master-0 kubenswrapper[7337]: I0312 18:16:43.233844 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7542f3f1-23fe-41df-99b9-4324c75d35b7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7542f3f1-23fe-41df-99b9-4324c75d35b7" (UID: "7542f3f1-23fe-41df-99b9-4324c75d35b7"). InnerVolumeSpecName "kubelet-dir".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:16:43.235337 master-0 kubenswrapper[7337]: I0312 18:16:43.235247 7337 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7542f3f1-23fe-41df-99b9-4324c75d35b7-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 18:16:43.235392 master-0 kubenswrapper[7337]: I0312 18:16:43.235341 7337 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7542f3f1-23fe-41df-99b9-4324c75d35b7-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 18:16:43.238313 master-0 kubenswrapper[7337]: I0312 18:16:43.238259 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7542f3f1-23fe-41df-99b9-4324c75d35b7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7542f3f1-23fe-41df-99b9-4324c75d35b7" (UID: "7542f3f1-23fe-41df-99b9-4324c75d35b7"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:16:43.336303 master-0 kubenswrapper[7337]: I0312 18:16:43.336193 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7542f3f1-23fe-41df-99b9-4324c75d35b7-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 12 18:16:43.734721 master-0 kubenswrapper[7337]: E0312 18:16:43.734407 7337 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{community-operators-nmmwm.189c2aab6450d162 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-nmmwm,UID:4519000b-e475-4c26-a1c0-bf05cd9c242b,APIVersion:v1,ResourceVersion:8630,FieldPath:spec.initContainers{extract-content},},Reason:Pulling,Message:Pulling image \"registry.redhat.io/redhat/community-operator-index:v4.18\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:14:28.88792509 +0000 UTC m=+69.356526037,LastTimestamp:2026-03-12 18:14:28.88792509 +0000 UTC m=+69.356526037,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:16:43.771640 master-0 kubenswrapper[7337]: I0312 18:16:43.771594 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_7542f3f1-23fe-41df-99b9-4324c75d35b7/installer/0.log"
Mar 12 18:16:43.771830 master-0 kubenswrapper[7337]: I0312 18:16:43.771687 7337 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 18:16:44.521230 master-0 kubenswrapper[7337]: E0312 18:16:44.521126 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 18:16:47.387553 master-0 kubenswrapper[7337]: E0312 18:16:47.387483 7337 projected.go:194] Error preparing data for projected volume kube-api-access-bdc26 for pod openshift-marketplace/redhat-marketplace-ggkqg: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:16:47.387553 master-0 kubenswrapper[7337]: E0312 18:16:47.387497 7337 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:16:47.388654 master-0 kubenswrapper[7337]: E0312 18:16:47.387581 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0cc54e47-af53-448a-b1c9-043710890a32-kube-api-access-bdc26 podName:0cc54e47-af53-448a-b1c9-043710890a32 nodeName:}" failed. No retries permitted until 2026-03-12 18:16:51.38756336 +0000 UTC m=+211.856164307 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bdc26" (UniqueName: "kubernetes.io/projected/0cc54e47-af53-448a-b1c9-043710890a32-kube-api-access-bdc26") pod "redhat-marketplace-ggkqg" (UID: "0cc54e47-af53-448a-b1c9-043710890a32") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:16:47.388654 master-0 kubenswrapper[7337]: E0312 18:16:47.387667 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ec8121ea-f6e9-4232-9837-78b278a8cf54-kube-api-access podName:ec8121ea-f6e9-4232-9837-78b278a8cf54 nodeName:}" failed. No retries permitted until 2026-03-12 18:16:51.38759626 +0000 UTC m=+211.856197247 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ec8121ea-f6e9-4232-9837-78b278a8cf54-kube-api-access") pod "installer-2-master-0" (UID: "ec8121ea-f6e9-4232-9837-78b278a8cf54") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:16:47.591372 master-0 kubenswrapper[7337]: E0312 18:16:47.591281 7337 projected.go:194] Error preparing data for projected volume kube-api-access-gmsnk for pod openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:16:47.591372 master-0 kubenswrapper[7337]: E0312 18:16:47.591384 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34cbf061-4c76-476e-bed9-0a133c744862-kube-api-access-gmsnk podName:34cbf061-4c76-476e-bed9-0a133c744862 nodeName:}" failed. No retries permitted until 2026-03-12 18:16:51.591359148 +0000 UTC m=+212.059960125 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gmsnk" (UniqueName: "kubernetes.io/projected/34cbf061-4c76-476e-bed9-0a133c744862-kube-api-access-gmsnk") pod "control-plane-machine-set-operator-6686554ddc-zd9gm" (UID: "34cbf061-4c76-476e-bed9-0a133c744862") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:16:51.451352 master-0 kubenswrapper[7337]: I0312 18:16:51.451218 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdc26\" (UniqueName: \"kubernetes.io/projected/0cc54e47-af53-448a-b1c9-043710890a32-kube-api-access-bdc26\") pod \"redhat-marketplace-ggkqg\" (UID: \"0cc54e47-af53-448a-b1c9-043710890a32\") " pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:16:51.451352 master-0 kubenswrapper[7337]: I0312 18:16:51.451309 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec8121ea-f6e9-4232-9837-78b278a8cf54-kube-api-access\") pod \"installer-2-master-0\" (UID: \"ec8121ea-f6e9-4232-9837-78b278a8cf54\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 18:16:51.653827 master-0 kubenswrapper[7337]: I0312 18:16:51.653668 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmsnk\" (UniqueName: \"kubernetes.io/projected/34cbf061-4c76-476e-bed9-0a133c744862-kube-api-access-gmsnk\") pod \"control-plane-machine-set-operator-6686554ddc-zd9gm\" (UID: \"34cbf061-4c76-476e-bed9-0a133c744862\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm" Mar 12 18:16:52.833556 master-0 kubenswrapper[7337]: E0312 18:16:52.833399 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" Mar 12 18:16:54.793132 master-0 kubenswrapper[7337]: E0312 18:16:54.793034 7337 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 12 18:17:01.523211 master-0 kubenswrapper[7337]: E0312 18:17:01.523032 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 18:17:01.698777 master-0 kubenswrapper[7337]: I0312 18:17:01.698719 7337 status_manager.go:851] "Failed to get status for pod" podUID="38785e6e-3052-405c-8874-4f295985def5" pod="openshift-kube-apiserver/installer-1-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)" Mar 12 18:17:01.917487 master-0 kubenswrapper[7337]: I0312 18:17:01.917417 7337 generic.go:334] "Generic (PLEG): container finished" podID="30c5dc4b-f1c8-4773-b961-985740fcc503" containerID="f0410fcdb7f021e073b091992c982ea0c6dd9257aa500e76a08b26054e3f730d" exitCode=0 Mar 12 18:17:02.834000 master-0 kubenswrapper[7337]: E0312 18:17:02.833871 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 18:17:03.027421 master-0 kubenswrapper[7337]: I0312 18:17:03.027364 7337 patch_prober.go:28] interesting pod/controller-manager-5b55d98459-sr4hk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: 
connection refused" start-of-body= Mar 12 18:17:03.027421 master-0 kubenswrapper[7337]: I0312 18:17:03.027407 7337 patch_prober.go:28] interesting pod/controller-manager-5b55d98459-sr4hk container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" start-of-body= Mar 12 18:17:03.027684 master-0 kubenswrapper[7337]: I0312 18:17:03.027413 7337 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" podUID="30c5dc4b-f1c8-4773-b961-985740fcc503" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" Mar 12 18:17:03.027684 master-0 kubenswrapper[7337]: I0312 18:17:03.027463 7337 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" podUID="30c5dc4b-f1c8-4773-b961-985740fcc503" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" Mar 12 18:17:12.834711 master-0 kubenswrapper[7337]: E0312 18:17:12.834633 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 18:17:12.995972 master-0 kubenswrapper[7337]: I0312 18:17:12.995886 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-2ltx9_bce831df-c604-4608-a24e-b14d62c5287a/snapshot-controller/1.log" Mar 12 18:17:12.996640 master-0 kubenswrapper[7337]: I0312 18:17:12.996614 7337 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-2ltx9_bce831df-c604-4608-a24e-b14d62c5287a/snapshot-controller/0.log" Mar 12 18:17:12.996787 master-0 kubenswrapper[7337]: I0312 18:17:12.996760 7337 generic.go:334] "Generic (PLEG): container finished" podID="bce831df-c604-4608-a24e-b14d62c5287a" containerID="d836c15d1f62aaf6703c6affbd63fc3695d34670b745bd3f6f244a838540e38e" exitCode=1 Mar 12 18:17:13.026783 master-0 kubenswrapper[7337]: I0312 18:17:13.026721 7337 patch_prober.go:28] interesting pod/controller-manager-5b55d98459-sr4hk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" start-of-body= Mar 12 18:17:13.026974 master-0 kubenswrapper[7337]: I0312 18:17:13.026781 7337 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" podUID="30c5dc4b-f1c8-4773-b961-985740fcc503" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" Mar 12 18:17:13.026974 master-0 kubenswrapper[7337]: I0312 18:17:13.026954 7337 patch_prober.go:28] interesting pod/controller-manager-5b55d98459-sr4hk container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" start-of-body= Mar 12 18:17:13.027156 master-0 kubenswrapper[7337]: I0312 18:17:13.027015 7337 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" podUID="30c5dc4b-f1c8-4773-b961-985740fcc503" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" Mar 
12 18:17:15.010248 master-0 kubenswrapper[7337]: I0312 18:17:15.010192 7337 generic.go:334] "Generic (PLEG): container finished" podID="74eb1407-de29-42e5-9e6c-ce1bec3a9d80" containerID="b68bb8a45412c32b722e21748839c3672ba272871c5c90f6c3a4e4de1a85ff86" exitCode=0 Mar 12 18:17:15.800507 master-0 kubenswrapper[7337]: E0312 18:17:15.800459 7337 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 12 18:17:15.801098 master-0 kubenswrapper[7337]: E0312 18:17:15.801071 7337 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.038s" Mar 12 18:17:15.801272 master-0 kubenswrapper[7337]: I0312 18:17:15.801255 7337 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:17:15.812194 master-0 kubenswrapper[7337]: I0312 18:17:15.812150 7337 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 12 18:17:17.739744 master-0 kubenswrapper[7337]: E0312 18:17:17.739563 7337 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c2aaea3d7a5b5 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:14:42.838627765 
+0000 UTC m=+83.307228712,LastTimestamp:2026-03-12 18:14:42.838627765 +0000 UTC m=+83.307228712,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:17:18.523987 master-0 kubenswrapper[7337]: E0312 18:17:18.523883 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 18:17:19.114449 master-0 kubenswrapper[7337]: E0312 18:17:19.114348 7337 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.313s" Mar 12 18:17:19.114449 master-0 kubenswrapper[7337]: I0312 18:17:19.114400 7337 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" containerID="cri-o://91ac1142b73c6d3658240c3848ad3ec4d35a6a2c1e366a3eec630ba38825ae3c" Mar 12 18:17:19.114449 master-0 kubenswrapper[7337]: I0312 18:17:19.114414 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:17:19.124629 master-0 kubenswrapper[7337]: I0312 18:17:19.122973 7337 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 12 18:17:19.126474 master-0 kubenswrapper[7337]: I0312 18:17:19.126424 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"7542f3f1-23fe-41df-99b9-4324c75d35b7","Type":"ContainerDied","Data":"e3aea0a79706e5d2ced89ea30c6dab8e3469fe22291b915ce855f44fa68a87b6"} Mar 12 18:17:19.126671 master-0 kubenswrapper[7337]: I0312 18:17:19.126495 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:17:19.126671 master-0 kubenswrapper[7337]: I0312 18:17:19.126552 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 12 18:17:19.126671 master-0 kubenswrapper[7337]: I0312 18:17:19.126572 7337 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="2417522b-0ead-4451-8882-4143d36aea12" Mar 12 18:17:19.126671 master-0 kubenswrapper[7337]: I0312 18:17:19.126597 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl" event={"ID":"055f5c67-f512-4510-99c5-e194944b0599","Type":"ContainerDied","Data":"fce4a972222f063110d34772de7116adb2483b3e9c195060fc1414ecf2cd9f6c"} Mar 12 18:17:19.126671 master-0 kubenswrapper[7337]: I0312 18:17:19.126653 7337 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:17:19.126671 master-0 kubenswrapper[7337]: I0312 18:17:19.126678 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"6396a294b799f6df8bf8204dbbb2ddb526c388ee810b3a7185e4216792bec332"} Mar 12 18:17:19.127066 master-0 kubenswrapper[7337]: I0312 18:17:19.126706 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" event={"ID":"e697746f-fb9e-4d10-ab61-33c68e62cc0d","Type":"ContainerDied","Data":"2a197e2fe83ed2e384dda0d8770ef6e8d98b56d89ae78066b100f526847a5d4c"} Mar 12 18:17:19.127066 master-0 kubenswrapper[7337]: I0312 18:17:19.126738 7337 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:17:19.127066 master-0 kubenswrapper[7337]: I0312 18:17:19.126768 7337 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:17:19.127066 master-0 kubenswrapper[7337]: I0312 18:17:19.126797 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" event={"ID":"a1e2340b-ebca-40de-b1e0-8133999cd860","Type":"ContainerDied","Data":"fff98590531dfb71359f592b09852a158d9cf8cc7fff20e92644173e6e6819dc"} Mar 12 18:17:19.127066 master-0 kubenswrapper[7337]: I0312 18:17:19.126830 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:17:19.127066 master-0 kubenswrapper[7337]: I0312 18:17:19.126853 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:17:19.127066 master-0 kubenswrapper[7337]: I0312 18:17:19.126871 7337 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:17:19.127066 master-0 kubenswrapper[7337]: I0312 18:17:19.126891 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" event={"ID":"236f2886-bb69-49a7-9471-36454fd1cbd3","Type":"ContainerDied","Data":"6ae7a934b8aa2f254b8b82bbc367d7391db11d303ac3c55852c1da10c3f95301"} Mar 12 18:17:19.127066 master-0 kubenswrapper[7337]: I0312 18:17:19.126957 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:17:19.127066 master-0 kubenswrapper[7337]: I0312 18:17:19.127000 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:17:19.127066 master-0 kubenswrapper[7337]: I0312 18:17:19.127018 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn" event={"ID":"ab926874-9722-4e65-9084-27b2f9915450","Type":"ContainerDied","Data":"f47fabdc4bdd8a3562bf6c4bb328b7b2603314ba7c3e007528769af4852f929f"} Mar 12 18:17:19.127066 master-0 kubenswrapper[7337]: I0312 18:17:19.127040 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-hqrqt" event={"ID":"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe","Type":"ContainerDied","Data":"0e62c9f4417a5a9e30eb23f06a18c4ab2b7d089c3e060926866187529335e3de"} Mar 12 18:17:19.127066 master-0 kubenswrapper[7337]: I0312 18:17:19.127062 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k" event={"ID":"d4ae1240-e04e-48e9-88df-9f1a53508da7","Type":"ContainerDied","Data":"336f9bff957643e2b1614f5b9ab58d3286fac81af162d3e42ef2ab143bd1a53e"} Mar 12 18:17:19.127066 master-0 kubenswrapper[7337]: I0312 18:17:19.127084 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp" event={"ID":"e720e1d0-5a6d-4b76-8b25-5963e24950f5","Type":"ContainerDied","Data":"6cbf8532a0aab6166e00e40dafe24b7c97f2d79bb9206285a901edb45142b490"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127106 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"e36200a118a6b9e121bed30c8ddbc2aa00c9c281d2b93d4ca81dd8623b5aea84"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127131 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" event={"ID":"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64","Type":"ContainerDied","Data":"91ac1142b73c6d3658240c3848ad3ec4d35a6a2c1e366a3eec630ba38825ae3c"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127157 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp" event={"ID":"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa","Type":"ContainerDied","Data":"3e81068034bf9c9fbfc0dcacd5d8ed6f99d4b966db54edfeaa5ae37af6e0a1a5"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127180 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" event={"ID":"d94dc349-c5cb-4f12-8e48-867030af4981","Type":"ContainerDied","Data":"fd2b6be186aaa869f9c5743426ef2bc5d49bada1c5fa7a307e7f55efa78a7bbf"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127201 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" event={"ID":"bce831df-c604-4608-a24e-b14d62c5287a","Type":"ContainerDied","Data":"e722a80d2945b3420fc5caeacc6065896e4511dd6a2605aa37b3ca081a3ef049"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127222 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" event={"ID":"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652","Type":"ContainerDied","Data":"a15650ff0279cc1eb053cd0564e886ecaf1299636ec1285faa1562a29a442c43"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127289 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" 
event={"ID":"d1b3859c-20a1-4a1c-8508-86ed843768f5","Type":"ContainerDied","Data":"736a8404a1683d56f8dbc8f71de47cc325d858c0409febcb5d511b27a322ce13"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127323 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" event={"ID":"a1e2340b-ebca-40de-b1e0-8133999cd860","Type":"ContainerStarted","Data":"9de5a3b93eb3f1136dd34751bc0d652341fdfc646209d52ecaff219c3bdfc30b"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127344 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" event={"ID":"b6d288e3-8e73-44d2-874d-64c6c98dd991","Type":"ContainerStarted","Data":"61391e64ce8e20710a16e47ab514517643e782dc9c713a84a5cefd62cff8c6ad"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127364 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-hqrqt" event={"ID":"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe","Type":"ContainerStarted","Data":"f3f978addd81177f408450b5c8b37d7927d5e40c6e8538e905d2bc327fb8a086"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127382 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k" event={"ID":"d4ae1240-e04e-48e9-88df-9f1a53508da7","Type":"ContainerStarted","Data":"3bfdf8caec49323e35f87883171b05e3d1f44df1c027fc9a9977c37c9de794d7"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127401 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" event={"ID":"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652","Type":"ContainerStarted","Data":"68e99d6ea6e8d10062ce5f4ba00f8d6b01cd9c70d46c0f8c6da206028e5ae034"} Mar 12 
18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127420 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" event={"ID":"e697746f-fb9e-4d10-ab61-33c68e62cc0d","Type":"ContainerStarted","Data":"4322cdc97f321d2418571282b2d0a02572a0fe1f4c6c9ffe9fbcda76c46d48dc"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127437 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl" event={"ID":"055f5c67-f512-4510-99c5-e194944b0599","Type":"ContainerStarted","Data":"516fa9aa37ae542ea59bf93e3a95540f2151edcb864ff7b3d27213b8dc8b61bc"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127455 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp" event={"ID":"e720e1d0-5a6d-4b76-8b25-5963e24950f5","Type":"ContainerStarted","Data":"ba030f3020f3f1090c4c425e4d3efed561b91812de9e192e22ab2d5f0f03e899"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127474 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" event={"ID":"d1b3859c-20a1-4a1c-8508-86ed843768f5","Type":"ContainerStarted","Data":"96aef0daff5b4b065b760a53564e1e05a4751150ad79dc2f9bc551c5dafe3e48"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127492 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"0b463f9b24d0ab49df3e727d4db7dfa04150b13289e95c0e9bcdb7b146ec69e5"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127553 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" 
event={"ID":"bce831df-c604-4608-a24e-b14d62c5287a","Type":"ContainerStarted","Data":"d836c15d1f62aaf6703c6affbd63fc3695d34670b745bd3f6f244a838540e38e"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127579 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp" event={"ID":"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa","Type":"ContainerStarted","Data":"5d0241cc7009d3306ab2a5141a1e724e5d10e6d5ba8b992abdc867f5e3227b5c"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127599 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn" event={"ID":"ab926874-9722-4e65-9084-27b2f9915450","Type":"ContainerStarted","Data":"9213ca251d514ef6a4502ec562d995d2bad7cf3823696ef39a8b4b2403b0a326"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127617 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" event={"ID":"236f2886-bb69-49a7-9471-36454fd1cbd3","Type":"ContainerStarted","Data":"7cab1578ed2470f7e82997cc453e19ce4ec7011f918fcfd2c8f748a2ab79d803"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127635 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" event={"ID":"d94dc349-c5cb-4f12-8e48-867030af4981","Type":"ContainerStarted","Data":"95c9f43b56f422e41bfe1eff7ae31e7f220fb9fe9c3b7c82ac5ec70e4a6cd3be"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127653 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" event={"ID":"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64","Type":"ContainerStarted","Data":"7ae4d8d774c3ae2fa8787557fe823a911ba5793827a61120b252083df1bc5f38"} Mar 12 18:17:19.127997 master-0 
kubenswrapper[7337]: I0312 18:17:19.127670 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"7542f3f1-23fe-41df-99b9-4324c75d35b7","Type":"ContainerDied","Data":"b3d2e6992e795fa35374f60292962d9511ac22996078698ddd0c5f16bcc8772c"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127690 7337 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3d2e6992e795fa35374f60292962d9511ac22996078698ddd0c5f16bcc8772c" Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127709 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"85481ac128156495a33b73e0081f7dd95a2a0153c6a78beb32cc5e772332a467"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127729 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"36cd53616859a569c9c20c415080914078c0deb1d74d4eb2e8bfe2dd611dc704"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127747 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"feb59d6cd6246e5eb3016a3006c4795b8f2234b2b5776c1e7531d520271356d0"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127765 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"b5ec126109a6232b9aa6b3936f959178287d28c871336513191b8b615b8c76f4"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127783 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"e44a9cc4aee80272b38b5e4941479361f9126b94fc8be3e31ee711d40c88f185"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127800 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" event={"ID":"30c5dc4b-f1c8-4773-b961-985740fcc503","Type":"ContainerDied","Data":"f0410fcdb7f021e073b091992c982ea0c6dd9257aa500e76a08b26054e3f730d"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127821 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" event={"ID":"bce831df-c604-4608-a24e-b14d62c5287a","Type":"ContainerDied","Data":"d836c15d1f62aaf6703c6affbd63fc3695d34670b745bd3f6f244a838540e38e"} Mar 12 18:17:19.127997 master-0 kubenswrapper[7337]: I0312 18:17:19.127849 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" event={"ID":"74eb1407-de29-42e5-9e6c-ce1bec3a9d80","Type":"ContainerDied","Data":"b68bb8a45412c32b722e21748839c3672ba272871c5c90f6c3a4e4de1a85ff86"} Mar 12 18:17:19.133779 master-0 kubenswrapper[7337]: I0312 18:17:19.128593 7337 scope.go:117] "RemoveContainer" containerID="b68bb8a45412c32b722e21748839c3672ba272871c5c90f6c3a4e4de1a85ff86" Mar 12 18:17:19.133779 master-0 kubenswrapper[7337]: I0312 18:17:19.133328 7337 scope.go:117] "RemoveContainer" containerID="95d38f1431066968104bb51ab17f5c680fc28063e6ba5f01ad252c4fc619c1e1" Mar 12 18:17:19.138910 master-0 kubenswrapper[7337]: I0312 18:17:19.134300 7337 scope.go:117] "RemoveContainer" containerID="d836c15d1f62aaf6703c6affbd63fc3695d34670b745bd3f6f244a838540e38e" Mar 12 18:17:19.138910 master-0 kubenswrapper[7337]: E0312 18:17:19.134649 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: 
\"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-2ltx9_openshift-cluster-storage-operator(bce831df-c604-4608-a24e-b14d62c5287a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" podUID="bce831df-c604-4608-a24e-b14d62c5287a" Mar 12 18:17:19.138910 master-0 kubenswrapper[7337]: I0312 18:17:19.134793 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:17:19.138910 master-0 kubenswrapper[7337]: I0312 18:17:19.134869 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 12 18:17:19.138910 master-0 kubenswrapper[7337]: I0312 18:17:19.134871 7337 scope.go:117] "RemoveContainer" containerID="f0410fcdb7f021e073b091992c982ea0c6dd9257aa500e76a08b26054e3f730d" Mar 12 18:17:19.138910 master-0 kubenswrapper[7337]: I0312 18:17:19.134891 7337 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="2417522b-0ead-4451-8882-4143d36aea12" Mar 12 18:17:19.138910 master-0 kubenswrapper[7337]: I0312 18:17:19.134916 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:17:19.138910 master-0 kubenswrapper[7337]: I0312 18:17:19.135662 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:17:19.138910 master-0 kubenswrapper[7337]: I0312 18:17:19.136869 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:17:19.170956 master-0 kubenswrapper[7337]: I0312 18:17:19.170898 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-2vwn7"] Mar 12 18:17:19.179883 master-0 kubenswrapper[7337]: I0312 18:17:19.176006 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2vwn7"] Mar 12 18:17:19.180063 master-0 kubenswrapper[7337]: I0312 18:17:19.180013 7337 scope.go:117] "RemoveContainer" containerID="05c0afcccf4bf3051eac46ea2747146033d8dbf283902873560ad4999c7825f8" Mar 12 18:17:19.210041 master-0 kubenswrapper[7337]: I0312 18:17:19.209991 7337 scope.go:117] "RemoveContainer" containerID="e722a80d2945b3420fc5caeacc6065896e4511dd6a2605aa37b3ca081a3ef049" Mar 12 18:17:19.254497 master-0 kubenswrapper[7337]: I0312 18:17:19.254459 7337 scope.go:117] "RemoveContainer" containerID="e722a80d2945b3420fc5caeacc6065896e4511dd6a2605aa37b3ca081a3ef049" Mar 12 18:17:19.255164 master-0 kubenswrapper[7337]: E0312 18:17:19.255117 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e722a80d2945b3420fc5caeacc6065896e4511dd6a2605aa37b3ca081a3ef049\": container with ID starting with e722a80d2945b3420fc5caeacc6065896e4511dd6a2605aa37b3ca081a3ef049 not found: ID does not exist" containerID="e722a80d2945b3420fc5caeacc6065896e4511dd6a2605aa37b3ca081a3ef049" Mar 12 18:17:19.255298 master-0 kubenswrapper[7337]: I0312 18:17:19.255158 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e722a80d2945b3420fc5caeacc6065896e4511dd6a2605aa37b3ca081a3ef049"} err="failed to get container status \"e722a80d2945b3420fc5caeacc6065896e4511dd6a2605aa37b3ca081a3ef049\": rpc error: code = NotFound desc = could not find container \"e722a80d2945b3420fc5caeacc6065896e4511dd6a2605aa37b3ca081a3ef049\": container with ID starting with e722a80d2945b3420fc5caeacc6065896e4511dd6a2605aa37b3ca081a3ef049 not found: ID does not exist" Mar 12 18:17:19.321796 master-0 kubenswrapper[7337]: I0312 18:17:19.321728 7337 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4s28n"] Mar 12 18:17:19.327340 master-0 kubenswrapper[7337]: I0312 18:17:19.325125 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4s28n"] Mar 12 18:17:19.349251 master-0 kubenswrapper[7337]: I0312 18:17:19.349180 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttgsx"] Mar 12 18:17:19.355913 master-0 kubenswrapper[7337]: I0312 18:17:19.355849 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttgsx"] Mar 12 18:17:19.374329 master-0 kubenswrapper[7337]: I0312 18:17:19.372123 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nmmwm" podStartSLOduration=156.155742182 podStartE2EDuration="3m0.372104156s" podCreationTimestamp="2026-03-12 18:14:19 +0000 UTC" firstStartedPulling="2026-03-12 18:14:28.88792108 +0000 UTC m=+69.356522027" lastFinishedPulling="2026-03-12 18:14:53.104283054 +0000 UTC m=+93.572884001" observedRunningTime="2026-03-12 18:17:19.370198088 +0000 UTC m=+239.838799045" watchObservedRunningTime="2026-03-12 18:17:19.372104156 +0000 UTC m=+239.840705103" Mar 12 18:17:19.418192 master-0 kubenswrapper[7337]: I0312 18:17:19.418128 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 12 18:17:19.424413 master-0 kubenswrapper[7337]: I0312 18:17:19.424373 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 12 18:17:19.455898 master-0 kubenswrapper[7337]: I0312 18:17:19.455840 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-67j2w"] Mar 12 18:17:19.458878 master-0 kubenswrapper[7337]: I0312 18:17:19.458696 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-67j2w"] Mar 12 18:17:19.517745 master-0 kubenswrapper[7337]: I0312 18:17:19.517647 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6jhwp" podStartSLOduration=150.202593664 podStartE2EDuration="3m2.517614095s" podCreationTimestamp="2026-03-12 18:14:17 +0000 UTC" firstStartedPulling="2026-03-12 18:14:20.801979414 +0000 UTC m=+61.270580351" lastFinishedPulling="2026-03-12 18:14:53.116999835 +0000 UTC m=+93.585600782" observedRunningTime="2026-03-12 18:17:19.516988919 +0000 UTC m=+239.985589876" watchObservedRunningTime="2026-03-12 18:17:19.517614095 +0000 UTC m=+239.986215042" Mar 12 18:17:19.730953 master-0 kubenswrapper[7337]: I0312 18:17:19.730647 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2331fc4b-e67b-4496-8cae-15cd11cf3030" path="/var/lib/kubelet/pods/2331fc4b-e67b-4496-8cae-15cd11cf3030/volumes" Mar 12 18:17:19.731434 master-0 kubenswrapper[7337]: I0312 18:17:19.731395 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cd18201-afdc-4229-972e-ab01adb2a7f3" path="/var/lib/kubelet/pods/8cd18201-afdc-4229-972e-ab01adb2a7f3/volumes" Mar 12 18:17:19.732217 master-0 kubenswrapper[7337]: I0312 18:17:19.732180 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a" path="/var/lib/kubelet/pods/9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a/volumes" Mar 12 18:17:19.736787 master-0 kubenswrapper[7337]: I0312 18:17:19.736576 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4d20441-ec1f-4571-b590-989f2bdd4082" path="/var/lib/kubelet/pods/c4d20441-ec1f-4571-b590-989f2bdd4082/volumes" Mar 12 18:17:19.737830 master-0 kubenswrapper[7337]: I0312 18:17:19.737798 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f516dab9-06d1-4bea-96b9-8f3e14543bbd" 
path="/var/lib/kubelet/pods/f516dab9-06d1-4bea-96b9-8f3e14543bbd/volumes" Mar 12 18:17:20.045755 master-0 kubenswrapper[7337]: I0312 18:17:20.045678 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" event={"ID":"30c5dc4b-f1c8-4773-b961-985740fcc503","Type":"ContainerStarted","Data":"4c8710948dd5b2bc4491d4be8e860204494bc7d0e3f8f5ee9d0d17c5e1a20b86"} Mar 12 18:17:20.047048 master-0 kubenswrapper[7337]: I0312 18:17:20.046827 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:17:20.054503 master-0 kubenswrapper[7337]: I0312 18:17:20.052538 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:17:20.054503 master-0 kubenswrapper[7337]: I0312 18:17:20.053564 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" event={"ID":"74eb1407-de29-42e5-9e6c-ce1bec3a9d80","Type":"ContainerStarted","Data":"cf7ec04355ab534ccfb643cab9c3d22d23f3ccdda0dc0dcaa6f049053cf3267f"} Mar 12 18:17:20.069272 master-0 kubenswrapper[7337]: I0312 18:17:20.067747 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-2ltx9_bce831df-c604-4608-a24e-b14d62c5287a/snapshot-controller/1.log" Mar 12 18:17:20.073353 master-0 kubenswrapper[7337]: I0312 18:17:20.072913 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-m6hsp_223a548b-a3ad-40dd-82de-e3dbb7f3e4fa/openshift-controller-manager-operator/1.log" Mar 12 18:17:20.747118 master-0 kubenswrapper[7337]: I0312 18:17:20.747007 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:17:20.854962 master-0 kubenswrapper[7337]: I0312 18:17:20.854871 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 12 18:17:20.854962 master-0 kubenswrapper[7337]: I0312 18:17:20.854944 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 12 18:17:20.886096 master-0 kubenswrapper[7337]: I0312 18:17:20.886017 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 12 18:17:21.584682 master-0 kubenswrapper[7337]: E0312 18:17:21.584558 7337 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Mar 12 18:17:21.585034 master-0 kubenswrapper[7337]: E0312 18:17:21.584703 7337 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:17:21.585034 master-0 kubenswrapper[7337]: E0312 18:17:21.584728 7337 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:17:21.585034 master-0 kubenswrapper[7337]: E0312 18:17:21.584780 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-cpthp_openshift-network-diagnostics(33feec78-4592-4343-965b-aa1b7044fcf3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-cpthp_openshift-network-diagnostics(33feec78-4592-4343-965b-aa1b7044fcf3)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="openshift-network-diagnostics/network-check-target-cpthp" podUID="33feec78-4592-4343-965b-aa1b7044fcf3" Mar 
12 18:17:22.084077 master-0 kubenswrapper[7337]: I0312 18:17:22.084027 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:17:22.084720 master-0 kubenswrapper[7337]: I0312 18:17:22.084686 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:17:22.531902 master-0 kubenswrapper[7337]: W0312 18:17:22.531850 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-3092260e94cf7b40349ae07a2ae8e596f460006829227c4e274b91910ac605bd WatchSource:0}: Error finding container 3092260e94cf7b40349ae07a2ae8e596f460006829227c4e274b91910ac605bd: Status 404 returned error can't find the container with id 3092260e94cf7b40349ae07a2ae8e596f460006829227c4e274b91910ac605bd Mar 12 18:17:22.835619 master-0 kubenswrapper[7337]: E0312 18:17:22.835479 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 18:17:22.835619 master-0 kubenswrapper[7337]: E0312 18:17:22.835601 7337 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 18:17:22.838930 master-0 kubenswrapper[7337]: I0312 18:17:22.838838 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:17:22.983530 master-0 kubenswrapper[7337]: E0312 18:17:22.983447 7337 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/53c3e9b4b8cca5096418831f916cb1cb0c57b6fb8aae97e2652c5c5a1bbcab4a/diff" to get inode 
usage: stat /var/lib/containers/storage/overlay/53c3e9b4b8cca5096418831f916cb1cb0c57b6fb8aae97e2652c5c5a1bbcab4a/diff: no such file or directory, extraDiskErr: Mar 12 18:17:23.092903 master-0 kubenswrapper[7337]: I0312 18:17:23.092745 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-cpthp" event={"ID":"33feec78-4592-4343-965b-aa1b7044fcf3","Type":"ContainerStarted","Data":"9ba7bf5965eadffe4beaf88fd479b4e2db6a3760b155335f94f38129cd944778"} Mar 12 18:17:23.092903 master-0 kubenswrapper[7337]: I0312 18:17:23.092825 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-cpthp" event={"ID":"33feec78-4592-4343-965b-aa1b7044fcf3","Type":"ContainerStarted","Data":"3092260e94cf7b40349ae07a2ae8e596f460006829227c4e274b91910ac605bd"} Mar 12 18:17:23.093473 master-0 kubenswrapper[7337]: I0312 18:17:23.093081 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:17:23.747867 master-0 kubenswrapper[7337]: I0312 18:17:23.747739 7337 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 18:17:25.454945 master-0 kubenswrapper[7337]: E0312 18:17:25.454861 7337 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:17:25.455539 master-0 kubenswrapper[7337]: E0312 18:17:25.454997 7337 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/ec8121ea-f6e9-4232-9837-78b278a8cf54-kube-api-access podName:ec8121ea-f6e9-4232-9837-78b278a8cf54 nodeName:}" failed. No retries permitted until 2026-03-12 18:17:33.454956014 +0000 UTC m=+253.923556961 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ec8121ea-f6e9-4232-9837-78b278a8cf54-kube-api-access") pod "installer-2-master-0" (UID: "ec8121ea-f6e9-4232-9837-78b278a8cf54") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:17:25.455539 master-0 kubenswrapper[7337]: E0312 18:17:25.455156 7337 projected.go:194] Error preparing data for projected volume kube-api-access-bdc26 for pod openshift-marketplace/redhat-marketplace-ggkqg: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:17:25.455539 master-0 kubenswrapper[7337]: E0312 18:17:25.455266 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0cc54e47-af53-448a-b1c9-043710890a32-kube-api-access-bdc26 podName:0cc54e47-af53-448a-b1c9-043710890a32 nodeName:}" failed. No retries permitted until 2026-03-12 18:17:33.455237351 +0000 UTC m=+253.923838328 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bdc26" (UniqueName: "kubernetes.io/projected/0cc54e47-af53-448a-b1c9-043710890a32-kube-api-access-bdc26") pod "redhat-marketplace-ggkqg" (UID: "0cc54e47-af53-448a-b1c9-043710890a32") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:17:25.658085 master-0 kubenswrapper[7337]: E0312 18:17:25.657984 7337 projected.go:194] Error preparing data for projected volume kube-api-access-gmsnk for pod openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:17:25.658355 master-0 kubenswrapper[7337]: E0312 18:17:25.658231 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34cbf061-4c76-476e-bed9-0a133c744862-kube-api-access-gmsnk podName:34cbf061-4c76-476e-bed9-0a133c744862 nodeName:}" failed. No retries permitted until 2026-03-12 18:17:33.658133885 +0000 UTC m=+254.126734872 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gmsnk" (UniqueName: "kubernetes.io/projected/34cbf061-4c76-476e-bed9-0a133c744862-kube-api-access-gmsnk") pod "control-plane-machine-set-operator-6686554ddc-zd9gm" (UID: "34cbf061-4c76-476e-bed9-0a133c744862") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 18:17:25.873304 master-0 kubenswrapper[7337]: I0312 18:17:25.873252 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 12 18:17:29.726793 master-0 kubenswrapper[7337]: I0312 18:17:29.726740 7337 scope.go:117] "RemoveContainer" containerID="d836c15d1f62aaf6703c6affbd63fc3695d34670b745bd3f6f244a838540e38e" Mar 12 18:17:30.144647 master-0 kubenswrapper[7337]: I0312 18:17:30.144604 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-2ltx9_bce831df-c604-4608-a24e-b14d62c5287a/snapshot-controller/1.log" Mar 12 18:17:30.144861 master-0 kubenswrapper[7337]: I0312 18:17:30.144662 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" event={"ID":"bce831df-c604-4608-a24e-b14d62c5287a","Type":"ContainerStarted","Data":"b960a5afb85862022f06f7d612fd0f8b4c4023ddaec1e0fe6a309ca8f51ad930"} Mar 12 18:17:32.146763 master-0 kubenswrapper[7337]: E0312 18:17:32.146666 7337 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 12 18:17:33.289321 master-0 kubenswrapper[7337]: I0312 18:17:33.289258 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:17:33.295962 master-0 kubenswrapper[7337]: I0312 18:17:33.295917 7337 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:17:33.514464 master-0 kubenswrapper[7337]: I0312 18:17:33.514374 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdc26\" (UniqueName: \"kubernetes.io/projected/0cc54e47-af53-448a-b1c9-043710890a32-kube-api-access-bdc26\") pod \"redhat-marketplace-ggkqg\" (UID: \"0cc54e47-af53-448a-b1c9-043710890a32\") " pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:17:33.514750 master-0 kubenswrapper[7337]: I0312 18:17:33.514501 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec8121ea-f6e9-4232-9837-78b278a8cf54-kube-api-access\") pod \"installer-2-master-0\" (UID: \"ec8121ea-f6e9-4232-9837-78b278a8cf54\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 18:17:33.544092 master-0 kubenswrapper[7337]: I0312 18:17:33.544042 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdc26\" (UniqueName: \"kubernetes.io/projected/0cc54e47-af53-448a-b1c9-043710890a32-kube-api-access-bdc26\") pod \"redhat-marketplace-ggkqg\" (UID: \"0cc54e47-af53-448a-b1c9-043710890a32\") " pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:17:33.547920 master-0 kubenswrapper[7337]: I0312 18:17:33.547881 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec8121ea-f6e9-4232-9837-78b278a8cf54-kube-api-access\") pod \"installer-2-master-0\" (UID: \"ec8121ea-f6e9-4232-9837-78b278a8cf54\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 18:17:33.716777 master-0 kubenswrapper[7337]: I0312 18:17:33.716671 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmsnk\" (UniqueName: 
\"kubernetes.io/projected/34cbf061-4c76-476e-bed9-0a133c744862-kube-api-access-gmsnk\") pod \"control-plane-machine-set-operator-6686554ddc-zd9gm\" (UID: \"34cbf061-4c76-476e-bed9-0a133c744862\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm" Mar 12 18:17:33.732418 master-0 kubenswrapper[7337]: I0312 18:17:33.732312 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-hhnmb" Mar 12 18:17:33.733493 master-0 kubenswrapper[7337]: I0312 18:17:33.733431 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-tr8hr" Mar 12 18:17:33.740939 master-0 kubenswrapper[7337]: I0312 18:17:33.740866 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:17:33.741142 master-0 kubenswrapper[7337]: I0312 18:17:33.740953 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 18:17:33.751190 master-0 kubenswrapper[7337]: I0312 18:17:33.751112 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmsnk\" (UniqueName: \"kubernetes.io/projected/34cbf061-4c76-476e-bed9-0a133c744862-kube-api-access-gmsnk\") pod \"control-plane-machine-set-operator-6686554ddc-zd9gm\" (UID: \"34cbf061-4c76-476e-bed9-0a133c744862\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm" Mar 12 18:17:34.031474 master-0 kubenswrapper[7337]: I0312 18:17:34.031377 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-h5f5n" Mar 12 18:17:34.040857 master-0 kubenswrapper[7337]: I0312 18:17:34.040810 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm" Mar 12 18:17:34.163495 master-0 kubenswrapper[7337]: I0312 18:17:34.163433 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 12 18:17:34.173845 master-0 kubenswrapper[7337]: W0312 18:17:34.173779 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podec8121ea_f6e9_4232_9837_78b278a8cf54.slice/crio-1fbd8581b67e0a5e29b36d7c0987774ae0aa02a95a0bdf7e572b9e31a319d172 WatchSource:0}: Error finding container 1fbd8581b67e0a5e29b36d7c0987774ae0aa02a95a0bdf7e572b9e31a319d172: Status 404 returned error can't find the container with id 1fbd8581b67e0a5e29b36d7c0987774ae0aa02a95a0bdf7e572b9e31a319d172 Mar 12 18:17:34.217876 master-0 kubenswrapper[7337]: I0312 18:17:34.217827 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ggkqg"] Mar 12 18:17:34.423467 master-0 kubenswrapper[7337]: I0312 18:17:34.423421 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm"] Mar 12 18:17:34.434411 master-0 kubenswrapper[7337]: W0312 18:17:34.434363 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34cbf061_4c76_476e_bed9_0a133c744862.slice/crio-86c89d73955641e1c897c48a0e24b070831cd22e13c7526466c6f9aac066f9fb WatchSource:0}: Error finding container 86c89d73955641e1c897c48a0e24b070831cd22e13c7526466c6f9aac066f9fb: Status 404 returned error can't find the container with id 86c89d73955641e1c897c48a0e24b070831cd22e13c7526466c6f9aac066f9fb Mar 12 18:17:35.177103 master-0 kubenswrapper[7337]: I0312 18:17:35.177019 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm" 
event={"ID":"34cbf061-4c76-476e-bed9-0a133c744862","Type":"ContainerStarted","Data":"86c89d73955641e1c897c48a0e24b070831cd22e13c7526466c6f9aac066f9fb"} Mar 12 18:17:35.179385 master-0 kubenswrapper[7337]: I0312 18:17:35.179325 7337 generic.go:334] "Generic (PLEG): container finished" podID="0cc54e47-af53-448a-b1c9-043710890a32" containerID="d444bb83001cd903efee9e4b70e81f0883fb0a84f83f9034f7633dc5339f7ac1" exitCode=0 Mar 12 18:17:35.179499 master-0 kubenswrapper[7337]: I0312 18:17:35.179420 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ggkqg" event={"ID":"0cc54e47-af53-448a-b1c9-043710890a32","Type":"ContainerDied","Data":"d444bb83001cd903efee9e4b70e81f0883fb0a84f83f9034f7633dc5339f7ac1"} Mar 12 18:17:35.179499 master-0 kubenswrapper[7337]: I0312 18:17:35.179454 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ggkqg" event={"ID":"0cc54e47-af53-448a-b1c9-043710890a32","Type":"ContainerStarted","Data":"07e040d6dfa9951cac42e33315b3d655ef1dac90f6ba66c364219500701a9ef4"} Mar 12 18:17:35.182993 master-0 kubenswrapper[7337]: I0312 18:17:35.182951 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"ec8121ea-f6e9-4232-9837-78b278a8cf54","Type":"ContainerStarted","Data":"858ee31a04ea10059b361ef351f3695c906bcd6e4d8c64728b6201ca11a0a592"} Mar 12 18:17:35.183086 master-0 kubenswrapper[7337]: I0312 18:17:35.183005 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"ec8121ea-f6e9-4232-9837-78b278a8cf54","Type":"ContainerStarted","Data":"1fbd8581b67e0a5e29b36d7c0987774ae0aa02a95a0bdf7e572b9e31a319d172"} Mar 12 18:17:35.221733 master-0 kubenswrapper[7337]: I0312 18:17:35.221495 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" 
podStartSLOduration=188.221463916 podStartE2EDuration="3m8.221463916s" podCreationTimestamp="2026-03-12 18:14:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:17:35.221203209 +0000 UTC m=+255.689804186" watchObservedRunningTime="2026-03-12 18:17:35.221463916 +0000 UTC m=+255.690064933" Mar 12 18:17:35.525170 master-0 kubenswrapper[7337]: E0312 18:17:35.524754 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 18:17:36.196552 master-0 kubenswrapper[7337]: I0312 18:17:36.194648 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ggkqg" event={"ID":"0cc54e47-af53-448a-b1c9-043710890a32","Type":"ContainerStarted","Data":"64b8db76d38d762e3433321ecb6cf6a40c39a4859996726a4dbe65ebe8ab152e"} Mar 12 18:17:37.204172 master-0 kubenswrapper[7337]: I0312 18:17:37.204110 7337 generic.go:334] "Generic (PLEG): container finished" podID="0cc54e47-af53-448a-b1c9-043710890a32" containerID="64b8db76d38d762e3433321ecb6cf6a40c39a4859996726a4dbe65ebe8ab152e" exitCode=0 Mar 12 18:17:37.204172 master-0 kubenswrapper[7337]: I0312 18:17:37.204151 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ggkqg" event={"ID":"0cc54e47-af53-448a-b1c9-043710890a32","Type":"ContainerDied","Data":"64b8db76d38d762e3433321ecb6cf6a40c39a4859996726a4dbe65ebe8ab152e"} Mar 12 18:17:37.206058 master-0 kubenswrapper[7337]: I0312 18:17:37.206019 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm" 
event={"ID":"34cbf061-4c76-476e-bed9-0a133c744862","Type":"ContainerStarted","Data":"d72afaed4f952dfc1603764d86ef509711bb42af6ee8dbbfe68a46a833266739"} Mar 12 18:17:37.250623 master-0 kubenswrapper[7337]: I0312 18:17:37.250479 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm" podStartSLOduration=188.37194106 podStartE2EDuration="3m10.250453557s" podCreationTimestamp="2026-03-12 18:14:27 +0000 UTC" firstStartedPulling="2026-03-12 18:17:34.436673222 +0000 UTC m=+254.905274169" lastFinishedPulling="2026-03-12 18:17:36.315185719 +0000 UTC m=+256.783786666" observedRunningTime="2026-03-12 18:17:37.248650091 +0000 UTC m=+257.717251078" watchObservedRunningTime="2026-03-12 18:17:37.250453557 +0000 UTC m=+257.719054544" Mar 12 18:17:38.219218 master-0 kubenswrapper[7337]: I0312 18:17:38.218562 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ggkqg" event={"ID":"0cc54e47-af53-448a-b1c9-043710890a32","Type":"ContainerStarted","Data":"5aafe75dccc28bb2200202091bf3cef5ad843d85112bb6c2f016f795026d8085"} Mar 12 18:17:38.240410 master-0 kubenswrapper[7337]: I0312 18:17:38.240311 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ggkqg" podStartSLOduration=189.709752936 podStartE2EDuration="3m12.24028392s" podCreationTimestamp="2026-03-12 18:14:26 +0000 UTC" firstStartedPulling="2026-03-12 18:17:35.18115853 +0000 UTC m=+255.649759477" lastFinishedPulling="2026-03-12 18:17:37.711689514 +0000 UTC m=+258.180290461" observedRunningTime="2026-03-12 18:17:38.239169942 +0000 UTC m=+258.707770919" watchObservedRunningTime="2026-03-12 18:17:38.24028392 +0000 UTC m=+258.708884867" Mar 12 18:17:41.792426 master-0 kubenswrapper[7337]: E0312 18:17:41.792354 7337 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" is forbidden: the server was unable 
to return a response in the time allotted, but may still be processing the request (get limitranges)" pod="openshift-etcd/etcd-master-0" Mar 12 18:17:43.223982 master-0 kubenswrapper[7337]: E0312 18:17:43.223818 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:17:33Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:17:33Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:17:33Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:17:33Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:24fdf6755aec2ff108ceee2e24eee87c6953140e4325a59c8d1ddbf1dca41828\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ea25f58b1e485b176739a042e02cb509306918451cb4ee862117f0d0892ea2c1\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1739033560},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1fce8b5c6b0206ecb4ddc7de47062bed853b88d4e34415e9e5a2a6bc99cf6aad\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:8bd0ffcb6caac4a5d03346b5f7cdfaf2f6f9f9d0a30deff8f216e6cb63b0ee75\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1282704097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"name
s\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:896dc34712ba5eb2d9daa6e77a55cb67501435f8f108cc4e5eda3ece5212c2b0\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ef4e5c2f6262e5a68f785f148c9da79aefa72e2e20a365276bc996658ce6689c\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221806801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"size
Bytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1
1f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9\\\"],\\\"sizeBytes\\\":456575686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626\\\"],\\\"sizeBytes\\\":448828105},
{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\"],\\\"sizeBytes\\\":443271011}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 18:17:43.741622 master-0 kubenswrapper[7337]: I0312 18:17:43.741550 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:17:43.741622 master-0 kubenswrapper[7337]: I0312 18:17:43.741621 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:17:43.776537 master-0 kubenswrapper[7337]: I0312 18:17:43.776475 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:17:44.302152 master-0 kubenswrapper[7337]: I0312 18:17:44.302089 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:17:55.986126 master-0 kubenswrapper[7337]: I0312 18:17:55.986057 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:17:57.031673 master-0 kubenswrapper[7337]: I0312 18:17:57.031588 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d5tcw"] Mar 12 18:17:57.032939 master-0 kubenswrapper[7337]: E0312 18:17:57.032895 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a" containerName="extract-content" Mar 12 18:17:57.033252 master-0 
kubenswrapper[7337]: I0312 18:17:57.033216 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a" containerName="extract-content" Mar 12 18:17:57.033466 master-0 kubenswrapper[7337]: E0312 18:17:57.033432 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2331fc4b-e67b-4496-8cae-15cd11cf3030" containerName="extract-utilities" Mar 12 18:17:57.033727 master-0 kubenswrapper[7337]: I0312 18:17:57.033693 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="2331fc4b-e67b-4496-8cae-15cd11cf3030" containerName="extract-utilities" Mar 12 18:17:57.033916 master-0 kubenswrapper[7337]: E0312 18:17:57.033887 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4d20441-ec1f-4571-b590-989f2bdd4082" containerName="extract-content" Mar 12 18:17:57.034100 master-0 kubenswrapper[7337]: I0312 18:17:57.034070 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4d20441-ec1f-4571-b590-989f2bdd4082" containerName="extract-content" Mar 12 18:17:57.034291 master-0 kubenswrapper[7337]: E0312 18:17:57.034262 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a" containerName="extract-utilities" Mar 12 18:17:57.034542 master-0 kubenswrapper[7337]: I0312 18:17:57.034483 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a" containerName="extract-utilities" Mar 12 18:17:57.034742 master-0 kubenswrapper[7337]: E0312 18:17:57.034710 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38785e6e-3052-405c-8874-4f295985def5" containerName="installer" Mar 12 18:17:57.034924 master-0 kubenswrapper[7337]: I0312 18:17:57.034895 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="38785e6e-3052-405c-8874-4f295985def5" containerName="installer" Mar 12 18:17:57.035115 master-0 kubenswrapper[7337]: E0312 18:17:57.035082 7337 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f516dab9-06d1-4bea-96b9-8f3e14543bbd" containerName="extract-utilities" Mar 12 18:17:57.035310 master-0 kubenswrapper[7337]: I0312 18:17:57.035278 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="f516dab9-06d1-4bea-96b9-8f3e14543bbd" containerName="extract-utilities" Mar 12 18:17:57.035503 master-0 kubenswrapper[7337]: E0312 18:17:57.035471 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4d20441-ec1f-4571-b590-989f2bdd4082" containerName="extract-utilities" Mar 12 18:17:57.035772 master-0 kubenswrapper[7337]: I0312 18:17:57.035738 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4d20441-ec1f-4571-b590-989f2bdd4082" containerName="extract-utilities" Mar 12 18:17:57.035967 master-0 kubenswrapper[7337]: E0312 18:17:57.035938 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f516dab9-06d1-4bea-96b9-8f3e14543bbd" containerName="extract-content" Mar 12 18:17:57.036144 master-0 kubenswrapper[7337]: I0312 18:17:57.036116 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="f516dab9-06d1-4bea-96b9-8f3e14543bbd" containerName="extract-content" Mar 12 18:17:57.036332 master-0 kubenswrapper[7337]: E0312 18:17:57.036303 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7542f3f1-23fe-41df-99b9-4324c75d35b7" containerName="installer" Mar 12 18:17:57.036511 master-0 kubenswrapper[7337]: I0312 18:17:57.036485 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="7542f3f1-23fe-41df-99b9-4324c75d35b7" containerName="installer" Mar 12 18:17:57.036744 master-0 kubenswrapper[7337]: E0312 18:17:57.036715 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e418d797-2c31-404b-9dc3-251399e42542" containerName="installer" Mar 12 18:17:57.036927 master-0 kubenswrapper[7337]: I0312 18:17:57.036900 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="e418d797-2c31-404b-9dc3-251399e42542" containerName="installer" Mar 12 18:17:57.037117 master-0 
kubenswrapper[7337]: E0312 18:17:57.037090 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cd18201-afdc-4229-972e-ab01adb2a7f3" containerName="installer" Mar 12 18:17:57.037277 master-0 kubenswrapper[7337]: I0312 18:17:57.037253 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cd18201-afdc-4229-972e-ab01adb2a7f3" containerName="installer" Mar 12 18:17:57.037437 master-0 kubenswrapper[7337]: E0312 18:17:57.037411 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2331fc4b-e67b-4496-8cae-15cd11cf3030" containerName="extract-content" Mar 12 18:17:57.037641 master-0 kubenswrapper[7337]: I0312 18:17:57.037610 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="2331fc4b-e67b-4496-8cae-15cd11cf3030" containerName="extract-content" Mar 12 18:17:57.038099 master-0 kubenswrapper[7337]: I0312 18:17:57.038060 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cd18201-afdc-4229-972e-ab01adb2a7f3" containerName="installer" Mar 12 18:17:57.038302 master-0 kubenswrapper[7337]: I0312 18:17:57.038272 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4d20441-ec1f-4571-b590-989f2bdd4082" containerName="extract-content" Mar 12 18:17:57.038563 master-0 kubenswrapper[7337]: I0312 18:17:57.038501 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="7542f3f1-23fe-41df-99b9-4324c75d35b7" containerName="installer" Mar 12 18:17:57.038760 master-0 kubenswrapper[7337]: I0312 18:17:57.038730 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="e418d797-2c31-404b-9dc3-251399e42542" containerName="installer" Mar 12 18:17:57.038939 master-0 kubenswrapper[7337]: I0312 18:17:57.038911 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="9813ec3a-943f-4f29-b2dc-9a92ac3c0d4a" containerName="extract-content" Mar 12 18:17:57.039198 master-0 kubenswrapper[7337]: I0312 18:17:57.039166 7337 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="38785e6e-3052-405c-8874-4f295985def5" containerName="installer" Mar 12 18:17:57.039391 master-0 kubenswrapper[7337]: I0312 18:17:57.039360 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="f516dab9-06d1-4bea-96b9-8f3e14543bbd" containerName="extract-content" Mar 12 18:17:57.039615 master-0 kubenswrapper[7337]: I0312 18:17:57.039584 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="2331fc4b-e67b-4496-8cae-15cd11cf3030" containerName="extract-content" Mar 12 18:17:57.041429 master-0 kubenswrapper[7337]: I0312 18:17:57.041386 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:17:57.044881 master-0 kubenswrapper[7337]: I0312 18:17:57.044843 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-gzn76" Mar 12 18:17:57.061179 master-0 kubenswrapper[7337]: I0312 18:17:57.061118 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d5tcw"] Mar 12 18:17:57.119721 master-0 kubenswrapper[7337]: I0312 18:17:57.119662 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3e5b8c8-a100-4880-a0b9-9c3989d4e739-catalog-content\") pod \"redhat-operators-d5tcw\" (UID: \"d3e5b8c8-a100-4880-a0b9-9c3989d4e739\") " pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:17:57.120099 master-0 kubenswrapper[7337]: I0312 18:17:57.120076 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrg6p\" (UniqueName: \"kubernetes.io/projected/d3e5b8c8-a100-4880-a0b9-9c3989d4e739-kube-api-access-jrg6p\") pod \"redhat-operators-d5tcw\" (UID: \"d3e5b8c8-a100-4880-a0b9-9c3989d4e739\") " pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:17:57.120276 master-0 kubenswrapper[7337]: 
I0312 18:17:57.120252 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3e5b8c8-a100-4880-a0b9-9c3989d4e739-utilities\") pod \"redhat-operators-d5tcw\" (UID: \"d3e5b8c8-a100-4880-a0b9-9c3989d4e739\") " pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:17:57.221420 master-0 kubenswrapper[7337]: I0312 18:17:57.221365 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3e5b8c8-a100-4880-a0b9-9c3989d4e739-utilities\") pod \"redhat-operators-d5tcw\" (UID: \"d3e5b8c8-a100-4880-a0b9-9c3989d4e739\") " pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:17:57.221420 master-0 kubenswrapper[7337]: I0312 18:17:57.221421 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3e5b8c8-a100-4880-a0b9-9c3989d4e739-catalog-content\") pod \"redhat-operators-d5tcw\" (UID: \"d3e5b8c8-a100-4880-a0b9-9c3989d4e739\") " pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:17:57.221653 master-0 kubenswrapper[7337]: I0312 18:17:57.221469 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrg6p\" (UniqueName: \"kubernetes.io/projected/d3e5b8c8-a100-4880-a0b9-9c3989d4e739-kube-api-access-jrg6p\") pod \"redhat-operators-d5tcw\" (UID: \"d3e5b8c8-a100-4880-a0b9-9c3989d4e739\") " pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:17:57.222157 master-0 kubenswrapper[7337]: I0312 18:17:57.222136 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3e5b8c8-a100-4880-a0b9-9c3989d4e739-utilities\") pod \"redhat-operators-d5tcw\" (UID: \"d3e5b8c8-a100-4880-a0b9-9c3989d4e739\") " pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:17:57.222259 master-0 
kubenswrapper[7337]: I0312 18:17:57.222148 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3e5b8c8-a100-4880-a0b9-9c3989d4e739-catalog-content\") pod \"redhat-operators-d5tcw\" (UID: \"d3e5b8c8-a100-4880-a0b9-9c3989d4e739\") " pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:17:57.236412 master-0 kubenswrapper[7337]: I0312 18:17:57.236364 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrg6p\" (UniqueName: \"kubernetes.io/projected/d3e5b8c8-a100-4880-a0b9-9c3989d4e739-kube-api-access-jrg6p\") pod \"redhat-operators-d5tcw\" (UID: \"d3e5b8c8-a100-4880-a0b9-9c3989d4e739\") " pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:17:57.391585 master-0 kubenswrapper[7337]: I0312 18:17:57.391484 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:17:57.794068 master-0 kubenswrapper[7337]: I0312 18:17:57.794023 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d5tcw"] Mar 12 18:17:57.802380 master-0 kubenswrapper[7337]: W0312 18:17:57.802340 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3e5b8c8_a100_4880_a0b9_9c3989d4e739.slice/crio-9f552493910bdb73df860ba3e68ba62d10417ad2cd26090e67bcb0c06153f976 WatchSource:0}: Error finding container 9f552493910bdb73df860ba3e68ba62d10417ad2cd26090e67bcb0c06153f976: Status 404 returned error can't find the container with id 9f552493910bdb73df860ba3e68ba62d10417ad2cd26090e67bcb0c06153f976 Mar 12 18:17:58.341752 master-0 kubenswrapper[7337]: I0312 18:17:58.341695 7337 generic.go:334] "Generic (PLEG): container finished" podID="d3e5b8c8-a100-4880-a0b9-9c3989d4e739" containerID="67222c5dd6dc84922f3b31521e73c46da015094341322a76fa955a30881504a6" exitCode=0 Mar 12 18:17:58.343135 
master-0 kubenswrapper[7337]: I0312 18:17:58.341757 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5tcw" event={"ID":"d3e5b8c8-a100-4880-a0b9-9c3989d4e739","Type":"ContainerDied","Data":"67222c5dd6dc84922f3b31521e73c46da015094341322a76fa955a30881504a6"} Mar 12 18:17:58.343135 master-0 kubenswrapper[7337]: I0312 18:17:58.341801 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5tcw" event={"ID":"d3e5b8c8-a100-4880-a0b9-9c3989d4e739","Type":"ContainerStarted","Data":"9f552493910bdb73df860ba3e68ba62d10417ad2cd26090e67bcb0c06153f976"} Mar 12 18:17:58.958017 master-0 kubenswrapper[7337]: I0312 18:17:58.957939 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2"] Mar 12 18:17:58.959205 master-0 kubenswrapper[7337]: I0312 18:17:58.959159 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" Mar 12 18:17:58.960977 master-0 kubenswrapper[7337]: I0312 18:17:58.960923 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-8762n"] Mar 12 18:17:58.961807 master-0 kubenswrapper[7337]: I0312 18:17:58.961768 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-9f7ld" Mar 12 18:17:58.962016 master-0 kubenswrapper[7337]: I0312 18:17:58.961989 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 12 18:17:58.962321 master-0 kubenswrapper[7337]: I0312 18:17:58.962294 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 12 18:17:58.964290 master-0 kubenswrapper[7337]: I0312 18:17:58.962502 7337 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 12 18:17:58.964290 master-0 kubenswrapper[7337]: I0312 18:17:58.963096 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 12 18:17:58.964290 master-0 kubenswrapper[7337]: I0312 18:17:58.963208 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 12 18:17:58.964290 master-0 kubenswrapper[7337]: I0312 18:17:58.962944 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-8762n" Mar 12 18:17:58.965371 master-0 kubenswrapper[7337]: I0312 18:17:58.965334 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 12 18:17:58.965459 master-0 kubenswrapper[7337]: I0312 18:17:58.965398 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 12 18:17:58.966454 master-0 kubenswrapper[7337]: I0312 18:17:58.965527 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-hsjbb" Mar 12 18:17:58.966454 master-0 kubenswrapper[7337]: I0312 18:17:58.965340 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 12 18:17:58.979611 master-0 kubenswrapper[7337]: I0312 18:17:58.978665 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr"] Mar 12 18:17:58.980007 master-0 kubenswrapper[7337]: I0312 18:17:58.979970 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr" Mar 12 18:17:58.980137 master-0 kubenswrapper[7337]: I0312 18:17:58.980090 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-8762n"] Mar 12 18:17:58.982630 master-0 kubenswrapper[7337]: I0312 18:17:58.982599 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 12 18:17:58.983016 master-0 kubenswrapper[7337]: I0312 18:17:58.982993 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-8275t" Mar 12 18:17:58.983610 master-0 kubenswrapper[7337]: I0312 18:17:58.983588 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 12 18:17:58.984108 master-0 kubenswrapper[7337]: I0312 18:17:58.984086 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 12 18:17:58.986702 master-0 kubenswrapper[7337]: I0312 18:17:58.986676 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 12 18:17:59.033578 master-0 kubenswrapper[7337]: I0312 18:17:59.032823 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr"] Mar 12 18:17:59.050144 master-0 kubenswrapper[7337]: I0312 18:17:59.050105 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-vk7lr\" (UID: \"b2c6cd11-b1ed-4fed-a4ce-4eee0af20868\") " 
pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr" Mar 12 18:17:59.050389 master-0 kubenswrapper[7337]: I0312 18:17:59.050373 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6gq4\" (UniqueName: \"kubernetes.io/projected/faf32e9b-b44a-45dc-97b3-ec3e753e1345-kube-api-access-n6gq4\") pod \"machine-approver-955fcfb87-dt8x2\" (UID: \"faf32e9b-b44a-45dc-97b3-ec3e753e1345\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" Mar 12 18:17:59.050506 master-0 kubenswrapper[7337]: I0312 18:17:59.050483 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0fb78c61-2051-42e2-8668-fa7404ccac43-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-8762n\" (UID: \"0fb78c61-2051-42e2-8668-fa7404ccac43\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-8762n" Mar 12 18:17:59.050620 master-0 kubenswrapper[7337]: I0312 18:17:59.050607 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsdjs\" (UniqueName: \"kubernetes.io/projected/0fb78c61-2051-42e2-8668-fa7404ccac43-kube-api-access-zsdjs\") pod \"cluster-samples-operator-664cb58b85-8762n\" (UID: \"0fb78c61-2051-42e2-8668-fa7404ccac43\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-8762n" Mar 12 18:17:59.050704 master-0 kubenswrapper[7337]: I0312 18:17:59.050687 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-vk7lr\" (UID: \"b2c6cd11-b1ed-4fed-a4ce-4eee0af20868\") " 
pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr" Mar 12 18:17:59.050779 master-0 kubenswrapper[7337]: I0312 18:17:59.050768 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md9dt\" (UniqueName: \"kubernetes.io/projected/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-kube-api-access-md9dt\") pod \"cloud-credential-operator-55d85b7b47-vk7lr\" (UID: \"b2c6cd11-b1ed-4fed-a4ce-4eee0af20868\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr" Mar 12 18:17:59.052189 master-0 kubenswrapper[7337]: I0312 18:17:59.050838 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/faf32e9b-b44a-45dc-97b3-ec3e753e1345-machine-approver-tls\") pod \"machine-approver-955fcfb87-dt8x2\" (UID: \"faf32e9b-b44a-45dc-97b3-ec3e753e1345\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" Mar 12 18:17:59.052318 master-0 kubenswrapper[7337]: I0312 18:17:59.052302 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faf32e9b-b44a-45dc-97b3-ec3e753e1345-config\") pod \"machine-approver-955fcfb87-dt8x2\" (UID: \"faf32e9b-b44a-45dc-97b3-ec3e753e1345\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" Mar 12 18:17:59.052408 master-0 kubenswrapper[7337]: I0312 18:17:59.052394 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/faf32e9b-b44a-45dc-97b3-ec3e753e1345-auth-proxy-config\") pod \"machine-approver-955fcfb87-dt8x2\" (UID: \"faf32e9b-b44a-45dc-97b3-ec3e753e1345\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" Mar 12 18:17:59.078001 master-0 kubenswrapper[7337]: I0312 
18:17:59.077966 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c"] Mar 12 18:17:59.080168 master-0 kubenswrapper[7337]: I0312 18:17:59.080138 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-m6z6d"] Mar 12 18:17:59.081008 master-0 kubenswrapper[7337]: I0312 18:17:59.080987 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:17:59.085623 master-0 kubenswrapper[7337]: I0312 18:17:59.083499 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" Mar 12 18:17:59.087040 master-0 kubenswrapper[7337]: I0312 18:17:59.087011 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 12 18:17:59.087248 master-0 kubenswrapper[7337]: I0312 18:17:59.087113 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 18:17:59.087475 master-0 kubenswrapper[7337]: I0312 18:17:59.087459 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 12 18:17:59.087782 master-0 kubenswrapper[7337]: I0312 18:17:59.087765 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-72pgx" Mar 12 18:17:59.088065 master-0 kubenswrapper[7337]: I0312 18:17:59.088049 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 12 18:17:59.088403 master-0 kubenswrapper[7337]: I0312 18:17:59.088345 7337 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 12 18:17:59.095053 master-0 kubenswrapper[7337]: I0312 18:17:59.090644 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 12 18:17:59.095053 master-0 kubenswrapper[7337]: I0312 18:17:59.091252 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 12 18:17:59.095053 master-0 kubenswrapper[7337]: I0312 18:17:59.091694 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-q4h9m" Mar 12 18:17:59.095053 master-0 kubenswrapper[7337]: I0312 18:17:59.092203 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 12 18:17:59.095053 master-0 kubenswrapper[7337]: I0312 18:17:59.094329 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 12 18:17:59.095486 master-0 kubenswrapper[7337]: I0312 18:17:59.095181 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-88gzm"] Mar 12 18:17:59.096062 master-0 kubenswrapper[7337]: I0312 18:17:59.095854 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-88gzm" Mar 12 18:17:59.099013 master-0 kubenswrapper[7337]: I0312 18:17:59.098976 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-sjkl7" Mar 12 18:17:59.099013 master-0 kubenswrapper[7337]: I0312 18:17:59.098999 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 12 18:17:59.105563 master-0 kubenswrapper[7337]: I0312 18:17:59.104209 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-m6z6d"] Mar 12 18:17:59.107911 master-0 kubenswrapper[7337]: I0312 18:17:59.105914 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 12 18:17:59.109156 master-0 kubenswrapper[7337]: I0312 18:17:59.109117 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-88gzm"] Mar 12 18:17:59.152305 master-0 kubenswrapper[7337]: I0312 18:17:59.152231 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-dq2gs"] Mar 12 18:17:59.156546 master-0 kubenswrapper[7337]: I0312 18:17:59.153352 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-7769569c45-dq2gs" Mar 12 18:17:59.156546 master-0 kubenswrapper[7337]: I0312 18:17:59.154089 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd"] Mar 12 18:17:59.156546 master-0 kubenswrapper[7337]: I0312 18:17:59.154610 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clsd9\" (UniqueName: \"kubernetes.io/projected/f5e09875-4445-4584-94f0-243148307bb0-kube-api-access-clsd9\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:17:59.156546 master-0 kubenswrapper[7337]: I0312 18:17:59.154653 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-vk7lr\" (UID: \"b2c6cd11-b1ed-4fed-a4ce-4eee0af20868\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr" Mar 12 18:17:59.156546 master-0 kubenswrapper[7337]: I0312 18:17:59.154685 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37392bec-4d79-4a65-bc41-6708d9edab46-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-n426c\" (UID: \"37392bec-4d79-4a65-bc41-6708d9edab46\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" Mar 12 18:17:59.156546 master-0 kubenswrapper[7337]: I0312 18:17:59.154715 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/37392bec-4d79-4a65-bc41-6708d9edab46-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-n426c\" (UID: \"37392bec-4d79-4a65-bc41-6708d9edab46\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" Mar 12 18:17:59.156546 master-0 kubenswrapper[7337]: I0312 18:17:59.154747 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md9dt\" (UniqueName: \"kubernetes.io/projected/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-kube-api-access-md9dt\") pod \"cloud-credential-operator-55d85b7b47-vk7lr\" (UID: \"b2c6cd11-b1ed-4fed-a4ce-4eee0af20868\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr" Mar 12 18:17:59.156546 master-0 kubenswrapper[7337]: I0312 18:17:59.154774 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/37392bec-4d79-4a65-bc41-6708d9edab46-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-n426c\" (UID: \"37392bec-4d79-4a65-bc41-6708d9edab46\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" Mar 12 18:17:59.156546 master-0 kubenswrapper[7337]: I0312 18:17:59.156383 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" Mar 12 18:17:59.160694 master-0 kubenswrapper[7337]: I0312 18:17:59.159448 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-hm292" Mar 12 18:17:59.164287 master-0 kubenswrapper[7337]: I0312 18:17:59.164234 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 12 18:17:59.165440 master-0 kubenswrapper[7337]: I0312 18:17:59.165413 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-cc9lz" Mar 12 18:17:59.165617 master-0 kubenswrapper[7337]: I0312 18:17:59.165572 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37392bec-4d79-4a65-bc41-6708d9edab46-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-n426c\" (UID: \"37392bec-4d79-4a65-bc41-6708d9edab46\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" Mar 12 18:17:59.165695 master-0 kubenswrapper[7337]: I0312 18:17:59.165670 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/faf32e9b-b44a-45dc-97b3-ec3e753e1345-machine-approver-tls\") pod \"machine-approver-955fcfb87-dt8x2\" (UID: \"faf32e9b-b44a-45dc-97b3-ec3e753e1345\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" Mar 12 18:17:59.165695 master-0 kubenswrapper[7337]: I0312 18:17:59.165693 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 12 18:17:59.165771 master-0 kubenswrapper[7337]: I0312 18:17:59.165729 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/faf32e9b-b44a-45dc-97b3-ec3e753e1345-config\") pod \"machine-approver-955fcfb87-dt8x2\" (UID: \"faf32e9b-b44a-45dc-97b3-ec3e753e1345\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" Mar 12 18:17:59.165771 master-0 kubenswrapper[7337]: I0312 18:17:59.165762 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5e09875-4445-4584-94f0-243148307bb0-serving-cert\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:17:59.165833 master-0 kubenswrapper[7337]: I0312 18:17:59.165812 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/faf32e9b-b44a-45dc-97b3-ec3e753e1345-auth-proxy-config\") pod \"machine-approver-955fcfb87-dt8x2\" (UID: \"faf32e9b-b44a-45dc-97b3-ec3e753e1345\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" Mar 12 18:17:59.165864 master-0 kubenswrapper[7337]: I0312 18:17:59.165843 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjz8k\" (UniqueName: \"kubernetes.io/projected/1287cbb9-c9f6-48d2-9fda-f4464074e41b-kube-api-access-hjz8k\") pod \"cluster-storage-operator-6fbfc8dc8f-88gzm\" (UID: \"1287cbb9-c9f6-48d2-9fda-f4464074e41b\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-88gzm" Mar 12 18:17:59.165940 master-0 kubenswrapper[7337]: I0312 18:17:59.165916 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcgtr\" (UniqueName: \"kubernetes.io/projected/37392bec-4d79-4a65-bc41-6708d9edab46-kube-api-access-pcgtr\") pod \"cluster-cloud-controller-manager-operator-559568b945-n426c\" (UID: 
\"37392bec-4d79-4a65-bc41-6708d9edab46\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" Mar 12 18:17:59.166006 master-0 kubenswrapper[7337]: I0312 18:17:59.165993 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 12 18:17:59.166270 master-0 kubenswrapper[7337]: I0312 18:17:59.166251 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-vk7lr\" (UID: \"b2c6cd11-b1ed-4fed-a4ce-4eee0af20868\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr" Mar 12 18:17:59.166381 master-0 kubenswrapper[7337]: I0312 18:17:59.166363 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5e09875-4445-4584-94f0-243148307bb0-service-ca-bundle\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:17:59.166549 master-0 kubenswrapper[7337]: I0312 18:17:59.166536 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6gq4\" (UniqueName: \"kubernetes.io/projected/faf32e9b-b44a-45dc-97b3-ec3e753e1345-kube-api-access-n6gq4\") pod \"machine-approver-955fcfb87-dt8x2\" (UID: \"faf32e9b-b44a-45dc-97b3-ec3e753e1345\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" Mar 12 18:17:59.166655 master-0 kubenswrapper[7337]: I0312 18:17:59.166642 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0fb78c61-2051-42e2-8668-fa7404ccac43-samples-operator-tls\") pod 
\"cluster-samples-operator-664cb58b85-8762n\" (UID: \"0fb78c61-2051-42e2-8668-fa7404ccac43\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-8762n" Mar 12 18:17:59.166753 master-0 kubenswrapper[7337]: I0312 18:17:59.166740 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5e09875-4445-4584-94f0-243148307bb0-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:17:59.166863 master-0 kubenswrapper[7337]: I0312 18:17:59.166850 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1287cbb9-c9f6-48d2-9fda-f4464074e41b-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-88gzm\" (UID: \"1287cbb9-c9f6-48d2-9fda-f4464074e41b\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-88gzm" Mar 12 18:17:59.166970 master-0 kubenswrapper[7337]: I0312 18:17:59.166956 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsdjs\" (UniqueName: \"kubernetes.io/projected/0fb78c61-2051-42e2-8668-fa7404ccac43-kube-api-access-zsdjs\") pod \"cluster-samples-operator-664cb58b85-8762n\" (UID: \"0fb78c61-2051-42e2-8668-fa7404ccac43\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-8762n" Mar 12 18:17:59.167055 master-0 kubenswrapper[7337]: I0312 18:17:59.166764 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/faf32e9b-b44a-45dc-97b3-ec3e753e1345-auth-proxy-config\") pod \"machine-approver-955fcfb87-dt8x2\" (UID: \"faf32e9b-b44a-45dc-97b3-ec3e753e1345\") " 
pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" Mar 12 18:17:59.167138 master-0 kubenswrapper[7337]: I0312 18:17:59.167056 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/f5e09875-4445-4584-94f0-243148307bb0-snapshots\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:17:59.167310 master-0 kubenswrapper[7337]: I0312 18:17:59.166549 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faf32e9b-b44a-45dc-97b3-ec3e753e1345-config\") pod \"machine-approver-955fcfb87-dt8x2\" (UID: \"faf32e9b-b44a-45dc-97b3-ec3e753e1345\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" Mar 12 18:17:59.167438 master-0 kubenswrapper[7337]: I0312 18:17:59.167406 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-vk7lr\" (UID: \"b2c6cd11-b1ed-4fed-a4ce-4eee0af20868\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr" Mar 12 18:17:59.169560 master-0 kubenswrapper[7337]: I0312 18:17:59.169528 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/faf32e9b-b44a-45dc-97b3-ec3e753e1345-machine-approver-tls\") pod \"machine-approver-955fcfb87-dt8x2\" (UID: \"faf32e9b-b44a-45dc-97b3-ec3e753e1345\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" Mar 12 18:17:59.169902 master-0 kubenswrapper[7337]: I0312 18:17:59.169779 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/0fb78c61-2051-42e2-8668-fa7404ccac43-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-8762n\" (UID: \"0fb78c61-2051-42e2-8668-fa7404ccac43\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-8762n" Mar 12 18:17:59.169902 master-0 kubenswrapper[7337]: I0312 18:17:59.169841 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq"] Mar 12 18:17:59.174189 master-0 kubenswrapper[7337]: I0312 18:17:59.170958 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq" Mar 12 18:17:59.174189 master-0 kubenswrapper[7337]: I0312 18:17:59.172627 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg"] Mar 12 18:17:59.174189 master-0 kubenswrapper[7337]: I0312 18:17:59.173187 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-vk7lr\" (UID: \"b2c6cd11-b1ed-4fed-a4ce-4eee0af20868\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr" Mar 12 18:17:59.174189 master-0 kubenswrapper[7337]: I0312 18:17:59.173587 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 12 18:17:59.174189 master-0 kubenswrapper[7337]: I0312 18:17:59.173782 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 12 18:17:59.174189 master-0 kubenswrapper[7337]: I0312 18:17:59.173904 7337 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-6gh5d" Mar 12 18:17:59.174189 master-0 kubenswrapper[7337]: I0312 18:17:59.174128 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" Mar 12 18:17:59.178613 master-0 kubenswrapper[7337]: I0312 18:17:59.178561 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 12 18:17:59.178809 master-0 kubenswrapper[7337]: I0312 18:17:59.178729 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-nv88b" Mar 12 18:17:59.178867 master-0 kubenswrapper[7337]: I0312 18:17:59.178839 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 12 18:17:59.180092 master-0 kubenswrapper[7337]: I0312 18:17:59.178979 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 12 18:17:59.180092 master-0 kubenswrapper[7337]: I0312 18:17:59.179145 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 12 18:17:59.180092 master-0 kubenswrapper[7337]: I0312 18:17:59.179337 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 12 18:17:59.183661 master-0 kubenswrapper[7337]: I0312 18:17:59.183604 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb"] Mar 12 18:17:59.184750 master-0 kubenswrapper[7337]: I0312 18:17:59.184734 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:17:59.192488 master-0 kubenswrapper[7337]: I0312 18:17:59.192438 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-ssqhn" Mar 12 18:17:59.192782 master-0 kubenswrapper[7337]: I0312 18:17:59.192720 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 12 18:17:59.192871 master-0 kubenswrapper[7337]: I0312 18:17:59.192853 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 12 18:17:59.192976 master-0 kubenswrapper[7337]: I0312 18:17:59.192958 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 12 18:17:59.193115 master-0 kubenswrapper[7337]: I0312 18:17:59.193094 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 12 18:17:59.198689 master-0 kubenswrapper[7337]: I0312 18:17:59.198655 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg"] Mar 12 18:17:59.202240 master-0 kubenswrapper[7337]: I0312 18:17:59.201245 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-dq2gs"] Mar 12 18:17:59.204933 master-0 kubenswrapper[7337]: I0312 18:17:59.203106 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq"] Mar 12 18:17:59.216276 master-0 kubenswrapper[7337]: I0312 18:17:59.216225 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb"] Mar 12 18:17:59.219910 master-0 kubenswrapper[7337]: I0312 
18:17:59.219808 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd"] Mar 12 18:17:59.239658 master-0 kubenswrapper[7337]: I0312 18:17:59.230807 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsdjs\" (UniqueName: \"kubernetes.io/projected/0fb78c61-2051-42e2-8668-fa7404ccac43-kube-api-access-zsdjs\") pod \"cluster-samples-operator-664cb58b85-8762n\" (UID: \"0fb78c61-2051-42e2-8668-fa7404ccac43\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-8762n" Mar 12 18:17:59.240037 master-0 kubenswrapper[7337]: I0312 18:17:59.232970 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6gq4\" (UniqueName: \"kubernetes.io/projected/faf32e9b-b44a-45dc-97b3-ec3e753e1345-kube-api-access-n6gq4\") pod \"machine-approver-955fcfb87-dt8x2\" (UID: \"faf32e9b-b44a-45dc-97b3-ec3e753e1345\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" Mar 12 18:17:59.242124 master-0 kubenswrapper[7337]: I0312 18:17:59.242101 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md9dt\" (UniqueName: \"kubernetes.io/projected/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-kube-api-access-md9dt\") pod \"cloud-credential-operator-55d85b7b47-vk7lr\" (UID: \"b2c6cd11-b1ed-4fed-a4ce-4eee0af20868\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr" Mar 12 18:17:59.253576 master-0 kubenswrapper[7337]: I0312 18:17:59.252625 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc"] Mar 12 18:17:59.253576 master-0 kubenswrapper[7337]: I0312 18:17:59.253365 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:17:59.255139 master-0 kubenswrapper[7337]: I0312 18:17:59.255104 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-g4mx9" Mar 12 18:17:59.255330 master-0 kubenswrapper[7337]: I0312 18:17:59.255316 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.268451 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5e09875-4445-4584-94f0-243148307bb0-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.268503 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1287cbb9-c9f6-48d2-9fda-f4464074e41b-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-88gzm\" (UID: \"1287cbb9-c9f6-48d2-9fda-f4464074e41b\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-88gzm" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.268556 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/f5e09875-4445-4584-94f0-243148307bb0-snapshots\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.268575 7337 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-clsd9\" (UniqueName: \"kubernetes.io/projected/f5e09875-4445-4584-94f0-243148307bb0-kube-api-access-clsd9\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.268593 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37392bec-4d79-4a65-bc41-6708d9edab46-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-n426c\" (UID: \"37392bec-4d79-4a65-bc41-6708d9edab46\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.268612 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/604044f4-9b0b-4747-827d-843f3cfa7077-images\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: \"604044f4-9b0b-4747-827d-843f3cfa7077\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.268635 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/37392bec-4d79-4a65-bc41-6708d9edab46-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-n426c\" (UID: \"37392bec-4d79-4a65-bc41-6708d9edab46\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.268656 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/e5fb0152-3efd-4000-bce3-fa90b75316ae-images\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.268674 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/37392bec-4d79-4a65-bc41-6708d9edab46-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-n426c\" (UID: \"37392bec-4d79-4a65-bc41-6708d9edab46\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.269104 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37392bec-4d79-4a65-bc41-6708d9edab46-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-n426c\" (UID: \"37392bec-4d79-4a65-bc41-6708d9edab46\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.269162 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aee40f88-83e4-45c8-8331-969943f9f9aa-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-hkfnq\" (UID: \"aee40f88-83e4-45c8-8331-969943f9f9aa\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.269213 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6595\" (UniqueName: 
\"kubernetes.io/projected/25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327-kube-api-access-x6595\") pod \"multus-admission-controller-7769569c45-dq2gs\" (UID: \"25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327\") " pod="openshift-multus/multus-admission-controller-7769569c45-dq2gs" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.269245 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5fb0152-3efd-4000-bce3-fa90b75316ae-config\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.269284 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5e09875-4445-4584-94f0-243148307bb0-serving-cert\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.269334 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjz8k\" (UniqueName: \"kubernetes.io/projected/1287cbb9-c9f6-48d2-9fda-f4464074e41b-kube-api-access-hjz8k\") pod \"cluster-storage-operator-6fbfc8dc8f-88gzm\" (UID: \"1287cbb9-c9f6-48d2-9fda-f4464074e41b\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-88gzm" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.269369 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aee40f88-83e4-45c8-8331-969943f9f9aa-cert\") pod \"cluster-autoscaler-operator-69576476f7-hkfnq\" (UID: \"aee40f88-83e4-45c8-8331-969943f9f9aa\") " 
pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.269406 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4687cf53-55d7-42b7-b24d-e57da3989fd6-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.269435 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4687cf53-55d7-42b7-b24d-e57da3989fd6-config\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.269446 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37392bec-4d79-4a65-bc41-6708d9edab46-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-n426c\" (UID: \"37392bec-4d79-4a65-bc41-6708d9edab46\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.269479 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcgtr\" (UniqueName: \"kubernetes.io/projected/37392bec-4d79-4a65-bc41-6708d9edab46-kube-api-access-pcgtr\") pod \"cluster-cloud-controller-manager-operator-559568b945-n426c\" (UID: \"37392bec-4d79-4a65-bc41-6708d9edab46\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" Mar 12 18:17:59.275360 master-0 
kubenswrapper[7337]: I0312 18:17:59.269554 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5e09875-4445-4584-94f0-243148307bb0-service-ca-bundle\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.269582 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th72r\" (UniqueName: \"kubernetes.io/projected/aee40f88-83e4-45c8-8331-969943f9f9aa-kube-api-access-th72r\") pod \"cluster-autoscaler-operator-69576476f7-hkfnq\" (UID: \"aee40f88-83e4-45c8-8331-969943f9f9aa\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.269605 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/f5e09875-4445-4584-94f0-243148307bb0-snapshots\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.269609 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68xhl\" (UniqueName: \"kubernetes.io/projected/4687cf53-55d7-42b7-b24d-e57da3989fd6-kube-api-access-68xhl\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.269985 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/37392bec-4d79-4a65-bc41-6708d9edab46-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-n426c\" (UID: \"37392bec-4d79-4a65-bc41-6708d9edab46\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.270005 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/37392bec-4d79-4a65-bc41-6708d9edab46-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-n426c\" (UID: \"37392bec-4d79-4a65-bc41-6708d9edab46\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.270266 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e5fb0152-3efd-4000-bce3-fa90b75316ae-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.270319 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327-webhook-certs\") pod \"multus-admission-controller-7769569c45-dq2gs\" (UID: \"25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327\") " pod="openshift-multus/multus-admission-controller-7769569c45-dq2gs" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.270353 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5fb0152-3efd-4000-bce3-fa90b75316ae-cluster-baremetal-operator-tls\") pod 
\"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.270400 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/604044f4-9b0b-4747-827d-843f3cfa7077-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: \"604044f4-9b0b-4747-827d-843f3cfa7077\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.270425 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqzmm\" (UniqueName: \"kubernetes.io/projected/604044f4-9b0b-4747-827d-843f3cfa7077-kube-api-access-fqzmm\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: \"604044f4-9b0b-4747-827d-843f3cfa7077\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.270460 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4687cf53-55d7-42b7-b24d-e57da3989fd6-images\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.270484 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkftr\" (UniqueName: \"kubernetes.io/projected/e5fb0152-3efd-4000-bce3-fa90b75316ae-kube-api-access-pkftr\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " 
pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.270543 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/604044f4-9b0b-4747-827d-843f3cfa7077-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: \"604044f4-9b0b-4747-827d-843f3cfa7077\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.270643 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5e09875-4445-4584-94f0-243148307bb0-service-ca-bundle\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.271098 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5e09875-4445-4584-94f0-243148307bb0-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.274055 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/37392bec-4d79-4a65-bc41-6708d9edab46-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-n426c\" (UID: \"37392bec-4d79-4a65-bc41-6708d9edab46\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" Mar 12 18:17:59.275360 master-0 kubenswrapper[7337]: I0312 18:17:59.275023 7337 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1287cbb9-c9f6-48d2-9fda-f4464074e41b-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-88gzm\" (UID: \"1287cbb9-c9f6-48d2-9fda-f4464074e41b\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-88gzm" Mar 12 18:17:59.283136 master-0 kubenswrapper[7337]: I0312 18:17:59.278617 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc"] Mar 12 18:17:59.300543 master-0 kubenswrapper[7337]: I0312 18:17:59.293326 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5e09875-4445-4584-94f0-243148307bb0-serving-cert\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:17:59.300543 master-0 kubenswrapper[7337]: I0312 18:17:59.295504 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clsd9\" (UniqueName: \"kubernetes.io/projected/f5e09875-4445-4584-94f0-243148307bb0-kube-api-access-clsd9\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:17:59.300543 master-0 kubenswrapper[7337]: I0312 18:17:59.295880 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcgtr\" (UniqueName: \"kubernetes.io/projected/37392bec-4d79-4a65-bc41-6708d9edab46-kube-api-access-pcgtr\") pod \"cluster-cloud-controller-manager-operator-559568b945-n426c\" (UID: \"37392bec-4d79-4a65-bc41-6708d9edab46\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" Mar 12 18:17:59.300543 master-0 kubenswrapper[7337]: 
I0312 18:17:59.298884 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjz8k\" (UniqueName: \"kubernetes.io/projected/1287cbb9-c9f6-48d2-9fda-f4464074e41b-kube-api-access-hjz8k\") pod \"cluster-storage-operator-6fbfc8dc8f-88gzm\" (UID: \"1287cbb9-c9f6-48d2-9fda-f4464074e41b\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-88gzm" Mar 12 18:17:59.351648 master-0 kubenswrapper[7337]: I0312 18:17:59.350662 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" Mar 12 18:17:59.355650 master-0 kubenswrapper[7337]: I0312 18:17:59.355613 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5tcw" event={"ID":"d3e5b8c8-a100-4880-a0b9-9c3989d4e739","Type":"ContainerStarted","Data":"a27087137da15f60f89c47c0f62e286ef1e4ec7252189f88f560af42271ffe59"} Mar 12 18:17:59.369983 master-0 kubenswrapper[7337]: I0312 18:17:59.369944 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-8762n" Mar 12 18:17:59.372071 master-0 kubenswrapper[7337]: I0312 18:17:59.371619 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aee40f88-83e4-45c8-8331-969943f9f9aa-cert\") pod \"cluster-autoscaler-operator-69576476f7-hkfnq\" (UID: \"aee40f88-83e4-45c8-8331-969943f9f9aa\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq" Mar 12 18:17:59.372071 master-0 kubenswrapper[7337]: I0312 18:17:59.371669 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4687cf53-55d7-42b7-b24d-e57da3989fd6-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" Mar 12 18:17:59.372071 master-0 kubenswrapper[7337]: I0312 18:17:59.371702 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4687cf53-55d7-42b7-b24d-e57da3989fd6-config\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" Mar 12 18:17:59.372071 master-0 kubenswrapper[7337]: I0312 18:17:59.371733 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th72r\" (UniqueName: \"kubernetes.io/projected/aee40f88-83e4-45c8-8331-969943f9f9aa-kube-api-access-th72r\") pod \"cluster-autoscaler-operator-69576476f7-hkfnq\" (UID: \"aee40f88-83e4-45c8-8331-969943f9f9aa\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq" Mar 12 18:17:59.372071 master-0 kubenswrapper[7337]: I0312 18:17:59.371760 7337 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-68xhl\" (UniqueName: \"kubernetes.io/projected/4687cf53-55d7-42b7-b24d-e57da3989fd6-kube-api-access-68xhl\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" Mar 12 18:17:59.372071 master-0 kubenswrapper[7337]: I0312 18:17:59.371784 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e5fb0152-3efd-4000-bce3-fa90b75316ae-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:17:59.372071 master-0 kubenswrapper[7337]: I0312 18:17:59.371820 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327-webhook-certs\") pod \"multus-admission-controller-7769569c45-dq2gs\" (UID: \"25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327\") " pod="openshift-multus/multus-admission-controller-7769569c45-dq2gs" Mar 12 18:17:59.372071 master-0 kubenswrapper[7337]: I0312 18:17:59.371844 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5fb0152-3efd-4000-bce3-fa90b75316ae-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:17:59.372071 master-0 kubenswrapper[7337]: I0312 18:17:59.371867 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqzmm\" (UniqueName: \"kubernetes.io/projected/604044f4-9b0b-4747-827d-843f3cfa7077-kube-api-access-fqzmm\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: 
\"604044f4-9b0b-4747-827d-843f3cfa7077\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" Mar 12 18:17:59.372071 master-0 kubenswrapper[7337]: I0312 18:17:59.371892 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/604044f4-9b0b-4747-827d-843f3cfa7077-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: \"604044f4-9b0b-4747-827d-843f3cfa7077\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" Mar 12 18:17:59.372071 master-0 kubenswrapper[7337]: I0312 18:17:59.371916 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4687cf53-55d7-42b7-b24d-e57da3989fd6-images\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" Mar 12 18:17:59.372071 master-0 kubenswrapper[7337]: I0312 18:17:59.371936 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkftr\" (UniqueName: \"kubernetes.io/projected/e5fb0152-3efd-4000-bce3-fa90b75316ae-kube-api-access-pkftr\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:17:59.372071 master-0 kubenswrapper[7337]: I0312 18:17:59.371965 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/604044f4-9b0b-4747-827d-843f3cfa7077-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: \"604044f4-9b0b-4747-827d-843f3cfa7077\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" Mar 12 18:17:59.372071 master-0 kubenswrapper[7337]: I0312 18:17:59.371994 7337 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp5gk\" (UniqueName: \"kubernetes.io/projected/b38e7fcd-8f7a-4d4f-8702-7ef205261054-kube-api-access-zp5gk\") pod \"packageserver-694648486f-f89lc\" (UID: \"b38e7fcd-8f7a-4d4f-8702-7ef205261054\") " pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:17:59.372071 master-0 kubenswrapper[7337]: I0312 18:17:59.372027 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b38e7fcd-8f7a-4d4f-8702-7ef205261054-webhook-cert\") pod \"packageserver-694648486f-f89lc\" (UID: \"b38e7fcd-8f7a-4d4f-8702-7ef205261054\") " pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:17:59.372071 master-0 kubenswrapper[7337]: I0312 18:17:59.372050 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b38e7fcd-8f7a-4d4f-8702-7ef205261054-tmpfs\") pod \"packageserver-694648486f-f89lc\" (UID: \"b38e7fcd-8f7a-4d4f-8702-7ef205261054\") " pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:17:59.372071 master-0 kubenswrapper[7337]: I0312 18:17:59.372073 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b38e7fcd-8f7a-4d4f-8702-7ef205261054-apiservice-cert\") pod \"packageserver-694648486f-f89lc\" (UID: \"b38e7fcd-8f7a-4d4f-8702-7ef205261054\") " pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:17:59.373033 master-0 kubenswrapper[7337]: I0312 18:17:59.372108 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/604044f4-9b0b-4747-827d-843f3cfa7077-images\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: 
\"604044f4-9b0b-4747-827d-843f3cfa7077\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" Mar 12 18:17:59.373033 master-0 kubenswrapper[7337]: I0312 18:17:59.372318 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e5fb0152-3efd-4000-bce3-fa90b75316ae-images\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:17:59.373033 master-0 kubenswrapper[7337]: I0312 18:17:59.372342 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aee40f88-83e4-45c8-8331-969943f9f9aa-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-hkfnq\" (UID: \"aee40f88-83e4-45c8-8331-969943f9f9aa\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq" Mar 12 18:17:59.373033 master-0 kubenswrapper[7337]: I0312 18:17:59.372371 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6595\" (UniqueName: \"kubernetes.io/projected/25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327-kube-api-access-x6595\") pod \"multus-admission-controller-7769569c45-dq2gs\" (UID: \"25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327\") " pod="openshift-multus/multus-admission-controller-7769569c45-dq2gs" Mar 12 18:17:59.373033 master-0 kubenswrapper[7337]: I0312 18:17:59.372397 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5fb0152-3efd-4000-bce3-fa90b75316ae-config\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:17:59.375420 master-0 kubenswrapper[7337]: I0312 18:17:59.375319 7337 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e5fb0152-3efd-4000-bce3-fa90b75316ae-images\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:17:59.376626 master-0 kubenswrapper[7337]: I0312 18:17:59.376585 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aee40f88-83e4-45c8-8331-969943f9f9aa-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-hkfnq\" (UID: \"aee40f88-83e4-45c8-8331-969943f9f9aa\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq" Mar 12 18:17:59.376957 master-0 kubenswrapper[7337]: I0312 18:17:59.376918 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5fb0152-3efd-4000-bce3-fa90b75316ae-config\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:17:59.377146 master-0 kubenswrapper[7337]: I0312 18:17:59.377075 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4687cf53-55d7-42b7-b24d-e57da3989fd6-images\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" Mar 12 18:17:59.379530 master-0 kubenswrapper[7337]: I0312 18:17:59.379415 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr" Mar 12 18:17:59.379530 master-0 kubenswrapper[7337]: I0312 18:17:59.379434 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/604044f4-9b0b-4747-827d-843f3cfa7077-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: \"604044f4-9b0b-4747-827d-843f3cfa7077\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" Mar 12 18:17:59.385603 master-0 kubenswrapper[7337]: I0312 18:17:59.380813 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5fb0152-3efd-4000-bce3-fa90b75316ae-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:17:59.385603 master-0 kubenswrapper[7337]: I0312 18:17:59.382006 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e5fb0152-3efd-4000-bce3-fa90b75316ae-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:17:59.385603 master-0 kubenswrapper[7337]: I0312 18:17:59.382959 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/604044f4-9b0b-4747-827d-843f3cfa7077-images\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: \"604044f4-9b0b-4747-827d-843f3cfa7077\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" Mar 12 18:17:59.385603 master-0 kubenswrapper[7337]: I0312 18:17:59.383489 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/604044f4-9b0b-4747-827d-843f3cfa7077-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: \"604044f4-9b0b-4747-827d-843f3cfa7077\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" Mar 12 18:17:59.391876 master-0 kubenswrapper[7337]: I0312 18:17:59.391832 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aee40f88-83e4-45c8-8331-969943f9f9aa-cert\") pod \"cluster-autoscaler-operator-69576476f7-hkfnq\" (UID: \"aee40f88-83e4-45c8-8331-969943f9f9aa\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq" Mar 12 18:17:59.394071 master-0 kubenswrapper[7337]: I0312 18:17:59.392835 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4687cf53-55d7-42b7-b24d-e57da3989fd6-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" Mar 12 18:17:59.394071 master-0 kubenswrapper[7337]: I0312 18:17:59.392865 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327-webhook-certs\") pod \"multus-admission-controller-7769569c45-dq2gs\" (UID: \"25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327\") " pod="openshift-multus/multus-admission-controller-7769569c45-dq2gs" Mar 12 18:17:59.394852 master-0 kubenswrapper[7337]: I0312 18:17:59.394506 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4687cf53-55d7-42b7-b24d-e57da3989fd6-config\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" Mar 12 18:17:59.399563 
master-0 kubenswrapper[7337]: I0312 18:17:59.399433 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68xhl\" (UniqueName: \"kubernetes.io/projected/4687cf53-55d7-42b7-b24d-e57da3989fd6-kube-api-access-68xhl\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" Mar 12 18:17:59.400081 master-0 kubenswrapper[7337]: I0312 18:17:59.400032 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6595\" (UniqueName: \"kubernetes.io/projected/25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327-kube-api-access-x6595\") pod \"multus-admission-controller-7769569c45-dq2gs\" (UID: \"25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327\") " pod="openshift-multus/multus-admission-controller-7769569c45-dq2gs" Mar 12 18:17:59.403115 master-0 kubenswrapper[7337]: I0312 18:17:59.401271 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkftr\" (UniqueName: \"kubernetes.io/projected/e5fb0152-3efd-4000-bce3-fa90b75316ae-kube-api-access-pkftr\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:17:59.405075 master-0 kubenswrapper[7337]: I0312 18:17:59.405001 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqzmm\" (UniqueName: \"kubernetes.io/projected/604044f4-9b0b-4747-827d-843f3cfa7077-kube-api-access-fqzmm\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: \"604044f4-9b0b-4747-827d-843f3cfa7077\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" Mar 12 18:17:59.410255 master-0 kubenswrapper[7337]: I0312 18:17:59.410216 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:17:59.415211 master-0 kubenswrapper[7337]: I0312 18:17:59.415133 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th72r\" (UniqueName: \"kubernetes.io/projected/aee40f88-83e4-45c8-8331-969943f9f9aa-kube-api-access-th72r\") pod \"cluster-autoscaler-operator-69576476f7-hkfnq\" (UID: \"aee40f88-83e4-45c8-8331-969943f9f9aa\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq" Mar 12 18:17:59.420374 master-0 kubenswrapper[7337]: I0312 18:17:59.420339 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" Mar 12 18:17:59.473605 master-0 kubenswrapper[7337]: I0312 18:17:59.473487 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b38e7fcd-8f7a-4d4f-8702-7ef205261054-webhook-cert\") pod \"packageserver-694648486f-f89lc\" (UID: \"b38e7fcd-8f7a-4d4f-8702-7ef205261054\") " pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:17:59.473605 master-0 kubenswrapper[7337]: I0312 18:17:59.473559 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b38e7fcd-8f7a-4d4f-8702-7ef205261054-tmpfs\") pod \"packageserver-694648486f-f89lc\" (UID: \"b38e7fcd-8f7a-4d4f-8702-7ef205261054\") " pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:17:59.473605 master-0 kubenswrapper[7337]: I0312 18:17:59.473580 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b38e7fcd-8f7a-4d4f-8702-7ef205261054-apiservice-cert\") pod \"packageserver-694648486f-f89lc\" (UID: \"b38e7fcd-8f7a-4d4f-8702-7ef205261054\") " 
pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:17:59.474207 master-0 kubenswrapper[7337]: I0312 18:17:59.474159 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp5gk\" (UniqueName: \"kubernetes.io/projected/b38e7fcd-8f7a-4d4f-8702-7ef205261054-kube-api-access-zp5gk\") pod \"packageserver-694648486f-f89lc\" (UID: \"b38e7fcd-8f7a-4d4f-8702-7ef205261054\") " pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:17:59.477649 master-0 kubenswrapper[7337]: I0312 18:17:59.477387 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b38e7fcd-8f7a-4d4f-8702-7ef205261054-apiservice-cert\") pod \"packageserver-694648486f-f89lc\" (UID: \"b38e7fcd-8f7a-4d4f-8702-7ef205261054\") " pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:17:59.479893 master-0 kubenswrapper[7337]: I0312 18:17:59.478707 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b38e7fcd-8f7a-4d4f-8702-7ef205261054-tmpfs\") pod \"packageserver-694648486f-f89lc\" (UID: \"b38e7fcd-8f7a-4d4f-8702-7ef205261054\") " pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:17:59.479893 master-0 kubenswrapper[7337]: W0312 18:17:59.479181 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37392bec_4d79_4a65_bc41_6708d9edab46.slice/crio-a1a913402509fa3c7cf82cb73e4bb34f7c829dc85a5a92ae7f093684fc4e4156 WatchSource:0}: Error finding container a1a913402509fa3c7cf82cb73e4bb34f7c829dc85a5a92ae7f093684fc4e4156: Status 404 returned error can't find the container with id a1a913402509fa3c7cf82cb73e4bb34f7c829dc85a5a92ae7f093684fc4e4156 Mar 12 18:17:59.484761 master-0 kubenswrapper[7337]: I0312 18:17:59.484143 7337 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b38e7fcd-8f7a-4d4f-8702-7ef205261054-webhook-cert\") pod \"packageserver-694648486f-f89lc\" (UID: \"b38e7fcd-8f7a-4d4f-8702-7ef205261054\") " pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:17:59.494443 master-0 kubenswrapper[7337]: I0312 18:17:59.492910 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp5gk\" (UniqueName: \"kubernetes.io/projected/b38e7fcd-8f7a-4d4f-8702-7ef205261054-kube-api-access-zp5gk\") pod \"packageserver-694648486f-f89lc\" (UID: \"b38e7fcd-8f7a-4d4f-8702-7ef205261054\") " pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:17:59.508207 master-0 kubenswrapper[7337]: I0312 18:17:59.507207 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-88gzm" Mar 12 18:17:59.566095 master-0 kubenswrapper[7337]: I0312 18:17:59.566037 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7769569c45-dq2gs" Mar 12 18:17:59.646537 master-0 kubenswrapper[7337]: I0312 18:17:59.642607 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" Mar 12 18:17:59.655878 master-0 kubenswrapper[7337]: I0312 18:17:59.655787 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq" Mar 12 18:17:59.665323 master-0 kubenswrapper[7337]: I0312 18:17:59.664940 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" Mar 12 18:17:59.693618 master-0 kubenswrapper[7337]: I0312 18:17:59.692935 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:17:59.712060 master-0 kubenswrapper[7337]: I0312 18:17:59.711058 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:17:59.834424 master-0 kubenswrapper[7337]: I0312 18:17:59.834378 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-m6z6d"] Mar 12 18:17:59.853792 master-0 kubenswrapper[7337]: W0312 18:17:59.853690 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5e09875_4445_4584_94f0_243148307bb0.slice/crio-bf988f8c0a2c5b4133ef3fafc379f42b9d2b5f0585dc6f41596f02be776951fc WatchSource:0}: Error finding container bf988f8c0a2c5b4133ef3fafc379f42b9d2b5f0585dc6f41596f02be776951fc: Status 404 returned error can't find the container with id bf988f8c0a2c5b4133ef3fafc379f42b9d2b5f0585dc6f41596f02be776951fc Mar 12 18:17:59.957563 master-0 kubenswrapper[7337]: I0312 18:17:59.949141 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-8762n"] Mar 12 18:18:00.022253 master-0 kubenswrapper[7337]: I0312 18:18:00.022223 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr"] Mar 12 18:18:00.066468 master-0 kubenswrapper[7337]: I0312 18:18:00.066426 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-88gzm"] Mar 12 18:18:00.078245 master-0 kubenswrapper[7337]: I0312 
18:18:00.076025 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-dq2gs"] Mar 12 18:18:00.203875 master-0 kubenswrapper[7337]: I0312 18:18:00.201598 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd"] Mar 12 18:18:00.206117 master-0 kubenswrapper[7337]: W0312 18:18:00.206074 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4687cf53_55d7_42b7_b24d_e57da3989fd6.slice/crio-a8072978949c36bf7009bf60cefe0dca093e821c0beb2fafb1092bfaa0b6ca78 WatchSource:0}: Error finding container a8072978949c36bf7009bf60cefe0dca093e821c0beb2fafb1092bfaa0b6ca78: Status 404 returned error can't find the container with id a8072978949c36bf7009bf60cefe0dca093e821c0beb2fafb1092bfaa0b6ca78 Mar 12 18:18:00.261272 master-0 kubenswrapper[7337]: I0312 18:18:00.261223 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc"] Mar 12 18:18:00.264292 master-0 kubenswrapper[7337]: I0312 18:18:00.263736 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg"] Mar 12 18:18:00.271216 master-0 kubenswrapper[7337]: I0312 18:18:00.271172 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq"] Mar 12 18:18:00.273606 master-0 kubenswrapper[7337]: W0312 18:18:00.273580 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod604044f4_9b0b_4747_827d_843f3cfa7077.slice/crio-17d1a088b8419eadaf38001b6fa832ae43cb7cf4605e77942c3aeacc31e4a82a WatchSource:0}: Error finding container 17d1a088b8419eadaf38001b6fa832ae43cb7cf4605e77942c3aeacc31e4a82a: Status 404 returned error can't find the container with id 
17d1a088b8419eadaf38001b6fa832ae43cb7cf4605e77942c3aeacc31e4a82a Mar 12 18:18:00.278629 master-0 kubenswrapper[7337]: W0312 18:18:00.278595 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb38e7fcd_8f7a_4d4f_8702_7ef205261054.slice/crio-bc7646036582f47fdf8d0b7175478e014e467156ca033a5485ffd89d7588c9e5 WatchSource:0}: Error finding container bc7646036582f47fdf8d0b7175478e014e467156ca033a5485ffd89d7588c9e5: Status 404 returned error can't find the container with id bc7646036582f47fdf8d0b7175478e014e467156ca033a5485ffd89d7588c9e5 Mar 12 18:18:00.284165 master-0 kubenswrapper[7337]: W0312 18:18:00.283345 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaee40f88_83e4_45c8_8331_969943f9f9aa.slice/crio-af342b8bd10a5707fb2e78e4192b40fedda1b3166a7a1ba47d9935fc638c9b76 WatchSource:0}: Error finding container af342b8bd10a5707fb2e78e4192b40fedda1b3166a7a1ba47d9935fc638c9b76: Status 404 returned error can't find the container with id af342b8bd10a5707fb2e78e4192b40fedda1b3166a7a1ba47d9935fc638c9b76 Mar 12 18:18:00.364573 master-0 kubenswrapper[7337]: I0312 18:18:00.364493 7337 generic.go:334] "Generic (PLEG): container finished" podID="d3e5b8c8-a100-4880-a0b9-9c3989d4e739" containerID="a27087137da15f60f89c47c0f62e286ef1e4ec7252189f88f560af42271ffe59" exitCode=0 Mar 12 18:18:00.365080 master-0 kubenswrapper[7337]: I0312 18:18:00.364629 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5tcw" event={"ID":"d3e5b8c8-a100-4880-a0b9-9c3989d4e739","Type":"ContainerDied","Data":"a27087137da15f60f89c47c0f62e286ef1e4ec7252189f88f560af42271ffe59"} Mar 12 18:18:00.366390 master-0 kubenswrapper[7337]: I0312 18:18:00.366334 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq" 
event={"ID":"aee40f88-83e4-45c8-8331-969943f9f9aa","Type":"ContainerStarted","Data":"af342b8bd10a5707fb2e78e4192b40fedda1b3166a7a1ba47d9935fc638c9b76"} Mar 12 18:18:00.369464 master-0 kubenswrapper[7337]: I0312 18:18:00.369419 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" event={"ID":"faf32e9b-b44a-45dc-97b3-ec3e753e1345","Type":"ContainerStarted","Data":"bb2208056930bc65c37d6e81a53b20d981cec3c9043db55312e983db54ff9a15"} Mar 12 18:18:00.369569 master-0 kubenswrapper[7337]: I0312 18:18:00.369469 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" event={"ID":"faf32e9b-b44a-45dc-97b3-ec3e753e1345","Type":"ContainerStarted","Data":"49eef942f52a4b29048be6e581a9902f3d6414019ac59aefe3746d436787a19d"} Mar 12 18:18:00.373429 master-0 kubenswrapper[7337]: I0312 18:18:00.373197 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-8762n" event={"ID":"0fb78c61-2051-42e2-8668-fa7404ccac43","Type":"ContainerStarted","Data":"70abcecf11f5f6f42b55c74bce2244e7addd9a94c042b787c6169811b3dbde3f"} Mar 12 18:18:00.378856 master-0 kubenswrapper[7337]: I0312 18:18:00.378814 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" event={"ID":"37392bec-4d79-4a65-bc41-6708d9edab46","Type":"ContainerStarted","Data":"a1a913402509fa3c7cf82cb73e4bb34f7c829dc85a5a92ae7f093684fc4e4156"} Mar 12 18:18:00.379994 master-0 kubenswrapper[7337]: I0312 18:18:00.379952 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" event={"ID":"b38e7fcd-8f7a-4d4f-8702-7ef205261054","Type":"ContainerStarted","Data":"bc7646036582f47fdf8d0b7175478e014e467156ca033a5485ffd89d7588c9e5"} Mar 12 18:18:00.381846 
master-0 kubenswrapper[7337]: I0312 18:18:00.381809 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" event={"ID":"f5e09875-4445-4584-94f0-243148307bb0","Type":"ContainerStarted","Data":"bf988f8c0a2c5b4133ef3fafc379f42b9d2b5f0585dc6f41596f02be776951fc"} Mar 12 18:18:00.382738 master-0 kubenswrapper[7337]: I0312 18:18:00.382705 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" event={"ID":"604044f4-9b0b-4747-827d-843f3cfa7077","Type":"ContainerStarted","Data":"17d1a088b8419eadaf38001b6fa832ae43cb7cf4605e77942c3aeacc31e4a82a"} Mar 12 18:18:00.384001 master-0 kubenswrapper[7337]: I0312 18:18:00.383917 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" event={"ID":"4687cf53-55d7-42b7-b24d-e57da3989fd6","Type":"ContainerStarted","Data":"a8072978949c36bf7009bf60cefe0dca093e821c0beb2fafb1092bfaa0b6ca78"} Mar 12 18:18:00.388665 master-0 kubenswrapper[7337]: I0312 18:18:00.388601 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-88gzm" event={"ID":"1287cbb9-c9f6-48d2-9fda-f4464074e41b","Type":"ContainerStarted","Data":"a3668de3fedf57192290be88e895d89eca099cb587eeab867bde241aeee908bc"} Mar 12 18:18:00.389149 master-0 kubenswrapper[7337]: I0312 18:18:00.389129 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb"] Mar 12 18:18:00.393315 master-0 kubenswrapper[7337]: I0312 18:18:00.393126 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr" event={"ID":"b2c6cd11-b1ed-4fed-a4ce-4eee0af20868","Type":"ContainerStarted","Data":"c71ce2f409636ed7b475c201a4a43d47159f4b12127fcaca7b727dfe4389cfa8"} Mar 12 18:18:00.393315 master-0 
kubenswrapper[7337]: I0312 18:18:00.393174 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr" event={"ID":"b2c6cd11-b1ed-4fed-a4ce-4eee0af20868","Type":"ContainerStarted","Data":"bd825685a2a078da6de9c77b8a86a4456fa5c958068f18e079159355b91a76d4"} Mar 12 18:18:00.397044 master-0 kubenswrapper[7337]: I0312 18:18:00.397011 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-dq2gs" event={"ID":"25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327","Type":"ContainerStarted","Data":"d8368573a80125faaafb84704a42eab10d08f21db89fb0224a3e775974fbecf4"} Mar 12 18:18:00.417740 master-0 kubenswrapper[7337]: W0312 18:18:00.417418 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5fb0152_3efd_4000_bce3_fa90b75316ae.slice/crio-7bb177ed28141c5fad6532f2da685328c613d7795aff40f2cb06337556b42750 WatchSource:0}: Error finding container 7bb177ed28141c5fad6532f2da685328c613d7795aff40f2cb06337556b42750: Status 404 returned error can't find the container with id 7bb177ed28141c5fad6532f2da685328c613d7795aff40f2cb06337556b42750 Mar 12 18:18:01.424275 master-0 kubenswrapper[7337]: I0312 18:18:01.424148 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" event={"ID":"e5fb0152-3efd-4000-bce3-fa90b75316ae","Type":"ContainerStarted","Data":"7bb177ed28141c5fad6532f2da685328c613d7795aff40f2cb06337556b42750"} Mar 12 18:18:01.436303 master-0 kubenswrapper[7337]: I0312 18:18:01.436259 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" event={"ID":"604044f4-9b0b-4747-827d-843f3cfa7077","Type":"ContainerStarted","Data":"09a075c845add07fc1d0becc35f364343e0d0047824c31a13552b06bca94a657"} Mar 12 18:18:01.436303 master-0 
kubenswrapper[7337]: I0312 18:18:01.436310 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" event={"ID":"604044f4-9b0b-4747-827d-843f3cfa7077","Type":"ContainerStarted","Data":"6b224901428e2ddbe12d7888c29aa663990f99e54eaab842f708f9d3489fa570"} Mar 12 18:18:01.442777 master-0 kubenswrapper[7337]: I0312 18:18:01.442550 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" event={"ID":"4687cf53-55d7-42b7-b24d-e57da3989fd6","Type":"ContainerStarted","Data":"e41e15ebbc1bcab66a62089e44027df6e4e47a0bd6e4742d05a03a1faaacd2a2"} Mar 12 18:18:01.446212 master-0 kubenswrapper[7337]: I0312 18:18:01.446136 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-dq2gs" event={"ID":"25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327","Type":"ContainerStarted","Data":"5216b1a6a990a208fe9fb1ca6c8bf8e88cc83065b779ec2880fa47411cc56376"} Mar 12 18:18:01.447323 master-0 kubenswrapper[7337]: I0312 18:18:01.446214 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-dq2gs" event={"ID":"25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327","Type":"ContainerStarted","Data":"b8da232987168e2b3325dcd24daa40dffeb8aa66d3b13536014080fe37043adf"} Mar 12 18:18:01.455873 master-0 kubenswrapper[7337]: I0312 18:18:01.455760 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" event={"ID":"b38e7fcd-8f7a-4d4f-8702-7ef205261054","Type":"ContainerStarted","Data":"e22f477535f36786fb2e5d8575401247b265e9b6ac21468804b054bf7277c833"} Mar 12 18:18:01.458487 master-0 kubenswrapper[7337]: I0312 18:18:01.455886 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5tcw" 
event={"ID":"d3e5b8c8-a100-4880-a0b9-9c3989d4e739","Type":"ContainerStarted","Data":"af71526d21d4790a2a1168149d5dc33e350f81424e6a709729c12efc8ebe00ad"} Mar 12 18:18:01.458487 master-0 kubenswrapper[7337]: I0312 18:18:01.455924 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:18:01.458487 master-0 kubenswrapper[7337]: I0312 18:18:01.458200 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" podStartSLOduration=2.458187831 podStartE2EDuration="2.458187831s" podCreationTimestamp="2026-03-12 18:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:18:01.457321909 +0000 UTC m=+281.925922886" watchObservedRunningTime="2026-03-12 18:18:01.458187831 +0000 UTC m=+281.926788778" Mar 12 18:18:01.460838 master-0 kubenswrapper[7337]: I0312 18:18:01.460811 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:18:01.462493 master-0 kubenswrapper[7337]: I0312 18:18:01.462453 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq" event={"ID":"aee40f88-83e4-45c8-8331-969943f9f9aa","Type":"ContainerStarted","Data":"77ce07a0233947dc7fc831b61d4cdf63c7c7e1c67f78643299356650f5a850da"} Mar 12 18:18:01.480158 master-0 kubenswrapper[7337]: I0312 18:18:01.480082 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-7769569c45-dq2gs" podStartSLOduration=2.480059122 podStartE2EDuration="2.480059122s" podCreationTimestamp="2026-03-12 18:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-03-12 18:18:01.479139879 +0000 UTC m=+281.947740856" watchObservedRunningTime="2026-03-12 18:18:01.480059122 +0000 UTC m=+281.948660079" Mar 12 18:18:01.517686 master-0 kubenswrapper[7337]: I0312 18:18:01.512771 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" podStartSLOduration=2.512756326 podStartE2EDuration="2.512756326s" podCreationTimestamp="2026-03-12 18:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:18:01.510447508 +0000 UTC m=+281.979048455" watchObservedRunningTime="2026-03-12 18:18:01.512756326 +0000 UTC m=+281.981357283" Mar 12 18:18:01.520791 master-0 kubenswrapper[7337]: I0312 18:18:01.520752 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-kcpg5"] Mar 12 18:18:01.521024 master-0 kubenswrapper[7337]: I0312 18:18:01.520962 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" podUID="875bdfaa-b0a4-4412-a477-c962844e7057" containerName="multus-admission-controller" containerID="cri-o://017d55d2922541b7f8cf5506cf963835c47653756128a5be4b04a566d5989973" gracePeriod=30 Mar 12 18:18:01.521179 master-0 kubenswrapper[7337]: I0312 18:18:01.521145 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" podUID="875bdfaa-b0a4-4412-a477-c962844e7057" containerName="kube-rbac-proxy" containerID="cri-o://4e982f58af65b335ca3753572817b82f818c04cb442b9fd1ca79a8530ee48175" gracePeriod=30 Mar 12 18:18:01.548926 master-0 kubenswrapper[7337]: I0312 18:18:01.546408 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d5tcw" 
podStartSLOduration=10.813007355 podStartE2EDuration="13.546387134s" podCreationTimestamp="2026-03-12 18:17:48 +0000 UTC" firstStartedPulling="2026-03-12 18:17:58.343969172 +0000 UTC m=+278.812570129" lastFinishedPulling="2026-03-12 18:18:01.077348961 +0000 UTC m=+281.545949908" observedRunningTime="2026-03-12 18:18:01.540467205 +0000 UTC m=+282.009068162" watchObservedRunningTime="2026-03-12 18:18:01.546387134 +0000 UTC m=+282.014988111" Mar 12 18:18:02.478242 master-0 kubenswrapper[7337]: I0312 18:18:02.478200 7337 generic.go:334] "Generic (PLEG): container finished" podID="875bdfaa-b0a4-4412-a477-c962844e7057" containerID="4e982f58af65b335ca3753572817b82f818c04cb442b9fd1ca79a8530ee48175" exitCode=0 Mar 12 18:18:02.478737 master-0 kubenswrapper[7337]: I0312 18:18:02.478270 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" event={"ID":"875bdfaa-b0a4-4412-a477-c962844e7057","Type":"ContainerDied","Data":"4e982f58af65b335ca3753572817b82f818c04cb442b9fd1ca79a8530ee48175"} Mar 12 18:18:03.528544 master-0 kubenswrapper[7337]: I0312 18:18:03.525930 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-mfv5x"] Mar 12 18:18:03.528544 master-0 kubenswrapper[7337]: I0312 18:18:03.526781 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" Mar 12 18:18:03.530533 master-0 kubenswrapper[7337]: I0312 18:18:03.530477 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-vtdm7" Mar 12 18:18:03.534988 master-0 kubenswrapper[7337]: I0312 18:18:03.534952 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 12 18:18:03.672654 master-0 kubenswrapper[7337]: I0312 18:18:03.672593 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/492e9833-4513-4f2f-b865-d05a8973fadc-rootfs\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" Mar 12 18:18:03.672654 master-0 kubenswrapper[7337]: I0312 18:18:03.672650 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/492e9833-4513-4f2f-b865-d05a8973fadc-proxy-tls\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" Mar 12 18:18:03.672917 master-0 kubenswrapper[7337]: I0312 18:18:03.672692 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kn2k\" (UniqueName: \"kubernetes.io/projected/492e9833-4513-4f2f-b865-d05a8973fadc-kube-api-access-5kn2k\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" Mar 12 18:18:03.672917 master-0 kubenswrapper[7337]: I0312 18:18:03.672722 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/492e9833-4513-4f2f-b865-d05a8973fadc-mcd-auth-proxy-config\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" Mar 12 18:18:03.774150 master-0 kubenswrapper[7337]: I0312 18:18:03.774079 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kn2k\" (UniqueName: \"kubernetes.io/projected/492e9833-4513-4f2f-b865-d05a8973fadc-kube-api-access-5kn2k\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" Mar 12 18:18:03.774363 master-0 kubenswrapper[7337]: I0312 18:18:03.774268 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/492e9833-4513-4f2f-b865-d05a8973fadc-mcd-auth-proxy-config\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" Mar 12 18:18:03.774363 master-0 kubenswrapper[7337]: I0312 18:18:03.774357 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/492e9833-4513-4f2f-b865-d05a8973fadc-rootfs\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" Mar 12 18:18:03.774444 master-0 kubenswrapper[7337]: I0312 18:18:03.774401 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/492e9833-4513-4f2f-b865-d05a8973fadc-proxy-tls\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" Mar 12 18:18:03.774611 
master-0 kubenswrapper[7337]: I0312 18:18:03.774570 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/492e9833-4513-4f2f-b865-d05a8973fadc-rootfs\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" Mar 12 18:18:03.775503 master-0 kubenswrapper[7337]: I0312 18:18:03.775481 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/492e9833-4513-4f2f-b865-d05a8973fadc-mcd-auth-proxy-config\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" Mar 12 18:18:03.796286 master-0 kubenswrapper[7337]: I0312 18:18:03.796250 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kn2k\" (UniqueName: \"kubernetes.io/projected/492e9833-4513-4f2f-b865-d05a8973fadc-kube-api-access-5kn2k\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" Mar 12 18:18:03.796676 master-0 kubenswrapper[7337]: I0312 18:18:03.796645 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/492e9833-4513-4f2f-b865-d05a8973fadc-proxy-tls\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" Mar 12 18:18:03.858539 master-0 kubenswrapper[7337]: I0312 18:18:03.857811 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" Mar 12 18:18:06.433574 master-0 kubenswrapper[7337]: I0312 18:18:06.431540 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2"] Mar 12 18:18:07.140473 master-0 kubenswrapper[7337]: I0312 18:18:07.140424 7337 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 12 18:18:07.140776 master-0 kubenswrapper[7337]: I0312 18:18:07.140685 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" containerID="cri-o://46264a9af82e695b6d4dff6d1f55cd807b8535bd3965ef4182b223f7b1e2e6f6" gracePeriod=30 Mar 12 18:18:07.140882 master-0 kubenswrapper[7337]: I0312 18:18:07.140763 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" containerID="cri-o://0b463f9b24d0ab49df3e727d4db7dfa04150b13289e95c0e9bcdb7b146ec69e5" gracePeriod=30 Mar 12 18:18:07.141603 master-0 kubenswrapper[7337]: I0312 18:18:07.141555 7337 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 18:18:07.141892 master-0 kubenswrapper[7337]: E0312 18:18:07.141859 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 18:18:07.141892 master-0 kubenswrapper[7337]: I0312 18:18:07.141879 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 18:18:07.141892 master-0 kubenswrapper[7337]: E0312 18:18:07.141889 7337 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 12 18:18:07.141892 master-0 kubenswrapper[7337]: I0312 18:18:07.141896 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 12 18:18:07.142096 master-0 kubenswrapper[7337]: E0312 18:18:07.141920 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 18:18:07.142096 master-0 kubenswrapper[7337]: I0312 18:18:07.141927 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 18:18:07.142096 master-0 kubenswrapper[7337]: E0312 18:18:07.141935 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 18:18:07.142096 master-0 kubenswrapper[7337]: I0312 18:18:07.141941 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 18:18:07.142096 master-0 kubenswrapper[7337]: I0312 18:18:07.142049 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 18:18:07.142096 master-0 kubenswrapper[7337]: I0312 18:18:07.142065 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 12 18:18:07.142096 master-0 kubenswrapper[7337]: I0312 18:18:07.142077 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 18:18:07.142096 master-0 kubenswrapper[7337]: I0312 18:18:07.142089 7337 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 18:18:07.142096 master-0 kubenswrapper[7337]: I0312 18:18:07.142098 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 18:18:07.142446 master-0 kubenswrapper[7337]: E0312 18:18:07.142222 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 18:18:07.142446 master-0 kubenswrapper[7337]: I0312 18:18:07.142230 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 18:18:07.143339 master-0 kubenswrapper[7337]: I0312 18:18:07.143315 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:18:07.189643 master-0 kubenswrapper[7337]: I0312 18:18:07.189493 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 18:18:07.258588 master-0 kubenswrapper[7337]: I0312 18:18:07.258359 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/39c441a05d91070efc538925475b0a44-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"39c441a05d91070efc538925475b0a44\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:18:07.258588 master-0 kubenswrapper[7337]: I0312 18:18:07.258448 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/39c441a05d91070efc538925475b0a44-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"39c441a05d91070efc538925475b0a44\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:18:07.359445 master-0 kubenswrapper[7337]: I0312 18:18:07.359385 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/39c441a05d91070efc538925475b0a44-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"39c441a05d91070efc538925475b0a44\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:18:07.359739 master-0 kubenswrapper[7337]: I0312 18:18:07.359557 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/39c441a05d91070efc538925475b0a44-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"39c441a05d91070efc538925475b0a44\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:18:07.359739 master-0 kubenswrapper[7337]: I0312 18:18:07.359625 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/39c441a05d91070efc538925475b0a44-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"39c441a05d91070efc538925475b0a44\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:18:07.359811 master-0 kubenswrapper[7337]: I0312 18:18:07.359781 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/39c441a05d91070efc538925475b0a44-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"39c441a05d91070efc538925475b0a44\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:18:07.398804 master-0 kubenswrapper[7337]: I0312 18:18:07.393861 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:18:07.398804 master-0 kubenswrapper[7337]: I0312 
18:18:07.393914 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:18:07.488650 master-0 kubenswrapper[7337]: I0312 18:18:07.488567 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:18:08.687841 master-0 kubenswrapper[7337]: I0312 18:18:08.685859 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d5tcw" podUID="d3e5b8c8-a100-4880-a0b9-9c3989d4e739" containerName="registry-server" probeResult="failure" output=< Mar 12 18:18:08.687841 master-0 kubenswrapper[7337]: timeout: failed to connect service ":50051" within 1s Mar 12 18:18:08.687841 master-0 kubenswrapper[7337]: > Mar 12 18:18:10.139557 master-0 kubenswrapper[7337]: W0312 18:18:10.139499 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod492e9833_4513_4f2f_b865_d05a8973fadc.slice/crio-17f1eb5b22dadcc1a27bca5d2e41cabae79a53d549f65fc68a87a8776fc86dbf WatchSource:0}: Error finding container 17f1eb5b22dadcc1a27bca5d2e41cabae79a53d549f65fc68a87a8776fc86dbf: Status 404 returned error can't find the container with id 17f1eb5b22dadcc1a27bca5d2e41cabae79a53d549f65fc68a87a8776fc86dbf Mar 12 18:18:10.140165 master-0 kubenswrapper[7337]: W0312 18:18:10.140129 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39c441a05d91070efc538925475b0a44.slice/crio-dd22c21b01ab8567576e92f9b78bcc2934cfd08f8466cc304cfffee656791ad7 WatchSource:0}: Error finding container dd22c21b01ab8567576e92f9b78bcc2934cfd08f8466cc304cfffee656791ad7: Status 404 returned error can't find the container with id dd22c21b01ab8567576e92f9b78bcc2934cfd08f8466cc304cfffee656791ad7 Mar 12 18:18:10.146731 master-0 kubenswrapper[7337]: I0312 18:18:10.146697 7337 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:18:10.196640 master-0 kubenswrapper[7337]: I0312 18:18:10.196609 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 12 18:18:10.196754 master-0 kubenswrapper[7337]: I0312 18:18:10.196701 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 12 18:18:10.196754 master-0 kubenswrapper[7337]: I0312 18:18:10.196737 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 12 18:18:10.196865 master-0 kubenswrapper[7337]: I0312 18:18:10.196820 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs" (OuterVolumeSpecName: "logs") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:18:10.196919 master-0 kubenswrapper[7337]: I0312 18:18:10.196892 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "ssl-certs-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:18:10.197051 master-0 kubenswrapper[7337]: I0312 18:18:10.197025 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 12 18:18:10.197108 master-0 kubenswrapper[7337]: I0312 18:18:10.197054 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 12 18:18:10.197333 master-0 kubenswrapper[7337]: I0312 18:18:10.197319 7337 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Mar 12 18:18:10.197418 master-0 kubenswrapper[7337]: I0312 18:18:10.197334 7337 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 18:18:10.197418 master-0 kubenswrapper[7337]: I0312 18:18:10.197375 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config" (OuterVolumeSpecName: "config") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:18:10.197418 master-0 kubenswrapper[7337]: I0312 18:18:10.197404 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:18:10.197645 master-0 kubenswrapper[7337]: I0312 18:18:10.197421 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets" (OuterVolumeSpecName: "secrets") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:18:10.298991 master-0 kubenswrapper[7337]: I0312 18:18:10.298836 7337 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") on node \"master-0\" DevicePath \"\"" Mar 12 18:18:10.298991 master-0 kubenswrapper[7337]: I0312 18:18:10.298881 7337 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:18:10.298991 master-0 kubenswrapper[7337]: I0312 18:18:10.298902 7337 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Mar 12 18:18:10.697940 master-0 kubenswrapper[7337]: I0312 18:18:10.697851 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"39c441a05d91070efc538925475b0a44","Type":"ContainerStarted","Data":"dd22c21b01ab8567576e92f9b78bcc2934cfd08f8466cc304cfffee656791ad7"} Mar 12 18:18:10.703204 master-0 kubenswrapper[7337]: I0312 18:18:10.703145 7337 generic.go:334] "Generic (PLEG): container finished" podID="ec8121ea-f6e9-4232-9837-78b278a8cf54" containerID="858ee31a04ea10059b361ef351f3695c906bcd6e4d8c64728b6201ca11a0a592" exitCode=0 Mar 12 18:18:10.703361 master-0 kubenswrapper[7337]: I0312 18:18:10.703269 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"ec8121ea-f6e9-4232-9837-78b278a8cf54","Type":"ContainerDied","Data":"858ee31a04ea10059b361ef351f3695c906bcd6e4d8c64728b6201ca11a0a592"} Mar 12 18:18:10.707320 master-0 kubenswrapper[7337]: I0312 18:18:10.707278 7337 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="0b463f9b24d0ab49df3e727d4db7dfa04150b13289e95c0e9bcdb7b146ec69e5" exitCode=0 Mar 12 18:18:10.707320 master-0 kubenswrapper[7337]: I0312 18:18:10.707312 7337 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="46264a9af82e695b6d4dff6d1f55cd807b8535bd3965ef4182b223f7b1e2e6f6" exitCode=0 Mar 12 18:18:10.707431 master-0 kubenswrapper[7337]: I0312 18:18:10.707330 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 18:18:10.707431 master-0 kubenswrapper[7337]: I0312 18:18:10.707376 7337 scope.go:117] "RemoveContainer" containerID="0b463f9b24d0ab49df3e727d4db7dfa04150b13289e95c0e9bcdb7b146ec69e5" Mar 12 18:18:10.709352 master-0 kubenswrapper[7337]: I0312 18:18:10.709297 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" event={"ID":"e5fb0152-3efd-4000-bce3-fa90b75316ae","Type":"ContainerStarted","Data":"15d14268c6ae0aa2ad20f2093d09d878fa7d62076388ad39f3dddf0c18d45f03"} Mar 12 18:18:10.710727 master-0 kubenswrapper[7337]: I0312 18:18:10.710694 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" event={"ID":"492e9833-4513-4f2f-b865-d05a8973fadc","Type":"ContainerStarted","Data":"bf58dffab57c5379166ac3ac7ab2c3da97c30b2478a4188d006674f2fe91ef86"} Mar 12 18:18:10.710727 master-0 kubenswrapper[7337]: I0312 18:18:10.710720 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" event={"ID":"492e9833-4513-4f2f-b865-d05a8973fadc","Type":"ContainerStarted","Data":"17f1eb5b22dadcc1a27bca5d2e41cabae79a53d549f65fc68a87a8776fc86dbf"} Mar 12 18:18:10.712475 master-0 kubenswrapper[7337]: I0312 18:18:10.712431 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-88gzm" event={"ID":"1287cbb9-c9f6-48d2-9fda-f4464074e41b","Type":"ContainerStarted","Data":"e2e666f0a94adcd54246dc00858c86b3692c6c83511d2dc769470fc492126c0b"} Mar 12 18:18:10.713874 master-0 kubenswrapper[7337]: I0312 18:18:10.713741 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" 
event={"ID":"37392bec-4d79-4a65-bc41-6708d9edab46","Type":"ContainerStarted","Data":"97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3"} Mar 12 18:18:10.753574 master-0 kubenswrapper[7337]: I0312 18:18:10.752790 7337 scope.go:117] "RemoveContainer" containerID="e36200a118a6b9e121bed30c8ddbc2aa00c9c281d2b93d4ca81dd8623b5aea84" Mar 12 18:18:10.785841 master-0 kubenswrapper[7337]: I0312 18:18:10.785432 7337 scope.go:117] "RemoveContainer" containerID="46264a9af82e695b6d4dff6d1f55cd807b8535bd3965ef4182b223f7b1e2e6f6" Mar 12 18:18:10.813854 master-0 kubenswrapper[7337]: I0312 18:18:10.813819 7337 scope.go:117] "RemoveContainer" containerID="0b463f9b24d0ab49df3e727d4db7dfa04150b13289e95c0e9bcdb7b146ec69e5" Mar 12 18:18:10.814605 master-0 kubenswrapper[7337]: E0312 18:18:10.814323 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b463f9b24d0ab49df3e727d4db7dfa04150b13289e95c0e9bcdb7b146ec69e5\": container with ID starting with 0b463f9b24d0ab49df3e727d4db7dfa04150b13289e95c0e9bcdb7b146ec69e5 not found: ID does not exist" containerID="0b463f9b24d0ab49df3e727d4db7dfa04150b13289e95c0e9bcdb7b146ec69e5" Mar 12 18:18:10.814605 master-0 kubenswrapper[7337]: I0312 18:18:10.814370 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b463f9b24d0ab49df3e727d4db7dfa04150b13289e95c0e9bcdb7b146ec69e5"} err="failed to get container status \"0b463f9b24d0ab49df3e727d4db7dfa04150b13289e95c0e9bcdb7b146ec69e5\": rpc error: code = NotFound desc = could not find container \"0b463f9b24d0ab49df3e727d4db7dfa04150b13289e95c0e9bcdb7b146ec69e5\": container with ID starting with 0b463f9b24d0ab49df3e727d4db7dfa04150b13289e95c0e9bcdb7b146ec69e5 not found: ID does not exist" Mar 12 18:18:10.814605 master-0 kubenswrapper[7337]: I0312 18:18:10.814408 7337 scope.go:117] "RemoveContainer" containerID="e36200a118a6b9e121bed30c8ddbc2aa00c9c281d2b93d4ca81dd8623b5aea84" 
Mar 12 18:18:10.814767 master-0 kubenswrapper[7337]: E0312 18:18:10.814739 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e36200a118a6b9e121bed30c8ddbc2aa00c9c281d2b93d4ca81dd8623b5aea84\": container with ID starting with e36200a118a6b9e121bed30c8ddbc2aa00c9c281d2b93d4ca81dd8623b5aea84 not found: ID does not exist" containerID="e36200a118a6b9e121bed30c8ddbc2aa00c9c281d2b93d4ca81dd8623b5aea84" Mar 12 18:18:10.814807 master-0 kubenswrapper[7337]: I0312 18:18:10.814774 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36200a118a6b9e121bed30c8ddbc2aa00c9c281d2b93d4ca81dd8623b5aea84"} err="failed to get container status \"e36200a118a6b9e121bed30c8ddbc2aa00c9c281d2b93d4ca81dd8623b5aea84\": rpc error: code = NotFound desc = could not find container \"e36200a118a6b9e121bed30c8ddbc2aa00c9c281d2b93d4ca81dd8623b5aea84\": container with ID starting with e36200a118a6b9e121bed30c8ddbc2aa00c9c281d2b93d4ca81dd8623b5aea84 not found: ID does not exist" Mar 12 18:18:10.814807 master-0 kubenswrapper[7337]: I0312 18:18:10.814797 7337 scope.go:117] "RemoveContainer" containerID="46264a9af82e695b6d4dff6d1f55cd807b8535bd3965ef4182b223f7b1e2e6f6" Mar 12 18:18:10.815200 master-0 kubenswrapper[7337]: E0312 18:18:10.815178 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46264a9af82e695b6d4dff6d1f55cd807b8535bd3965ef4182b223f7b1e2e6f6\": container with ID starting with 46264a9af82e695b6d4dff6d1f55cd807b8535bd3965ef4182b223f7b1e2e6f6 not found: ID does not exist" containerID="46264a9af82e695b6d4dff6d1f55cd807b8535bd3965ef4182b223f7b1e2e6f6" Mar 12 18:18:10.815258 master-0 kubenswrapper[7337]: I0312 18:18:10.815204 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46264a9af82e695b6d4dff6d1f55cd807b8535bd3965ef4182b223f7b1e2e6f6"} 
err="failed to get container status \"46264a9af82e695b6d4dff6d1f55cd807b8535bd3965ef4182b223f7b1e2e6f6\": rpc error: code = NotFound desc = could not find container \"46264a9af82e695b6d4dff6d1f55cd807b8535bd3965ef4182b223f7b1e2e6f6\": container with ID starting with 46264a9af82e695b6d4dff6d1f55cd807b8535bd3965ef4182b223f7b1e2e6f6 not found: ID does not exist" Mar 12 18:18:10.815258 master-0 kubenswrapper[7337]: I0312 18:18:10.815220 7337 scope.go:117] "RemoveContainer" containerID="0b463f9b24d0ab49df3e727d4db7dfa04150b13289e95c0e9bcdb7b146ec69e5" Mar 12 18:18:10.815662 master-0 kubenswrapper[7337]: I0312 18:18:10.815632 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b463f9b24d0ab49df3e727d4db7dfa04150b13289e95c0e9bcdb7b146ec69e5"} err="failed to get container status \"0b463f9b24d0ab49df3e727d4db7dfa04150b13289e95c0e9bcdb7b146ec69e5\": rpc error: code = NotFound desc = could not find container \"0b463f9b24d0ab49df3e727d4db7dfa04150b13289e95c0e9bcdb7b146ec69e5\": container with ID starting with 0b463f9b24d0ab49df3e727d4db7dfa04150b13289e95c0e9bcdb7b146ec69e5 not found: ID does not exist" Mar 12 18:18:10.815727 master-0 kubenswrapper[7337]: I0312 18:18:10.815661 7337 scope.go:117] "RemoveContainer" containerID="e36200a118a6b9e121bed30c8ddbc2aa00c9c281d2b93d4ca81dd8623b5aea84" Mar 12 18:18:10.815928 master-0 kubenswrapper[7337]: I0312 18:18:10.815906 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e36200a118a6b9e121bed30c8ddbc2aa00c9c281d2b93d4ca81dd8623b5aea84"} err="failed to get container status \"e36200a118a6b9e121bed30c8ddbc2aa00c9c281d2b93d4ca81dd8623b5aea84\": rpc error: code = NotFound desc = could not find container \"e36200a118a6b9e121bed30c8ddbc2aa00c9c281d2b93d4ca81dd8623b5aea84\": container with ID starting with e36200a118a6b9e121bed30c8ddbc2aa00c9c281d2b93d4ca81dd8623b5aea84 not found: ID does not exist" Mar 12 18:18:10.815975 master-0 
kubenswrapper[7337]: I0312 18:18:10.815927 7337 scope.go:117] "RemoveContainer" containerID="46264a9af82e695b6d4dff6d1f55cd807b8535bd3965ef4182b223f7b1e2e6f6" Mar 12 18:18:10.816166 master-0 kubenswrapper[7337]: I0312 18:18:10.816127 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46264a9af82e695b6d4dff6d1f55cd807b8535bd3965ef4182b223f7b1e2e6f6"} err="failed to get container status \"46264a9af82e695b6d4dff6d1f55cd807b8535bd3965ef4182b223f7b1e2e6f6\": rpc error: code = NotFound desc = could not find container \"46264a9af82e695b6d4dff6d1f55cd807b8535bd3965ef4182b223f7b1e2e6f6\": container with ID starting with 46264a9af82e695b6d4dff6d1f55cd807b8535bd3965ef4182b223f7b1e2e6f6 not found: ID does not exist" Mar 12 18:18:11.736173 master-0 kubenswrapper[7337]: I0312 18:18:11.735946 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" podUID="faf32e9b-b44a-45dc-97b3-ec3e753e1345" containerName="kube-rbac-proxy" containerID="cri-o://bb2208056930bc65c37d6e81a53b20d981cec3c9043db55312e983db54ff9a15" gracePeriod=30 Mar 12 18:18:11.736173 master-0 kubenswrapper[7337]: I0312 18:18:11.736051 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" podUID="faf32e9b-b44a-45dc-97b3-ec3e753e1345" containerName="machine-approver-controller" containerID="cri-o://ec0d020d1bfb44db224e6aa5a751f545c2147fe4463563cefced5e71080ee2ff" gracePeriod=30 Mar 12 18:18:11.739026 master-0 kubenswrapper[7337]: I0312 18:18:11.738962 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f78c05e1499b533b83f091333d61f045" path="/var/lib/kubelet/pods/f78c05e1499b533b83f091333d61f045/volumes" Mar 12 18:18:11.739870 master-0 kubenswrapper[7337]: I0312 18:18:11.739806 7337 mirror_client.go:130] "Deleting a mirror pod" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 12 18:18:12.654848 master-0 kubenswrapper[7337]: I0312 18:18:12.649487 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 12 18:18:12.654848 master-0 kubenswrapper[7337]: I0312 18:18:12.649537 7337 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="94ae5387-26c6-488f-b324-1cca9e5875ad" Mar 12 18:18:12.654848 master-0 kubenswrapper[7337]: I0312 18:18:12.649553 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq" event={"ID":"aee40f88-83e4-45c8-8331-969943f9f9aa","Type":"ContainerStarted","Data":"d198aa4febf627b07d4a0c44e2dc5af05951362ab02f000d680fdc840303b282"} Mar 12 18:18:12.654848 master-0 kubenswrapper[7337]: I0312 18:18:12.649584 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" event={"ID":"e5fb0152-3efd-4000-bce3-fa90b75316ae","Type":"ContainerStarted","Data":"ad37d00b99c41e0b937f93c7e26c45fd836b497805b3023bc5476a1b344ac9dc"} Mar 12 18:18:12.654848 master-0 kubenswrapper[7337]: I0312 18:18:12.649596 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" event={"ID":"faf32e9b-b44a-45dc-97b3-ec3e753e1345","Type":"ContainerStarted","Data":"ec0d020d1bfb44db224e6aa5a751f545c2147fe4463563cefced5e71080ee2ff"} Mar 12 18:18:12.654848 master-0 kubenswrapper[7337]: I0312 18:18:12.649606 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-8762n" event={"ID":"0fb78c61-2051-42e2-8668-fa7404ccac43","Type":"ContainerStarted","Data":"1c67f5e60a37a984a5a0f63d32e4ce23e28a220e878efca0ccb72c88f3ac3ed1"} Mar 12 18:18:12.654848 master-0 
kubenswrapper[7337]: I0312 18:18:12.649615 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-8762n" event={"ID":"0fb78c61-2051-42e2-8668-fa7404ccac43","Type":"ContainerStarted","Data":"06fea286c9b79bdaeb0284065152030d12d5471617111aa1a5fcb8eef805710a"} Mar 12 18:18:12.654848 master-0 kubenswrapper[7337]: I0312 18:18:12.649624 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" event={"ID":"f5e09875-4445-4584-94f0-243148307bb0","Type":"ContainerStarted","Data":"884649b6ab175d156520b4464987d5091cb2f5b03c616dbf98772936bcd6ecbd"} Mar 12 18:18:12.654848 master-0 kubenswrapper[7337]: I0312 18:18:12.649638 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"39c441a05d91070efc538925475b0a44","Type":"ContainerStarted","Data":"a84352e48f1355ad688a8d43acd0737d8ced53bb92d29ec7f76753f1e69e464d"} Mar 12 18:18:12.654848 master-0 kubenswrapper[7337]: I0312 18:18:12.649649 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" event={"ID":"492e9833-4513-4f2f-b865-d05a8973fadc","Type":"ContainerStarted","Data":"9c066dc1b69f1534d6a77c5e0a9cd49bb8a405ae6f47803424948ac192b9fa2a"} Mar 12 18:18:12.654848 master-0 kubenswrapper[7337]: I0312 18:18:12.649660 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" event={"ID":"37392bec-4d79-4a65-bc41-6708d9edab46","Type":"ContainerStarted","Data":"8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00"} Mar 12 18:18:12.654848 master-0 kubenswrapper[7337]: I0312 18:18:12.651081 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 12 18:18:12.654848 master-0 
kubenswrapper[7337]: I0312 18:18:12.651112 7337 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="94ae5387-26c6-488f-b324-1cca9e5875ad" Mar 12 18:18:12.670152 master-0 kubenswrapper[7337]: I0312 18:18:12.663122 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" podStartSLOduration=4.314011878 podStartE2EDuration="14.663099505s" podCreationTimestamp="2026-03-12 18:17:58 +0000 UTC" firstStartedPulling="2026-03-12 18:17:59.768164066 +0000 UTC m=+280.236765023" lastFinishedPulling="2026-03-12 18:18:10.117251703 +0000 UTC m=+290.585852650" observedRunningTime="2026-03-12 18:18:12.661483503 +0000 UTC m=+293.130084450" watchObservedRunningTime="2026-03-12 18:18:12.663099505 +0000 UTC m=+293.131700462" Mar 12 18:18:12.684207 master-0 kubenswrapper[7337]: I0312 18:18:12.683987 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" podStartSLOduration=3.410823642 podStartE2EDuration="13.683973039s" podCreationTimestamp="2026-03-12 18:17:59 +0000 UTC" firstStartedPulling="2026-03-12 18:17:59.856279977 +0000 UTC m=+280.324880924" lastFinishedPulling="2026-03-12 18:18:10.129429364 +0000 UTC m=+290.598030321" observedRunningTime="2026-03-12 18:18:12.682298106 +0000 UTC m=+293.150899063" watchObservedRunningTime="2026-03-12 18:18:12.683973039 +0000 UTC m=+293.152573986" Mar 12 18:18:12.722098 master-0 kubenswrapper[7337]: I0312 18:18:12.722023 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-88gzm" podStartSLOduration=3.720742757 podStartE2EDuration="13.722002251s" podCreationTimestamp="2026-03-12 18:17:59 +0000 UTC" firstStartedPulling="2026-03-12 18:18:00.131631159 +0000 UTC m=+280.600232106" 
lastFinishedPulling="2026-03-12 18:18:10.132890653 +0000 UTC m=+290.601491600" observedRunningTime="2026-03-12 18:18:12.717419294 +0000 UTC m=+293.186020241" watchObservedRunningTime="2026-03-12 18:18:12.722002251 +0000 UTC m=+293.190603198" Mar 12 18:18:12.743585 master-0 kubenswrapper[7337]: I0312 18:18:12.743466 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq" podStartSLOduration=4.163673438 podStartE2EDuration="13.74344115s" podCreationTimestamp="2026-03-12 18:17:59 +0000 UTC" firstStartedPulling="2026-03-12 18:18:00.537437409 +0000 UTC m=+281.006038356" lastFinishedPulling="2026-03-12 18:18:10.117205121 +0000 UTC m=+290.585806068" observedRunningTime="2026-03-12 18:18:12.740046583 +0000 UTC m=+293.208647540" watchObservedRunningTime="2026-03-12 18:18:12.74344115 +0000 UTC m=+293.212042107" Mar 12 18:18:12.767848 master-0 kubenswrapper[7337]: I0312 18:18:12.766823 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" podStartSLOduration=4.053679687 podStartE2EDuration="13.766806197s" podCreationTimestamp="2026-03-12 18:17:59 +0000 UTC" firstStartedPulling="2026-03-12 18:18:00.421308742 +0000 UTC m=+280.889909689" lastFinishedPulling="2026-03-12 18:18:10.134435242 +0000 UTC m=+290.603036199" observedRunningTime="2026-03-12 18:18:12.766470199 +0000 UTC m=+293.235071156" watchObservedRunningTime="2026-03-12 18:18:12.766806197 +0000 UTC m=+293.235407144" Mar 12 18:18:12.769044 master-0 kubenswrapper[7337]: I0312 18:18:12.768453 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"39c441a05d91070efc538925475b0a44","Type":"ContainerStarted","Data":"7fb23a2c8c1ff62e8501ccd63993df169d80f53ec586abd8df0866b032126fb5"} Mar 12 18:18:12.792171 master-0 kubenswrapper[7337]: I0312 18:18:12.792073 7337 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" podStartSLOduration=9.792052173 podStartE2EDuration="9.792052173s" podCreationTimestamp="2026-03-12 18:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:18:12.787997929 +0000 UTC m=+293.256598896" watchObservedRunningTime="2026-03-12 18:18:12.792052173 +0000 UTC m=+293.260653120" Mar 12 18:18:12.809491 master-0 kubenswrapper[7337]: I0312 18:18:12.808793 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-8762n" podStartSLOduration=4.7009728410000005 podStartE2EDuration="14.808774791s" podCreationTimestamp="2026-03-12 18:17:58 +0000 UTC" firstStartedPulling="2026-03-12 18:18:00.025084903 +0000 UTC m=+280.493685850" lastFinishedPulling="2026-03-12 18:18:10.132886853 +0000 UTC m=+290.601487800" observedRunningTime="2026-03-12 18:18:12.806259776 +0000 UTC m=+293.274860733" watchObservedRunningTime="2026-03-12 18:18:12.808774791 +0000 UTC m=+293.277375738" Mar 12 18:18:16.016303 master-0 kubenswrapper[7337]: I0312 18:18:16.016260 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 18:18:16.074319 master-0 kubenswrapper[7337]: I0312 18:18:16.074239 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec8121ea-f6e9-4232-9837-78b278a8cf54-kube-api-access\") pod \"ec8121ea-f6e9-4232-9837-78b278a8cf54\" (UID: \"ec8121ea-f6e9-4232-9837-78b278a8cf54\") " Mar 12 18:18:16.074501 master-0 kubenswrapper[7337]: I0312 18:18:16.074440 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ec8121ea-f6e9-4232-9837-78b278a8cf54-var-lock\") pod \"ec8121ea-f6e9-4232-9837-78b278a8cf54\" (UID: \"ec8121ea-f6e9-4232-9837-78b278a8cf54\") " Mar 12 18:18:16.074552 master-0 kubenswrapper[7337]: I0312 18:18:16.074519 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec8121ea-f6e9-4232-9837-78b278a8cf54-kubelet-dir\") pod \"ec8121ea-f6e9-4232-9837-78b278a8cf54\" (UID: \"ec8121ea-f6e9-4232-9837-78b278a8cf54\") " Mar 12 18:18:16.074650 master-0 kubenswrapper[7337]: I0312 18:18:16.074600 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec8121ea-f6e9-4232-9837-78b278a8cf54-var-lock" (OuterVolumeSpecName: "var-lock") pod "ec8121ea-f6e9-4232-9837-78b278a8cf54" (UID: "ec8121ea-f6e9-4232-9837-78b278a8cf54"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:18:16.074824 master-0 kubenswrapper[7337]: I0312 18:18:16.074767 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec8121ea-f6e9-4232-9837-78b278a8cf54-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ec8121ea-f6e9-4232-9837-78b278a8cf54" (UID: "ec8121ea-f6e9-4232-9837-78b278a8cf54"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:18:16.074870 master-0 kubenswrapper[7337]: I0312 18:18:16.074799 7337 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ec8121ea-f6e9-4232-9837-78b278a8cf54-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 18:18:16.078276 master-0 kubenswrapper[7337]: I0312 18:18:16.078190 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec8121ea-f6e9-4232-9837-78b278a8cf54-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ec8121ea-f6e9-4232-9837-78b278a8cf54" (UID: "ec8121ea-f6e9-4232-9837-78b278a8cf54"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:18:16.175327 master-0 kubenswrapper[7337]: I0312 18:18:16.175234 7337 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec8121ea-f6e9-4232-9837-78b278a8cf54-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:18:16.175327 master-0 kubenswrapper[7337]: I0312 18:18:16.175292 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec8121ea-f6e9-4232-9837-78b278a8cf54-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 18:18:16.791983 master-0 kubenswrapper[7337]: I0312 18:18:16.791838 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"ec8121ea-f6e9-4232-9837-78b278a8cf54","Type":"ContainerDied","Data":"1fbd8581b67e0a5e29b36d7c0987774ae0aa02a95a0bdf7e572b9e31a319d172"} Mar 12 18:18:16.791983 master-0 kubenswrapper[7337]: I0312 18:18:16.791976 7337 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fbd8581b67e0a5e29b36d7c0987774ae0aa02a95a0bdf7e572b9e31a319d172" Mar 12 18:18:16.791983 master-0 kubenswrapper[7337]: I0312 18:18:16.791942 
7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 18:18:16.793393 master-0 kubenswrapper[7337]: I0312 18:18:16.793348 7337 generic.go:334] "Generic (PLEG): container finished" podID="faf32e9b-b44a-45dc-97b3-ec3e753e1345" containerID="ec0d020d1bfb44db224e6aa5a751f545c2147fe4463563cefced5e71080ee2ff" exitCode=0 Mar 12 18:18:16.793393 master-0 kubenswrapper[7337]: I0312 18:18:16.793378 7337 generic.go:334] "Generic (PLEG): container finished" podID="faf32e9b-b44a-45dc-97b3-ec3e753e1345" containerID="bb2208056930bc65c37d6e81a53b20d981cec3c9043db55312e983db54ff9a15" exitCode=0 Mar 12 18:18:16.793500 master-0 kubenswrapper[7337]: I0312 18:18:16.793397 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" event={"ID":"faf32e9b-b44a-45dc-97b3-ec3e753e1345","Type":"ContainerDied","Data":"ec0d020d1bfb44db224e6aa5a751f545c2147fe4463563cefced5e71080ee2ff"} Mar 12 18:18:16.793500 master-0 kubenswrapper[7337]: I0312 18:18:16.793418 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" event={"ID":"faf32e9b-b44a-45dc-97b3-ec3e753e1345","Type":"ContainerDied","Data":"bb2208056930bc65c37d6e81a53b20d981cec3c9043db55312e983db54ff9a15"} Mar 12 18:18:17.432637 master-0 kubenswrapper[7337]: I0312 18:18:17.432585 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:18:17.469841 master-0 kubenswrapper[7337]: I0312 18:18:17.469792 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:18:18.939333 master-0 kubenswrapper[7337]: I0312 18:18:18.939273 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" Mar 12 18:18:19.113824 master-0 kubenswrapper[7337]: I0312 18:18:19.113779 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/faf32e9b-b44a-45dc-97b3-ec3e753e1345-auth-proxy-config\") pod \"faf32e9b-b44a-45dc-97b3-ec3e753e1345\" (UID: \"faf32e9b-b44a-45dc-97b3-ec3e753e1345\") " Mar 12 18:18:19.113921 master-0 kubenswrapper[7337]: I0312 18:18:19.113838 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/faf32e9b-b44a-45dc-97b3-ec3e753e1345-machine-approver-tls\") pod \"faf32e9b-b44a-45dc-97b3-ec3e753e1345\" (UID: \"faf32e9b-b44a-45dc-97b3-ec3e753e1345\") " Mar 12 18:18:19.113921 master-0 kubenswrapper[7337]: I0312 18:18:19.113884 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6gq4\" (UniqueName: \"kubernetes.io/projected/faf32e9b-b44a-45dc-97b3-ec3e753e1345-kube-api-access-n6gq4\") pod \"faf32e9b-b44a-45dc-97b3-ec3e753e1345\" (UID: \"faf32e9b-b44a-45dc-97b3-ec3e753e1345\") " Mar 12 18:18:19.114019 master-0 kubenswrapper[7337]: I0312 18:18:19.113970 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faf32e9b-b44a-45dc-97b3-ec3e753e1345-config\") pod \"faf32e9b-b44a-45dc-97b3-ec3e753e1345\" (UID: \"faf32e9b-b44a-45dc-97b3-ec3e753e1345\") " Mar 12 18:18:19.114777 master-0 kubenswrapper[7337]: I0312 18:18:19.114739 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faf32e9b-b44a-45dc-97b3-ec3e753e1345-config" (OuterVolumeSpecName: "config") pod "faf32e9b-b44a-45dc-97b3-ec3e753e1345" (UID: "faf32e9b-b44a-45dc-97b3-ec3e753e1345"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:18:19.114860 master-0 kubenswrapper[7337]: I0312 18:18:19.114795 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faf32e9b-b44a-45dc-97b3-ec3e753e1345-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "faf32e9b-b44a-45dc-97b3-ec3e753e1345" (UID: "faf32e9b-b44a-45dc-97b3-ec3e753e1345"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:18:19.117050 master-0 kubenswrapper[7337]: I0312 18:18:19.116993 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf32e9b-b44a-45dc-97b3-ec3e753e1345-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "faf32e9b-b44a-45dc-97b3-ec3e753e1345" (UID: "faf32e9b-b44a-45dc-97b3-ec3e753e1345"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:18:19.117305 master-0 kubenswrapper[7337]: I0312 18:18:19.117268 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf32e9b-b44a-45dc-97b3-ec3e753e1345-kube-api-access-n6gq4" (OuterVolumeSpecName: "kube-api-access-n6gq4") pod "faf32e9b-b44a-45dc-97b3-ec3e753e1345" (UID: "faf32e9b-b44a-45dc-97b3-ec3e753e1345"). InnerVolumeSpecName "kube-api-access-n6gq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:18:19.215288 master-0 kubenswrapper[7337]: I0312 18:18:19.215207 7337 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/faf32e9b-b44a-45dc-97b3-ec3e753e1345-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:18:19.215288 master-0 kubenswrapper[7337]: I0312 18:18:19.215242 7337 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/faf32e9b-b44a-45dc-97b3-ec3e753e1345-machine-approver-tls\") on node \"master-0\" DevicePath \"\"" Mar 12 18:18:19.215288 master-0 kubenswrapper[7337]: I0312 18:18:19.215251 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6gq4\" (UniqueName: \"kubernetes.io/projected/faf32e9b-b44a-45dc-97b3-ec3e753e1345-kube-api-access-n6gq4\") on node \"master-0\" DevicePath \"\"" Mar 12 18:18:19.215288 master-0 kubenswrapper[7337]: I0312 18:18:19.215260 7337 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faf32e9b-b44a-45dc-97b3-ec3e753e1345-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:18:19.814271 master-0 kubenswrapper[7337]: I0312 18:18:19.814216 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" event={"ID":"37392bec-4d79-4a65-bc41-6708d9edab46","Type":"ContainerStarted","Data":"8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad"} Mar 12 18:18:19.817577 master-0 kubenswrapper[7337]: I0312 18:18:19.817534 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"39c441a05d91070efc538925475b0a44","Type":"ContainerStarted","Data":"64bbe8f8e78fcdf7a8f37094d28682b6c744a6d2ce7b94afbf02202b8aaa42c7"} Mar 12 18:18:19.817641 master-0 
kubenswrapper[7337]: I0312 18:18:19.817592 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"39c441a05d91070efc538925475b0a44","Type":"ContainerStarted","Data":"40bb332af0befdec702043170fe44c9cb61f64fd323636de64adc0352f5c7576"} Mar 12 18:18:19.819540 master-0 kubenswrapper[7337]: I0312 18:18:19.819498 7337 scope.go:117] "RemoveContainer" containerID="87d66e1cc29893f39e111a5a2a21953d603c0527dd13bddf2486860762147978" Mar 12 18:18:19.820373 master-0 kubenswrapper[7337]: I0312 18:18:19.820343 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" event={"ID":"4687cf53-55d7-42b7-b24d-e57da3989fd6","Type":"ContainerStarted","Data":"44cc9e8d9a9e1ef528d84dc48de681fe718058fee067d4dfdac76fbd7b702aa8"} Mar 12 18:18:19.822951 master-0 kubenswrapper[7337]: I0312 18:18:19.822922 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" event={"ID":"faf32e9b-b44a-45dc-97b3-ec3e753e1345","Type":"ContainerDied","Data":"49eef942f52a4b29048be6e581a9902f3d6414019ac59aefe3746d436787a19d"} Mar 12 18:18:19.823032 master-0 kubenswrapper[7337]: I0312 18:18:19.822963 7337 scope.go:117] "RemoveContainer" containerID="ec0d020d1bfb44db224e6aa5a751f545c2147fe4463563cefced5e71080ee2ff" Mar 12 18:18:19.823075 master-0 kubenswrapper[7337]: I0312 18:18:19.822961 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2" Mar 12 18:18:19.825496 master-0 kubenswrapper[7337]: I0312 18:18:19.825470 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr" event={"ID":"b2c6cd11-b1ed-4fed-a4ce-4eee0af20868","Type":"ContainerStarted","Data":"a6c9c17cf2e22d3f50535b91813c38501724b5923c2302d1db2fa34a7ffd3ea6"} Mar 12 18:18:19.846655 master-0 kubenswrapper[7337]: I0312 18:18:19.846553 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" podStartSLOduration=10.194050917 podStartE2EDuration="20.846501718s" podCreationTimestamp="2026-03-12 18:17:59 +0000 UTC" firstStartedPulling="2026-03-12 18:17:59.483384807 +0000 UTC m=+279.951985754" lastFinishedPulling="2026-03-12 18:18:10.135835608 +0000 UTC m=+290.604436555" observedRunningTime="2026-03-12 18:18:19.844749064 +0000 UTC m=+300.313350011" watchObservedRunningTime="2026-03-12 18:18:19.846501718 +0000 UTC m=+300.315102705" Mar 12 18:18:19.862611 master-0 kubenswrapper[7337]: I0312 18:18:19.862517 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2"] Mar 12 18:18:19.867855 master-0 kubenswrapper[7337]: I0312 18:18:19.867789 7337 scope.go:117] "RemoveContainer" containerID="bb2208056930bc65c37d6e81a53b20d981cec3c9043db55312e983db54ff9a15" Mar 12 18:18:19.869815 master-0 kubenswrapper[7337]: I0312 18:18:19.869751 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-dt8x2"] Mar 12 18:18:19.878449 master-0 kubenswrapper[7337]: I0312 18:18:19.878365 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
podStartSLOduration=12.878323042 podStartE2EDuration="12.878323042s" podCreationTimestamp="2026-03-12 18:18:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:18:19.877005689 +0000 UTC m=+300.345606676" watchObservedRunningTime="2026-03-12 18:18:19.878323042 +0000 UTC m=+300.346923999" Mar 12 18:18:19.916204 master-0 kubenswrapper[7337]: I0312 18:18:19.915421 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" podStartSLOduration=2.516180146 podStartE2EDuration="20.915397461s" podCreationTimestamp="2026-03-12 18:17:59 +0000 UTC" firstStartedPulling="2026-03-12 18:18:00.580748001 +0000 UTC m=+281.049348948" lastFinishedPulling="2026-03-12 18:18:18.979965316 +0000 UTC m=+299.448566263" observedRunningTime="2026-03-12 18:18:19.914891218 +0000 UTC m=+300.383492175" watchObservedRunningTime="2026-03-12 18:18:19.915397461 +0000 UTC m=+300.383998418" Mar 12 18:18:19.917441 master-0 kubenswrapper[7337]: I0312 18:18:19.917393 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr" podStartSLOduration=3.213093015 podStartE2EDuration="21.917383491s" podCreationTimestamp="2026-03-12 18:17:58 +0000 UTC" firstStartedPulling="2026-03-12 18:18:00.275984118 +0000 UTC m=+280.744585065" lastFinishedPulling="2026-03-12 18:18:18.980274594 +0000 UTC m=+299.448875541" observedRunningTime="2026-03-12 18:18:19.900762146 +0000 UTC m=+300.369363103" watchObservedRunningTime="2026-03-12 18:18:19.917383491 +0000 UTC m=+300.385984448" Mar 12 18:18:21.730860 master-0 kubenswrapper[7337]: I0312 18:18:21.730828 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faf32e9b-b44a-45dc-97b3-ec3e753e1345" path="/var/lib/kubelet/pods/faf32e9b-b44a-45dc-97b3-ec3e753e1345/volumes" Mar 12 
18:18:24.146918 master-0 kubenswrapper[7337]: I0312 18:18:24.146872 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ljw8b_062f1b21-2ffc-47da-8334-427c3b2a1a90/authentication-operator/0.log" Mar 12 18:18:24.540459 master-0 kubenswrapper[7337]: I0312 18:18:24.540328 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6cb976c975-4sxlg_fb529297-b3de-4167-a91e-0a63725b3b0f/fix-audit-permissions/0.log" Mar 12 18:18:24.742984 master-0 kubenswrapper[7337]: I0312 18:18:24.742940 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6cb976c975-4sxlg_fb529297-b3de-4167-a91e-0a63725b3b0f/oauth-apiserver/0.log" Mar 12 18:18:24.943258 master-0 kubenswrapper[7337]: I0312 18:18:24.943221 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-hkfnq_aee40f88-83e4-45c8-8331-969943f9f9aa/kube-rbac-proxy/0.log" Mar 12 18:18:25.142175 master-0 kubenswrapper[7337]: I0312 18:18:25.142089 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-hkfnq_aee40f88-83e4-45c8-8331-969943f9f9aa/cluster-autoscaler-operator/0.log" Mar 12 18:18:25.342556 master-0 kubenswrapper[7337]: I0312 18:18:25.342484 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-2psgb_e5fb0152-3efd-4000-bce3-fa90b75316ae/cluster-baremetal-operator/0.log" Mar 12 18:18:25.543280 master-0 kubenswrapper[7337]: I0312 18:18:25.543197 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-2psgb_e5fb0152-3efd-4000-bce3-fa90b75316ae/baremetal-kube-rbac-proxy/0.log" Mar 12 18:18:25.742034 master-0 kubenswrapper[7337]: I0312 18:18:25.741931 7337 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-zd9gm_34cbf061-4c76-476e-bed9-0a133c744862/control-plane-machine-set-operator/0.log" Mar 12 18:18:25.941197 master-0 kubenswrapper[7337]: I0312 18:18:25.941145 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-gnrzd_4687cf53-55d7-42b7-b24d-e57da3989fd6/kube-rbac-proxy/0.log" Mar 12 18:18:26.142283 master-0 kubenswrapper[7337]: I0312 18:18:26.142226 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-gnrzd_4687cf53-55d7-42b7-b24d-e57da3989fd6/machine-api-operator/0.log" Mar 12 18:18:26.348750 master-0 kubenswrapper[7337]: I0312 18:18:26.348708 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-bfq7b_e697746f-fb9e-4d10-ab61-33c68e62cc0d/etcd-operator/0.log" Mar 12 18:18:26.540756 master-0 kubenswrapper[7337]: I0312 18:18:26.540642 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-bfq7b_e697746f-fb9e-4d10-ab61-33c68e62cc0d/etcd-operator/1.log" Mar 12 18:18:26.743607 master-0 kubenswrapper[7337]: I0312 18:18:26.743565 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_e418d797-2c31-404b-9dc3-251399e42542/installer/0.log" Mar 12 18:18:26.942400 master-0 kubenswrapper[7337]: I0312 18:18:26.942349 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-ddsbn_ab926874-9722-4e65-9084-27b2f9915450/kube-apiserver-operator/0.log" Mar 12 18:18:27.141464 master-0 kubenswrapper[7337]: I0312 18:18:27.141422 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-ddsbn_ab926874-9722-4e65-9084-27b2f9915450/kube-apiserver-operator/1.log" Mar 12 
18:18:27.339733 master-0 kubenswrapper[7337]: I0312 18:18:27.339674 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5f77c8e18b751d90bc0dfe2d4e304050/setup/0.log" Mar 12 18:18:27.489571 master-0 kubenswrapper[7337]: I0312 18:18:27.489487 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:18:27.490339 master-0 kubenswrapper[7337]: I0312 18:18:27.489676 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:18:27.490339 master-0 kubenswrapper[7337]: I0312 18:18:27.489748 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:18:27.490339 master-0 kubenswrapper[7337]: I0312 18:18:27.489780 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:18:27.490339 master-0 kubenswrapper[7337]: I0312 18:18:27.489940 7337 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 12 18:18:27.490339 master-0 kubenswrapper[7337]: I0312 18:18:27.490013 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 12 18:18:27.494995 master-0 kubenswrapper[7337]: I0312 18:18:27.494917 
7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:18:27.544762 master-0 kubenswrapper[7337]: I0312 18:18:27.544702 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5f77c8e18b751d90bc0dfe2d4e304050/kube-apiserver/0.log" Mar 12 18:18:27.739730 master-0 kubenswrapper[7337]: I0312 18:18:27.739591 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5f77c8e18b751d90bc0dfe2d4e304050/kube-apiserver-insecure-readyz/0.log" Mar 12 18:18:27.888977 master-0 kubenswrapper[7337]: I0312 18:18:27.888495 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:18:27.939972 master-0 kubenswrapper[7337]: I0312 18:18:27.939923 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_38785e6e-3052-405c-8874-4f295985def5/installer/0.log" Mar 12 18:18:28.145209 master-0 kubenswrapper[7337]: I0312 18:18:28.145142 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_ec8121ea-f6e9-4232-9837-78b278a8cf54/installer/0.log" Mar 12 18:18:28.343412 master-0 kubenswrapper[7337]: I0312 18:18:28.343329 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager/0.log" Mar 12 18:18:28.545285 master-0 kubenswrapper[7337]: I0312 18:18:28.545232 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/cluster-policy-controller/0.log" Mar 12 18:18:28.741991 master-0 kubenswrapper[7337]: I0312 18:18:28.741918 7337 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager-cert-syncer/0.log" Mar 12 18:18:28.940964 master-0 kubenswrapper[7337]: I0312 18:18:28.940823 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager-recovery-controller/0.log" Mar 12 18:18:29.146558 master-0 kubenswrapper[7337]: I0312 18:18:29.146442 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-w7xvp_e720e1d0-5a6d-4b76-8b25-5963e24950f5/kube-controller-manager-operator/0.log" Mar 12 18:18:29.341571 master-0 kubenswrapper[7337]: I0312 18:18:29.341233 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-w7xvp_e720e1d0-5a6d-4b76-8b25-5963e24950f5/kube-controller-manager-operator/1.log" Mar 12 18:18:29.546865 master-0 kubenswrapper[7337]: I0312 18:18:29.546807 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_a1a56802af72ce1aac6b5077f1695ac0/kube-scheduler/0.log" Mar 12 18:18:29.751480 master-0 kubenswrapper[7337]: I0312 18:18:29.751347 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_a1a56802af72ce1aac6b5077f1695ac0/kube-scheduler/1.log" Mar 12 18:18:29.949027 master-0 kubenswrapper[7337]: I0312 18:18:29.948949 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_7542f3f1-23fe-41df-99b9-4324c75d35b7/installer/0.log" Mar 12 18:18:30.144169 master-0 kubenswrapper[7337]: I0312 18:18:30.144112 7337 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-dpb6k_d4ae1240-e04e-48e9-88df-9f1a53508da7/kube-scheduler-operator-container/0.log" Mar 12 18:18:30.340733 master-0 kubenswrapper[7337]: I0312 18:18:30.340679 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-dpb6k_d4ae1240-e04e-48e9-88df-9f1a53508da7/kube-scheduler-operator-container/1.log" Mar 12 18:18:30.541992 master-0 kubenswrapper[7337]: I0312 18:18:30.541958 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-6qlzz_236f2886-bb69-49a7-9471-36454fd1cbd3/openshift-apiserver-operator/0.log" Mar 12 18:18:30.739592 master-0 kubenswrapper[7337]: I0312 18:18:30.739533 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-6qlzz_236f2886-bb69-49a7-9471-36454fd1cbd3/openshift-apiserver-operator/1.log" Mar 12 18:18:30.939444 master-0 kubenswrapper[7337]: I0312 18:18:30.939333 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-5786c989f8-f6jgb_9b41258c-ac1d-4e00-ac5e-732d85441f12/fix-audit-permissions/0.log" Mar 12 18:18:31.141572 master-0 kubenswrapper[7337]: I0312 18:18:31.141535 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-5786c989f8-f6jgb_9b41258c-ac1d-4e00-ac5e-732d85441f12/openshift-apiserver/0.log" Mar 12 18:18:31.344181 master-0 kubenswrapper[7337]: I0312 18:18:31.344149 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-5786c989f8-f6jgb_9b41258c-ac1d-4e00-ac5e-732d85441f12/openshift-apiserver-check-endpoints/0.log" Mar 12 18:18:31.544415 master-0 kubenswrapper[7337]: I0312 18:18:31.544365 7337 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-bfq7b_e697746f-fb9e-4d10-ab61-33c68e62cc0d/etcd-operator/0.log" Mar 12 18:18:31.740006 master-0 kubenswrapper[7337]: I0312 18:18:31.739963 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-bfq7b_e697746f-fb9e-4d10-ab61-33c68e62cc0d/etcd-operator/1.log" Mar 12 18:18:31.839596 master-0 kubenswrapper[7337]: I0312 18:18:31.834637 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-kcpg5_875bdfaa-b0a4-4412-a477-c962844e7057/multus-admission-controller/0.log" Mar 12 18:18:31.839596 master-0 kubenswrapper[7337]: I0312 18:18:31.834948 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" Mar 12 18:18:31.907232 master-0 kubenswrapper[7337]: I0312 18:18:31.907131 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-kcpg5_875bdfaa-b0a4-4412-a477-c962844e7057/multus-admission-controller/0.log" Mar 12 18:18:31.907232 master-0 kubenswrapper[7337]: I0312 18:18:31.907176 7337 generic.go:334] "Generic (PLEG): container finished" podID="875bdfaa-b0a4-4412-a477-c962844e7057" containerID="017d55d2922541b7f8cf5506cf963835c47653756128a5be4b04a566d5989973" exitCode=137 Mar 12 18:18:31.907232 master-0 kubenswrapper[7337]: I0312 18:18:31.907201 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" event={"ID":"875bdfaa-b0a4-4412-a477-c962844e7057","Type":"ContainerDied","Data":"017d55d2922541b7f8cf5506cf963835c47653756128a5be4b04a566d5989973"} Mar 12 18:18:31.907232 master-0 kubenswrapper[7337]: I0312 18:18:31.907227 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5" 
event={"ID":"875bdfaa-b0a4-4412-a477-c962844e7057","Type":"ContainerDied","Data":"04e6b16b49390ef2fd14eeb3200708298f9f8befad96c527fa22cf0d9077e2eb"}
Mar 12 18:18:31.907232 master-0 kubenswrapper[7337]: I0312 18:18:31.907228 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-kcpg5"
Mar 12 18:18:31.907500 master-0 kubenswrapper[7337]: I0312 18:18:31.907244 7337 scope.go:117] "RemoveContainer" containerID="4e982f58af65b335ca3753572817b82f818c04cb442b9fd1ca79a8530ee48175"
Mar 12 18:18:31.920309 master-0 kubenswrapper[7337]: I0312 18:18:31.920272 7337 scope.go:117] "RemoveContainer" containerID="017d55d2922541b7f8cf5506cf963835c47653756128a5be4b04a566d5989973"
Mar 12 18:18:31.934144 master-0 kubenswrapper[7337]: I0312 18:18:31.934112 7337 scope.go:117] "RemoveContainer" containerID="4e982f58af65b335ca3753572817b82f818c04cb442b9fd1ca79a8530ee48175"
Mar 12 18:18:31.936112 master-0 kubenswrapper[7337]: E0312 18:18:31.936040 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e982f58af65b335ca3753572817b82f818c04cb442b9fd1ca79a8530ee48175\": container with ID starting with 4e982f58af65b335ca3753572817b82f818c04cb442b9fd1ca79a8530ee48175 not found: ID does not exist" containerID="4e982f58af65b335ca3753572817b82f818c04cb442b9fd1ca79a8530ee48175"
Mar 12 18:18:31.936223 master-0 kubenswrapper[7337]: I0312 18:18:31.936109 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e982f58af65b335ca3753572817b82f818c04cb442b9fd1ca79a8530ee48175"} err="failed to get container status \"4e982f58af65b335ca3753572817b82f818c04cb442b9fd1ca79a8530ee48175\": rpc error: code = NotFound desc = could not find container \"4e982f58af65b335ca3753572817b82f818c04cb442b9fd1ca79a8530ee48175\": container with ID starting with 4e982f58af65b335ca3753572817b82f818c04cb442b9fd1ca79a8530ee48175 not found: ID does not exist"
Mar 12 18:18:31.936223 master-0 kubenswrapper[7337]: I0312 18:18:31.936154 7337 scope.go:117] "RemoveContainer" containerID="017d55d2922541b7f8cf5506cf963835c47653756128a5be4b04a566d5989973"
Mar 12 18:18:31.936560 master-0 kubenswrapper[7337]: E0312 18:18:31.936491 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"017d55d2922541b7f8cf5506cf963835c47653756128a5be4b04a566d5989973\": container with ID starting with 017d55d2922541b7f8cf5506cf963835c47653756128a5be4b04a566d5989973 not found: ID does not exist" containerID="017d55d2922541b7f8cf5506cf963835c47653756128a5be4b04a566d5989973"
Mar 12 18:18:31.936618 master-0 kubenswrapper[7337]: I0312 18:18:31.936576 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"017d55d2922541b7f8cf5506cf963835c47653756128a5be4b04a566d5989973"} err="failed to get container status \"017d55d2922541b7f8cf5506cf963835c47653756128a5be4b04a566d5989973\": rpc error: code = NotFound desc = could not find container \"017d55d2922541b7f8cf5506cf963835c47653756128a5be4b04a566d5989973\": container with ID starting with 017d55d2922541b7f8cf5506cf963835c47653756128a5be4b04a566d5989973 not found: ID does not exist"
Mar 12 18:18:31.944723 master-0 kubenswrapper[7337]: I0312 18:18:31.944665 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-7d9c49f57b-pslh7_47850839-bb4b-41e9-ac31-f1cabbb4926d/catalog-operator/0.log"
Mar 12 18:18:31.994122 master-0 kubenswrapper[7337]: I0312 18:18:31.994057 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs\") pod \"875bdfaa-b0a4-4412-a477-c962844e7057\" (UID: \"875bdfaa-b0a4-4412-a477-c962844e7057\") "
Mar 12 18:18:31.994315 master-0 kubenswrapper[7337]: I0312 18:18:31.994155 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2skd\" (UniqueName: \"kubernetes.io/projected/875bdfaa-b0a4-4412-a477-c962844e7057-kube-api-access-l2skd\") pod \"875bdfaa-b0a4-4412-a477-c962844e7057\" (UID: \"875bdfaa-b0a4-4412-a477-c962844e7057\") "
Mar 12 18:18:32.002258 master-0 kubenswrapper[7337]: I0312 18:18:31.997554 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/875bdfaa-b0a4-4412-a477-c962844e7057-kube-api-access-l2skd" (OuterVolumeSpecName: "kube-api-access-l2skd") pod "875bdfaa-b0a4-4412-a477-c962844e7057" (UID: "875bdfaa-b0a4-4412-a477-c962844e7057"). InnerVolumeSpecName "kube-api-access-l2skd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:18:32.002258 master-0 kubenswrapper[7337]: I0312 18:18:31.997943 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "875bdfaa-b0a4-4412-a477-c962844e7057" (UID: "875bdfaa-b0a4-4412-a477-c962844e7057"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:18:32.095272 master-0 kubenswrapper[7337]: I0312 18:18:32.095169 7337 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/875bdfaa-b0a4-4412-a477-c962844e7057-webhook-certs\") on node \"master-0\" DevicePath \"\""
Mar 12 18:18:32.095272 master-0 kubenswrapper[7337]: I0312 18:18:32.095241 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2skd\" (UniqueName: \"kubernetes.io/projected/875bdfaa-b0a4-4412-a477-c962844e7057-kube-api-access-l2skd\") on node \"master-0\" DevicePath \"\""
Mar 12 18:18:32.144714 master-0 kubenswrapper[7337]: I0312 18:18:32.144318 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-d64cfc9db-npt4r_d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27/olm-operator/0.log"
Mar 12 18:18:32.250847 master-0 kubenswrapper[7337]: I0312 18:18:32.250759 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-kcpg5"]
Mar 12 18:18:32.253179 master-0 kubenswrapper[7337]: I0312 18:18:32.253130 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-kcpg5"]
Mar 12 18:18:32.340592 master-0 kubenswrapper[7337]: I0312 18:18:32.340502 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-kwv7s_51eb717b-d11f-4bc3-8df6-deb51d5889f3/kube-rbac-proxy/0.log"
Mar 12 18:18:32.546403 master-0 kubenswrapper[7337]: I0312 18:18:32.546362 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-kwv7s_51eb717b-d11f-4bc3-8df6-deb51d5889f3/package-server-manager/0.log"
Mar 12 18:18:32.744248 master-0 kubenswrapper[7337]: I0312 18:18:32.744194 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-694648486f-f89lc_b38e7fcd-8f7a-4d4f-8702-7ef205261054/packageserver/0.log"
Mar 12 18:18:33.731923 master-0 kubenswrapper[7337]: I0312 18:18:33.730563 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="875bdfaa-b0a4-4412-a477-c962844e7057" path="/var/lib/kubelet/pods/875bdfaa-b0a4-4412-a477-c962844e7057/volumes"
Mar 12 18:18:37.490396 master-0 kubenswrapper[7337]: I0312 18:18:37.490320 7337 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Mar 12 18:18:37.490962 master-0 kubenswrapper[7337]: I0312 18:18:37.490409 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 12 18:18:46.739614 master-0 kubenswrapper[7337]: I0312 18:18:46.739553 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 12 18:18:47.493870 master-0 kubenswrapper[7337]: I0312 18:18:47.493811 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 18:18:47.501620 master-0 kubenswrapper[7337]: I0312 18:18:47.501574 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 18:18:47.553943 master-0 kubenswrapper[7337]: I0312 18:18:47.553844 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=1.553820548 podStartE2EDuration="1.553820548s" podCreationTimestamp="2026-03-12 18:18:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:18:47.552787852 +0000 UTC m=+328.021388889" watchObservedRunningTime="2026-03-12 18:18:47.553820548 +0000 UTC m=+328.022421525"
Mar 12 18:19:03.123916 master-0 kubenswrapper[7337]: I0312 18:19:03.123579 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c"]
Mar 12 18:19:03.123916 master-0 kubenswrapper[7337]: I0312 18:19:03.123849 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" podUID="37392bec-4d79-4a65-bc41-6708d9edab46" containerName="cluster-cloud-controller-manager" containerID="cri-o://97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3" gracePeriod=30
Mar 12 18:19:03.124414 master-0 kubenswrapper[7337]: I0312 18:19:03.124195 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" podUID="37392bec-4d79-4a65-bc41-6708d9edab46" containerName="kube-rbac-proxy" containerID="cri-o://8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad" gracePeriod=30
Mar 12 18:19:03.124414 master-0 kubenswrapper[7337]: I0312 18:19:03.124244 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" podUID="37392bec-4d79-4a65-bc41-6708d9edab46" containerName="config-sync-controllers" containerID="cri-o://8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00" gracePeriod=30
Mar 12 18:19:03.125442 master-0 kubenswrapper[7337]: I0312 18:19:03.125397 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7"]
Mar 12 18:19:03.127266 master-0 kubenswrapper[7337]: E0312 18:19:03.126863 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec8121ea-f6e9-4232-9837-78b278a8cf54" containerName="installer"
Mar 12 18:19:03.127266 master-0 kubenswrapper[7337]: I0312 18:19:03.126883 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec8121ea-f6e9-4232-9837-78b278a8cf54" containerName="installer"
Mar 12 18:19:03.127266 master-0 kubenswrapper[7337]: E0312 18:19:03.126894 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf32e9b-b44a-45dc-97b3-ec3e753e1345" containerName="machine-approver-controller"
Mar 12 18:19:03.127266 master-0 kubenswrapper[7337]: I0312 18:19:03.126900 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf32e9b-b44a-45dc-97b3-ec3e753e1345" containerName="machine-approver-controller"
Mar 12 18:19:03.127266 master-0 kubenswrapper[7337]: E0312 18:19:03.126913 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf32e9b-b44a-45dc-97b3-ec3e753e1345" containerName="kube-rbac-proxy"
Mar 12 18:19:03.127266 master-0 kubenswrapper[7337]: I0312 18:19:03.126919 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf32e9b-b44a-45dc-97b3-ec3e753e1345" containerName="kube-rbac-proxy"
Mar 12 18:19:03.127266 master-0 kubenswrapper[7337]: E0312 18:19:03.126937 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="875bdfaa-b0a4-4412-a477-c962844e7057" containerName="multus-admission-controller"
Mar 12 18:19:03.127266 master-0 kubenswrapper[7337]: I0312 18:19:03.126943 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="875bdfaa-b0a4-4412-a477-c962844e7057" containerName="multus-admission-controller"
Mar 12 18:19:03.127266 master-0 kubenswrapper[7337]: E0312 18:19:03.126951 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="875bdfaa-b0a4-4412-a477-c962844e7057" containerName="kube-rbac-proxy"
Mar 12 18:19:03.127266 master-0 kubenswrapper[7337]: I0312 18:19:03.126956 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="875bdfaa-b0a4-4412-a477-c962844e7057" containerName="kube-rbac-proxy"
Mar 12 18:19:03.127266 master-0 kubenswrapper[7337]: I0312 18:19:03.127046 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="875bdfaa-b0a4-4412-a477-c962844e7057" containerName="multus-admission-controller"
Mar 12 18:19:03.127266 master-0 kubenswrapper[7337]: I0312 18:19:03.127059 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="875bdfaa-b0a4-4412-a477-c962844e7057" containerName="kube-rbac-proxy"
Mar 12 18:19:03.127266 master-0 kubenswrapper[7337]: I0312 18:19:03.127068 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="faf32e9b-b44a-45dc-97b3-ec3e753e1345" containerName="kube-rbac-proxy"
Mar 12 18:19:03.127266 master-0 kubenswrapper[7337]: I0312 18:19:03.127079 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="faf32e9b-b44a-45dc-97b3-ec3e753e1345" containerName="machine-approver-controller"
Mar 12 18:19:03.127266 master-0 kubenswrapper[7337]: I0312 18:19:03.127091 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec8121ea-f6e9-4232-9837-78b278a8cf54" containerName="installer"
Mar 12 18:19:03.130641 master-0 kubenswrapper[7337]: I0312 18:19:03.130481 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7"
Mar 12 18:19:03.133659 master-0 kubenswrapper[7337]: I0312 18:19:03.133627 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 12 18:19:03.133810 master-0 kubenswrapper[7337]: I0312 18:19:03.133790 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-9f7ld"
Mar 12 18:19:03.134260 master-0 kubenswrapper[7337]: I0312 18:19:03.134227 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 12 18:19:03.134333 master-0 kubenswrapper[7337]: I0312 18:19:03.133898 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 12 18:19:03.134410 master-0 kubenswrapper[7337]: I0312 18:19:03.133937 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 12 18:19:03.134486 master-0 kubenswrapper[7337]: I0312 18:19:03.133986 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 12 18:19:03.224169 master-0 kubenswrapper[7337]: I0312 18:19:03.224126 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7"
Mar 12 18:19:03.224169 master-0 kubenswrapper[7337]: I0312 18:19:03.224174 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-config\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7"
Mar 12 18:19:03.224384 master-0 kubenswrapper[7337]: I0312 18:19:03.224215 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p4dz\" (UniqueName: \"kubernetes.io/projected/030160af-c915-4f00-903a-1c4b5c2b719a-kube-api-access-9p4dz\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7"
Mar 12 18:19:03.224384 master-0 kubenswrapper[7337]: I0312 18:19:03.224265 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/030160af-c915-4f00-903a-1c4b5c2b719a-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7"
Mar 12 18:19:03.304741 master-0 kubenswrapper[7337]: I0312 18:19:03.304697 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c"
Mar 12 18:19:03.325438 master-0 kubenswrapper[7337]: I0312 18:19:03.325402 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-config\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7"
Mar 12 18:19:03.325698 master-0 kubenswrapper[7337]: I0312 18:19:03.325684 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p4dz\" (UniqueName: \"kubernetes.io/projected/030160af-c915-4f00-903a-1c4b5c2b719a-kube-api-access-9p4dz\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7"
Mar 12 18:19:03.325839 master-0 kubenswrapper[7337]: I0312 18:19:03.325826 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/030160af-c915-4f00-903a-1c4b5c2b719a-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7"
Mar 12 18:19:03.326407 master-0 kubenswrapper[7337]: I0312 18:19:03.326392 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7"
Mar 12 18:19:03.326802 master-0 kubenswrapper[7337]: I0312 18:19:03.326238 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-config\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7"
Mar 12 18:19:03.327100 master-0 kubenswrapper[7337]: I0312 18:19:03.327085 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7"
Mar 12 18:19:03.328987 master-0 kubenswrapper[7337]: I0312 18:19:03.328955 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/030160af-c915-4f00-903a-1c4b5c2b719a-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7"
Mar 12 18:19:03.347933 master-0 kubenswrapper[7337]: I0312 18:19:03.347893 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p4dz\" (UniqueName: \"kubernetes.io/projected/030160af-c915-4f00-903a-1c4b5c2b719a-kube-api-access-9p4dz\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7"
Mar 12 18:19:03.427297 master-0 kubenswrapper[7337]: I0312 18:19:03.427192 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37392bec-4d79-4a65-bc41-6708d9edab46-auth-proxy-config\") pod \"37392bec-4d79-4a65-bc41-6708d9edab46\" (UID: \"37392bec-4d79-4a65-bc41-6708d9edab46\") "
Mar 12 18:19:03.428327 master-0 kubenswrapper[7337]: I0312 18:19:03.428187 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37392bec-4d79-4a65-bc41-6708d9edab46-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "37392bec-4d79-4a65-bc41-6708d9edab46" (UID: "37392bec-4d79-4a65-bc41-6708d9edab46"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:19:03.428406 master-0 kubenswrapper[7337]: I0312 18:19:03.428387 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcgtr\" (UniqueName: \"kubernetes.io/projected/37392bec-4d79-4a65-bc41-6708d9edab46-kube-api-access-pcgtr\") pod \"37392bec-4d79-4a65-bc41-6708d9edab46\" (UID: \"37392bec-4d79-4a65-bc41-6708d9edab46\") "
Mar 12 18:19:03.428760 master-0 kubenswrapper[7337]: I0312 18:19:03.428739 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/37392bec-4d79-4a65-bc41-6708d9edab46-cloud-controller-manager-operator-tls\") pod \"37392bec-4d79-4a65-bc41-6708d9edab46\" (UID: \"37392bec-4d79-4a65-bc41-6708d9edab46\") "
Mar 12 18:19:03.428817 master-0 kubenswrapper[7337]: I0312 18:19:03.428785 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37392bec-4d79-4a65-bc41-6708d9edab46-host-etc-kube\") pod \"37392bec-4d79-4a65-bc41-6708d9edab46\" (UID: \"37392bec-4d79-4a65-bc41-6708d9edab46\") "
Mar 12 18:19:03.428817 master-0 kubenswrapper[7337]: I0312 18:19:03.428804 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/37392bec-4d79-4a65-bc41-6708d9edab46-images\") pod \"37392bec-4d79-4a65-bc41-6708d9edab46\" (UID: \"37392bec-4d79-4a65-bc41-6708d9edab46\") "
Mar 12 18:19:03.428999 master-0 kubenswrapper[7337]: I0312 18:19:03.428966 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37392bec-4d79-4a65-bc41-6708d9edab46-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "37392bec-4d79-4a65-bc41-6708d9edab46" (UID: "37392bec-4d79-4a65-bc41-6708d9edab46"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:19:03.429273 master-0 kubenswrapper[7337]: I0312 18:19:03.429259 7337 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37392bec-4d79-4a65-bc41-6708d9edab46-host-etc-kube\") on node \"master-0\" DevicePath \"\""
Mar 12 18:19:03.429354 master-0 kubenswrapper[7337]: I0312 18:19:03.429311 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37392bec-4d79-4a65-bc41-6708d9edab46-images" (OuterVolumeSpecName: "images") pod "37392bec-4d79-4a65-bc41-6708d9edab46" (UID: "37392bec-4d79-4a65-bc41-6708d9edab46"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:19:03.429400 master-0 kubenswrapper[7337]: I0312 18:19:03.429326 7337 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37392bec-4d79-4a65-bc41-6708d9edab46-auth-proxy-config\") on node \"master-0\" DevicePath \"\""
Mar 12 18:19:03.431759 master-0 kubenswrapper[7337]: I0312 18:19:03.431733 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37392bec-4d79-4a65-bc41-6708d9edab46-kube-api-access-pcgtr" (OuterVolumeSpecName: "kube-api-access-pcgtr") pod "37392bec-4d79-4a65-bc41-6708d9edab46" (UID: "37392bec-4d79-4a65-bc41-6708d9edab46"). InnerVolumeSpecName "kube-api-access-pcgtr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:19:03.431953 master-0 kubenswrapper[7337]: I0312 18:19:03.431928 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37392bec-4d79-4a65-bc41-6708d9edab46-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "37392bec-4d79-4a65-bc41-6708d9edab46" (UID: "37392bec-4d79-4a65-bc41-6708d9edab46"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:19:03.530124 master-0 kubenswrapper[7337]: I0312 18:19:03.530050 7337 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/37392bec-4d79-4a65-bc41-6708d9edab46-images\") on node \"master-0\" DevicePath \"\""
Mar 12 18:19:03.530124 master-0 kubenswrapper[7337]: I0312 18:19:03.530087 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcgtr\" (UniqueName: \"kubernetes.io/projected/37392bec-4d79-4a65-bc41-6708d9edab46-kube-api-access-pcgtr\") on node \"master-0\" DevicePath \"\""
Mar 12 18:19:03.530124 master-0 kubenswrapper[7337]: I0312 18:19:03.530099 7337 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/37392bec-4d79-4a65-bc41-6708d9edab46-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\""
Mar 12 18:19:03.542433 master-0 kubenswrapper[7337]: I0312 18:19:03.542391 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7"
Mar 12 18:19:03.558785 master-0 kubenswrapper[7337]: W0312 18:19:03.558745 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod030160af_c915_4f00_903a_1c4b5c2b719a.slice/crio-f61feb293ec886a1805e62ec052aa4ca410bad475cca6977bbcf9b16b205a3fd WatchSource:0}: Error finding container f61feb293ec886a1805e62ec052aa4ca410bad475cca6977bbcf9b16b205a3fd: Status 404 returned error can't find the container with id f61feb293ec886a1805e62ec052aa4ca410bad475cca6977bbcf9b16b205a3fd
Mar 12 18:19:04.121689 master-0 kubenswrapper[7337]: I0312 18:19:04.121633 7337 generic.go:334] "Generic (PLEG): container finished" podID="37392bec-4d79-4a65-bc41-6708d9edab46" containerID="8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad" exitCode=0
Mar 12 18:19:04.121689 master-0 kubenswrapper[7337]: I0312 18:19:04.121668 7337 generic.go:334] "Generic (PLEG): container finished" podID="37392bec-4d79-4a65-bc41-6708d9edab46" containerID="8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00" exitCode=0
Mar 12 18:19:04.121689 master-0 kubenswrapper[7337]: I0312 18:19:04.121678 7337 generic.go:334] "Generic (PLEG): container finished" podID="37392bec-4d79-4a65-bc41-6708d9edab46" containerID="97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3" exitCode=0
Mar 12 18:19:04.121982 master-0 kubenswrapper[7337]: I0312 18:19:04.121722 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" event={"ID":"37392bec-4d79-4a65-bc41-6708d9edab46","Type":"ContainerDied","Data":"8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad"}
Mar 12 18:19:04.121982 master-0 kubenswrapper[7337]: I0312 18:19:04.121753 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" event={"ID":"37392bec-4d79-4a65-bc41-6708d9edab46","Type":"ContainerDied","Data":"8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00"}
Mar 12 18:19:04.121982 master-0 kubenswrapper[7337]: I0312 18:19:04.121768 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" event={"ID":"37392bec-4d79-4a65-bc41-6708d9edab46","Type":"ContainerDied","Data":"97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3"}
Mar 12 18:19:04.121982 master-0 kubenswrapper[7337]: I0312 18:19:04.121780 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c" event={"ID":"37392bec-4d79-4a65-bc41-6708d9edab46","Type":"ContainerDied","Data":"a1a913402509fa3c7cf82cb73e4bb34f7c829dc85a5a92ae7f093684fc4e4156"}
Mar 12 18:19:04.121982 master-0 kubenswrapper[7337]: I0312 18:19:04.121800 7337 scope.go:117] "RemoveContainer" containerID="8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad"
Mar 12 18:19:04.121982 master-0 kubenswrapper[7337]: I0312 18:19:04.121907 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c"
Mar 12 18:19:04.126733 master-0 kubenswrapper[7337]: I0312 18:19:04.126677 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7" event={"ID":"030160af-c915-4f00-903a-1c4b5c2b719a","Type":"ContainerStarted","Data":"896aa2273ca1ba7df7cc5c10fd0e284e882d24c2714f3848133288e9eccfa795"}
Mar 12 18:19:04.127120 master-0 kubenswrapper[7337]: I0312 18:19:04.126741 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7" event={"ID":"030160af-c915-4f00-903a-1c4b5c2b719a","Type":"ContainerStarted","Data":"7dbb7087c854062085b34fe56f49d903d74fce4d244c99bcfb8659e8d70f1a26"}
Mar 12 18:19:04.127120 master-0 kubenswrapper[7337]: I0312 18:19:04.126756 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7" event={"ID":"030160af-c915-4f00-903a-1c4b5c2b719a","Type":"ContainerStarted","Data":"f61feb293ec886a1805e62ec052aa4ca410bad475cca6977bbcf9b16b205a3fd"}
Mar 12 18:19:04.139002 master-0 kubenswrapper[7337]: I0312 18:19:04.138858 7337 scope.go:117] "RemoveContainer" containerID="8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00"
Mar 12 18:19:04.163718 master-0 kubenswrapper[7337]: I0312 18:19:04.163633 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7" podStartSLOduration=1.163608731 podStartE2EDuration="1.163608731s" podCreationTimestamp="2026-03-12 18:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:19:04.160416079 +0000 UTC m=+344.629017046" watchObservedRunningTime="2026-03-12 18:19:04.163608731 +0000 UTC m=+344.632209678"
Mar 12 18:19:04.179225 master-0 kubenswrapper[7337]: I0312 18:19:04.178948 7337 scope.go:117] "RemoveContainer" containerID="97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3"
Mar 12 18:19:04.188126 master-0 kubenswrapper[7337]: I0312 18:19:04.188087 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c"]
Mar 12 18:19:04.193827 master-0 kubenswrapper[7337]: I0312 18:19:04.193717 7337 scope.go:117] "RemoveContainer" containerID="8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad"
Mar 12 18:19:04.194205 master-0 kubenswrapper[7337]: E0312 18:19:04.194171 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad\": container with ID starting with 8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad not found: ID does not exist" containerID="8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad"
Mar 12 18:19:04.194299 master-0 kubenswrapper[7337]: I0312 18:19:04.194214 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad"} err="failed to get container status \"8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad\": rpc error: code = NotFound desc = could not find container \"8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad\": container with ID starting with 8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad not found: ID does not exist"
Mar 12 18:19:04.194299 master-0 kubenswrapper[7337]: I0312 18:19:04.194243 7337 scope.go:117] "RemoveContainer" containerID="8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00"
Mar 12 18:19:04.194693 master-0 kubenswrapper[7337]: E0312 18:19:04.194586 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00\": container with ID starting with 8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00 not found: ID does not exist" containerID="8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00"
Mar 12 18:19:04.194693 master-0 kubenswrapper[7337]: I0312 18:19:04.194630 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00"} err="failed to get container status \"8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00\": rpc error: code = NotFound desc = could not find container \"8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00\": container with ID starting with 8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00 not found: ID does not exist"
Mar 12 18:19:04.194693 master-0 kubenswrapper[7337]: I0312 18:19:04.194656 7337 scope.go:117] "RemoveContainer" containerID="97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3"
Mar 12 18:19:04.195318 master-0 kubenswrapper[7337]: E0312 18:19:04.195093 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3\": container with ID starting with 97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3 not found: ID does not exist" containerID="97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3"
Mar 12 18:19:04.195318 master-0 kubenswrapper[7337]: I0312 18:19:04.195122 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3"} err="failed to get container status \"97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3\": rpc error: code = NotFound desc = could not find container \"97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3\": container with ID starting with 97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3 not found: ID does not exist"
Mar 12 18:19:04.195318 master-0 kubenswrapper[7337]: I0312 18:19:04.195177 7337 scope.go:117] "RemoveContainer" containerID="8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad"
Mar 12 18:19:04.195503 master-0 kubenswrapper[7337]: I0312 18:19:04.195429 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad"} err="failed to get container status \"8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad\": rpc error: code = NotFound desc = could not find container \"8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad\": container with ID starting with 8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad not found: ID does not exist"
Mar 12 18:19:04.195503 master-0 kubenswrapper[7337]: I0312 18:19:04.195448 7337 scope.go:117] "RemoveContainer" containerID="8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00"
Mar 12 18:19:04.195790 master-0 kubenswrapper[7337]: I0312 18:19:04.195709 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00"} err="failed to get container status \"8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00\": rpc error: code = NotFound desc = could not find container \"8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00\": container with ID starting with 8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00 not found: ID does not exist"
Mar 12 18:19:04.195790 master-0 kubenswrapper[7337]: I0312 18:19:04.195738 7337 scope.go:117] "RemoveContainer" containerID="97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3"
Mar 12 18:19:04.196040 master-0 kubenswrapper[7337]: I0312 18:19:04.195972 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3"} err="failed to get container status \"97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3\": rpc error: code = NotFound desc = could not find container \"97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3\": container with ID starting with 97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3 not found: ID does not exist"
Mar 12 18:19:04.196040 master-0 kubenswrapper[7337]: I0312 18:19:04.195996 7337 scope.go:117] "RemoveContainer" containerID="8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad"
Mar 12 18:19:04.196278 master-0 kubenswrapper[7337]: I0312 18:19:04.196254 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad"} err="failed to get container status \"8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad\": rpc error: code = NotFound desc = could not find container \"8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad\": container with ID starting with 8a7ba3f4cb5e48b92420b11647000f18ba847413c3251c8b51519d69bce70dad not found: ID does not exist"
Mar 12 18:19:04.196278 master-0 kubenswrapper[7337]: I0312 18:19:04.196276 7337 scope.go:117] "RemoveContainer" containerID="8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00"
Mar 12 18:19:04.196396 master-0 kubenswrapper[7337]: I0312 18:19:04.196296 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-n426c"]
Mar 12 18:19:04.196768 master-0 kubenswrapper[7337]:
I0312 18:19:04.196570 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00"} err="failed to get container status \"8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00\": rpc error: code = NotFound desc = could not find container \"8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00\": container with ID starting with 8e15f6e1cf0afd0d4b2ed487045ff3a4fb7861754a2333e311ac74e028e3bf00 not found: ID does not exist" Mar 12 18:19:04.196768 master-0 kubenswrapper[7337]: I0312 18:19:04.196624 7337 scope.go:117] "RemoveContainer" containerID="97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3" Mar 12 18:19:04.197410 master-0 kubenswrapper[7337]: I0312 18:19:04.197382 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3"} err="failed to get container status \"97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3\": rpc error: code = NotFound desc = could not find container \"97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3\": container with ID starting with 97ba856e9d7794e001e2f28f7bc1d31252ab0e653c5e071322d4171ada719ab3 not found: ID does not exist" Mar 12 18:19:04.239719 master-0 kubenswrapper[7337]: I0312 18:19:04.239577 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl"] Mar 12 18:19:04.239927 master-0 kubenswrapper[7337]: E0312 18:19:04.239788 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37392bec-4d79-4a65-bc41-6708d9edab46" containerName="cluster-cloud-controller-manager" Mar 12 18:19:04.239927 master-0 kubenswrapper[7337]: I0312 18:19:04.239802 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="37392bec-4d79-4a65-bc41-6708d9edab46" 
containerName="cluster-cloud-controller-manager" Mar 12 18:19:04.239927 master-0 kubenswrapper[7337]: E0312 18:19:04.239812 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37392bec-4d79-4a65-bc41-6708d9edab46" containerName="config-sync-controllers" Mar 12 18:19:04.239927 master-0 kubenswrapper[7337]: I0312 18:19:04.239818 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="37392bec-4d79-4a65-bc41-6708d9edab46" containerName="config-sync-controllers" Mar 12 18:19:04.239927 master-0 kubenswrapper[7337]: E0312 18:19:04.239832 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37392bec-4d79-4a65-bc41-6708d9edab46" containerName="kube-rbac-proxy" Mar 12 18:19:04.239927 master-0 kubenswrapper[7337]: I0312 18:19:04.239838 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="37392bec-4d79-4a65-bc41-6708d9edab46" containerName="kube-rbac-proxy" Mar 12 18:19:04.239927 master-0 kubenswrapper[7337]: I0312 18:19:04.239922 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="37392bec-4d79-4a65-bc41-6708d9edab46" containerName="kube-rbac-proxy" Mar 12 18:19:04.240190 master-0 kubenswrapper[7337]: I0312 18:19:04.239937 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="37392bec-4d79-4a65-bc41-6708d9edab46" containerName="cluster-cloud-controller-manager" Mar 12 18:19:04.240190 master-0 kubenswrapper[7337]: I0312 18:19:04.239948 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="37392bec-4d79-4a65-bc41-6708d9edab46" containerName="config-sync-controllers" Mar 12 18:19:04.240711 master-0 kubenswrapper[7337]: I0312 18:19:04.240686 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:19:04.242805 master-0 kubenswrapper[7337]: I0312 18:19:04.242761 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 18:19:04.248439 master-0 kubenswrapper[7337]: I0312 18:19:04.248392 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-72pgx" Mar 12 18:19:04.248439 master-0 kubenswrapper[7337]: I0312 18:19:04.248426 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 12 18:19:04.248606 master-0 kubenswrapper[7337]: I0312 18:19:04.248580 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 12 18:19:04.250586 master-0 kubenswrapper[7337]: I0312 18:19:04.250557 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 12 18:19:04.250702 master-0 kubenswrapper[7337]: I0312 18:19:04.250602 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 12 18:19:04.339882 master-0 kubenswrapper[7337]: I0312 18:19:04.339808 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4wsx\" (UniqueName: \"kubernetes.io/projected/ee4c1949-96b4-4444-9675-9df1d46f681e-kube-api-access-x4wsx\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 
18:19:04.339882 master-0 kubenswrapper[7337]: I0312 18:19:04.339851 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee4c1949-96b4-4444-9675-9df1d46f681e-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:19:04.340273 master-0 kubenswrapper[7337]: I0312 18:19:04.339924 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee4c1949-96b4-4444-9675-9df1d46f681e-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:19:04.340273 master-0 kubenswrapper[7337]: I0312 18:19:04.339953 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ee4c1949-96b4-4444-9675-9df1d46f681e-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:19:04.340273 master-0 kubenswrapper[7337]: I0312 18:19:04.339978 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ee4c1949-96b4-4444-9675-9df1d46f681e-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:19:04.441352 master-0 kubenswrapper[7337]: I0312 18:19:04.441277 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4wsx\" (UniqueName: \"kubernetes.io/projected/ee4c1949-96b4-4444-9675-9df1d46f681e-kube-api-access-x4wsx\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:19:04.442201 master-0 kubenswrapper[7337]: I0312 18:19:04.441432 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee4c1949-96b4-4444-9675-9df1d46f681e-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:19:04.442201 master-0 kubenswrapper[7337]: I0312 18:19:04.441547 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee4c1949-96b4-4444-9675-9df1d46f681e-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:19:04.442201 master-0 kubenswrapper[7337]: I0312 18:19:04.441578 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ee4c1949-96b4-4444-9675-9df1d46f681e-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: 
\"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:19:04.442201 master-0 kubenswrapper[7337]: I0312 18:19:04.441615 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ee4c1949-96b4-4444-9675-9df1d46f681e-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:19:04.442201 master-0 kubenswrapper[7337]: I0312 18:19:04.441837 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ee4c1949-96b4-4444-9675-9df1d46f681e-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:19:04.442944 master-0 kubenswrapper[7337]: I0312 18:19:04.442443 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee4c1949-96b4-4444-9675-9df1d46f681e-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:19:04.442944 master-0 kubenswrapper[7337]: I0312 18:19:04.442895 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ee4c1949-96b4-4444-9675-9df1d46f681e-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:19:04.444212 master-0 kubenswrapper[7337]: I0312 18:19:04.444169 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee4c1949-96b4-4444-9675-9df1d46f681e-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:19:04.463189 master-0 kubenswrapper[7337]: I0312 18:19:04.463132 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4wsx\" (UniqueName: \"kubernetes.io/projected/ee4c1949-96b4-4444-9675-9df1d46f681e-kube-api-access-x4wsx\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:19:04.561127 master-0 kubenswrapper[7337]: I0312 18:19:04.560707 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:19:04.581083 master-0 kubenswrapper[7337]: W0312 18:19:04.581038 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee4c1949_96b4_4444_9675_9df1d46f681e.slice/crio-dd6da0ee34b8892cb152f5bedf147197e5f0cc5b453d7c1a52f8c962aaace2e8 WatchSource:0}: Error finding container dd6da0ee34b8892cb152f5bedf147197e5f0cc5b453d7c1a52f8c962aaace2e8: Status 404 returned error can't find the container with id dd6da0ee34b8892cb152f5bedf147197e5f0cc5b453d7c1a52f8c962aaace2e8 Mar 12 18:19:05.138022 master-0 kubenswrapper[7337]: I0312 18:19:05.137920 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" event={"ID":"ee4c1949-96b4-4444-9675-9df1d46f681e","Type":"ContainerStarted","Data":"488293f6a0a5ffc939b73e8e291035b18dd6b6d9c6030cee524df83362585aa5"} Mar 12 18:19:05.138022 master-0 kubenswrapper[7337]: I0312 18:19:05.137970 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" event={"ID":"ee4c1949-96b4-4444-9675-9df1d46f681e","Type":"ContainerStarted","Data":"55f44f89a0ddfa17022efb42d5b69490ffb4f27463e27a43d9ad2629d1fed3e4"} Mar 12 18:19:05.138022 master-0 kubenswrapper[7337]: I0312 18:19:05.137984 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" event={"ID":"ee4c1949-96b4-4444-9675-9df1d46f681e","Type":"ContainerStarted","Data":"dd6da0ee34b8892cb152f5bedf147197e5f0cc5b453d7c1a52f8c962aaace2e8"} Mar 12 18:19:05.731322 master-0 kubenswrapper[7337]: I0312 18:19:05.731254 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="37392bec-4d79-4a65-bc41-6708d9edab46" path="/var/lib/kubelet/pods/37392bec-4d79-4a65-bc41-6708d9edab46/volumes" Mar 12 18:19:06.051076 master-0 kubenswrapper[7337]: I0312 18:19:06.051004 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv"] Mar 12 18:19:06.051942 master-0 kubenswrapper[7337]: I0312 18:19:06.051909 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv" Mar 12 18:19:06.055687 master-0 kubenswrapper[7337]: I0312 18:19:06.055649 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-fbn8j" Mar 12 18:19:06.055895 master-0 kubenswrapper[7337]: I0312 18:19:06.055875 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 12 18:19:06.069548 master-0 kubenswrapper[7337]: I0312 18:19:06.069470 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv"] Mar 12 18:19:06.147641 master-0 kubenswrapper[7337]: I0312 18:19:06.147589 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" event={"ID":"ee4c1949-96b4-4444-9675-9df1d46f681e","Type":"ContainerStarted","Data":"6e50e1058b13a67d2053d91414b746e0753a5a7ce280058cb370a31e4acd3999"} Mar 12 18:19:06.163421 master-0 kubenswrapper[7337]: I0312 18:19:06.163362 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5lf8\" (UniqueName: \"kubernetes.io/projected/ee55b576-6b8d-4217-b5a7-93b023a1e885-kube-api-access-j5lf8\") pod \"machine-config-controller-ff46b7bdf-k5hsv\" (UID: \"ee55b576-6b8d-4217-b5a7-93b023a1e885\") " 
pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv" Mar 12 18:19:06.163640 master-0 kubenswrapper[7337]: I0312 18:19:06.163436 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee55b576-6b8d-4217-b5a7-93b023a1e885-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-k5hsv\" (UID: \"ee55b576-6b8d-4217-b5a7-93b023a1e885\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv" Mar 12 18:19:06.163640 master-0 kubenswrapper[7337]: I0312 18:19:06.163477 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee55b576-6b8d-4217-b5a7-93b023a1e885-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-k5hsv\" (UID: \"ee55b576-6b8d-4217-b5a7-93b023a1e885\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv" Mar 12 18:19:06.165566 master-0 kubenswrapper[7337]: I0312 18:19:06.165481 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" podStartSLOduration=2.16546285 podStartE2EDuration="2.16546285s" podCreationTimestamp="2026-03-12 18:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:19:06.162873474 +0000 UTC m=+346.631474421" watchObservedRunningTime="2026-03-12 18:19:06.16546285 +0000 UTC m=+346.634063807" Mar 12 18:19:06.265362 master-0 kubenswrapper[7337]: I0312 18:19:06.265304 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5lf8\" (UniqueName: \"kubernetes.io/projected/ee55b576-6b8d-4217-b5a7-93b023a1e885-kube-api-access-j5lf8\") pod \"machine-config-controller-ff46b7bdf-k5hsv\" 
(UID: \"ee55b576-6b8d-4217-b5a7-93b023a1e885\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv" Mar 12 18:19:06.266103 master-0 kubenswrapper[7337]: I0312 18:19:06.265803 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee55b576-6b8d-4217-b5a7-93b023a1e885-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-k5hsv\" (UID: \"ee55b576-6b8d-4217-b5a7-93b023a1e885\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv" Mar 12 18:19:06.266792 master-0 kubenswrapper[7337]: I0312 18:19:06.266679 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee55b576-6b8d-4217-b5a7-93b023a1e885-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-k5hsv\" (UID: \"ee55b576-6b8d-4217-b5a7-93b023a1e885\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv" Mar 12 18:19:06.266858 master-0 kubenswrapper[7337]: I0312 18:19:06.266816 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee55b576-6b8d-4217-b5a7-93b023a1e885-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-k5hsv\" (UID: \"ee55b576-6b8d-4217-b5a7-93b023a1e885\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv" Mar 12 18:19:06.272294 master-0 kubenswrapper[7337]: I0312 18:19:06.272247 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee55b576-6b8d-4217-b5a7-93b023a1e885-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-k5hsv\" (UID: \"ee55b576-6b8d-4217-b5a7-93b023a1e885\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv" Mar 12 18:19:06.291899 master-0 kubenswrapper[7337]: I0312 18:19:06.291836 
7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5lf8\" (UniqueName: \"kubernetes.io/projected/ee55b576-6b8d-4217-b5a7-93b023a1e885-kube-api-access-j5lf8\") pod \"machine-config-controller-ff46b7bdf-k5hsv\" (UID: \"ee55b576-6b8d-4217-b5a7-93b023a1e885\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv" Mar 12 18:19:06.370074 master-0 kubenswrapper[7337]: I0312 18:19:06.369966 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv" Mar 12 18:19:06.785297 master-0 kubenswrapper[7337]: I0312 18:19:06.785193 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv"] Mar 12 18:19:07.122568 master-0 kubenswrapper[7337]: I0312 18:19:07.122436 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-g4dkj"] Mar 12 18:19:07.123579 master-0 kubenswrapper[7337]: I0312 18:19:07.123545 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-g4dkj" Mar 12 18:19:07.128673 master-0 kubenswrapper[7337]: I0312 18:19:07.128633 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-tzgs9"] Mar 12 18:19:07.129582 master-0 kubenswrapper[7337]: I0312 18:19:07.129561 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-tzgs9" Mar 12 18:19:07.133041 master-0 kubenswrapper[7337]: I0312 18:19:07.132940 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 12 18:19:07.133436 master-0 kubenswrapper[7337]: I0312 18:19:07.133406 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-79f8cd6fdd-79bhf"] Mar 12 18:19:07.134556 master-0 kubenswrapper[7337]: I0312 18:19:07.134532 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:19:07.135720 master-0 kubenswrapper[7337]: I0312 18:19:07.135692 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 12 18:19:07.136250 master-0 kubenswrapper[7337]: I0312 18:19:07.136208 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 12 18:19:07.137475 master-0 kubenswrapper[7337]: I0312 18:19:07.137436 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 12 18:19:07.137567 master-0 kubenswrapper[7337]: I0312 18:19:07.137504 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 12 18:19:07.137567 master-0 kubenswrapper[7337]: I0312 18:19:07.137557 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 12 18:19:07.137975 master-0 kubenswrapper[7337]: I0312 18:19:07.137932 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 12 18:19:07.139533 master-0 kubenswrapper[7337]: I0312 18:19:07.139465 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-g4dkj"] Mar 12 18:19:07.152887 master-0 kubenswrapper[7337]: I0312 18:19:07.152780 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-tzgs9"] Mar 12 18:19:07.155399 master-0 kubenswrapper[7337]: I0312 18:19:07.155361 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv" event={"ID":"ee55b576-6b8d-4217-b5a7-93b023a1e885","Type":"ContainerStarted","Data":"bd07a833bd13c326bf2e9cf669af94cace2c5cebcc486e7cbe2dc53fa4661cab"} Mar 12 18:19:07.155399 master-0 kubenswrapper[7337]: I0312 18:19:07.155398 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv" event={"ID":"ee55b576-6b8d-4217-b5a7-93b023a1e885","Type":"ContainerStarted","Data":"89bb79332bd90451e3ead353cdd4b75a60a28b818e1dea0d61fbaca6becd66b9"} Mar 12 18:19:07.155729 master-0 kubenswrapper[7337]: I0312 18:19:07.155409 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv" event={"ID":"ee55b576-6b8d-4217-b5a7-93b023a1e885","Type":"ContainerStarted","Data":"49a13810e28c69eccfa523be3ac0813defa610868fd3abaf3cd37d9177c29502"} Mar 12 18:19:07.192321 master-0 kubenswrapper[7337]: I0312 18:19:07.188305 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/518ffff8-8119-41be-8b76-ce49d5751254-metrics-certs\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:19:07.192321 master-0 kubenswrapper[7337]: I0312 18:19:07.188378 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tls-certificates\" (UniqueName: \"kubernetes.io/secret/52d3dc87-0bf8-4a62-a6fc-0ffa6060c6a8-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-tzgs9\" (UID: \"52d3dc87-0bf8-4a62-a6fc-0ffa6060c6a8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-tzgs9" Mar 12 18:19:07.192321 master-0 kubenswrapper[7337]: I0312 18:19:07.188701 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/518ffff8-8119-41be-8b76-ce49d5751254-service-ca-bundle\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:19:07.192321 master-0 kubenswrapper[7337]: I0312 18:19:07.188753 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n4d5\" (UniqueName: \"kubernetes.io/projected/b648b6de-59a6-42da-84e2-77ea0264ae25-kube-api-access-7n4d5\") pod \"network-check-source-7c67b67d47-g4dkj\" (UID: \"b648b6de-59a6-42da-84e2-77ea0264ae25\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-g4dkj" Mar 12 18:19:07.192321 master-0 kubenswrapper[7337]: I0312 18:19:07.188774 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/518ffff8-8119-41be-8b76-ce49d5751254-default-certificate\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:19:07.192321 master-0 kubenswrapper[7337]: I0312 18:19:07.188825 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4glbr\" (UniqueName: \"kubernetes.io/projected/518ffff8-8119-41be-8b76-ce49d5751254-kube-api-access-4glbr\") pod 
\"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:19:07.192321 master-0 kubenswrapper[7337]: I0312 18:19:07.188881 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/518ffff8-8119-41be-8b76-ce49d5751254-stats-auth\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:19:07.202607 master-0 kubenswrapper[7337]: I0312 18:19:07.202457 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv" podStartSLOduration=1.202436592 podStartE2EDuration="1.202436592s" podCreationTimestamp="2026-03-12 18:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:19:07.202388701 +0000 UTC m=+347.670989658" watchObservedRunningTime="2026-03-12 18:19:07.202436592 +0000 UTC m=+347.671037539" Mar 12 18:19:07.290935 master-0 kubenswrapper[7337]: I0312 18:19:07.290819 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/52d3dc87-0bf8-4a62-a6fc-0ffa6060c6a8-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-tzgs9\" (UID: \"52d3dc87-0bf8-4a62-a6fc-0ffa6060c6a8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-tzgs9" Mar 12 18:19:07.290935 master-0 kubenswrapper[7337]: I0312 18:19:07.290889 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/518ffff8-8119-41be-8b76-ce49d5751254-service-ca-bundle\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: 
\"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:19:07.291173 master-0 kubenswrapper[7337]: I0312 18:19:07.290930 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n4d5\" (UniqueName: \"kubernetes.io/projected/b648b6de-59a6-42da-84e2-77ea0264ae25-kube-api-access-7n4d5\") pod \"network-check-source-7c67b67d47-g4dkj\" (UID: \"b648b6de-59a6-42da-84e2-77ea0264ae25\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-g4dkj" Mar 12 18:19:07.291438 master-0 kubenswrapper[7337]: I0312 18:19:07.291390 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/518ffff8-8119-41be-8b76-ce49d5751254-default-certificate\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:19:07.291578 master-0 kubenswrapper[7337]: I0312 18:19:07.291545 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4glbr\" (UniqueName: \"kubernetes.io/projected/518ffff8-8119-41be-8b76-ce49d5751254-kube-api-access-4glbr\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:19:07.291763 master-0 kubenswrapper[7337]: I0312 18:19:07.291717 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/518ffff8-8119-41be-8b76-ce49d5751254-stats-auth\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:19:07.291811 master-0 kubenswrapper[7337]: I0312 18:19:07.291795 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/518ffff8-8119-41be-8b76-ce49d5751254-metrics-certs\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:19:07.291987 master-0 kubenswrapper[7337]: I0312 18:19:07.291959 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/518ffff8-8119-41be-8b76-ce49d5751254-service-ca-bundle\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:19:07.294263 master-0 kubenswrapper[7337]: I0312 18:19:07.294210 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/52d3dc87-0bf8-4a62-a6fc-0ffa6060c6a8-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-tzgs9\" (UID: \"52d3dc87-0bf8-4a62-a6fc-0ffa6060c6a8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-tzgs9" Mar 12 18:19:07.295194 master-0 kubenswrapper[7337]: I0312 18:19:07.295141 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/518ffff8-8119-41be-8b76-ce49d5751254-default-certificate\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:19:07.295700 master-0 kubenswrapper[7337]: I0312 18:19:07.295357 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/518ffff8-8119-41be-8b76-ce49d5751254-stats-auth\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:19:07.295700 master-0 kubenswrapper[7337]: I0312 18:19:07.295606 7337 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/518ffff8-8119-41be-8b76-ce49d5751254-metrics-certs\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:19:07.307103 master-0 kubenswrapper[7337]: I0312 18:19:07.307073 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n4d5\" (UniqueName: \"kubernetes.io/projected/b648b6de-59a6-42da-84e2-77ea0264ae25-kube-api-access-7n4d5\") pod \"network-check-source-7c67b67d47-g4dkj\" (UID: \"b648b6de-59a6-42da-84e2-77ea0264ae25\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-g4dkj" Mar 12 18:19:07.318101 master-0 kubenswrapper[7337]: I0312 18:19:07.318062 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4glbr\" (UniqueName: \"kubernetes.io/projected/518ffff8-8119-41be-8b76-ce49d5751254-kube-api-access-4glbr\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:19:07.445412 master-0 kubenswrapper[7337]: I0312 18:19:07.445346 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-g4dkj" Mar 12 18:19:07.471625 master-0 kubenswrapper[7337]: I0312 18:19:07.471561 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-tzgs9" Mar 12 18:19:07.489309 master-0 kubenswrapper[7337]: I0312 18:19:07.489226 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:19:07.523263 master-0 kubenswrapper[7337]: W0312 18:19:07.522012 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod518ffff8_8119_41be_8b76_ce49d5751254.slice/crio-6a236b16d484393f57933afd10e1f7dd4dd1ef7cdb2b760e6b6520b399e5ee85 WatchSource:0}: Error finding container 6a236b16d484393f57933afd10e1f7dd4dd1ef7cdb2b760e6b6520b399e5ee85: Status 404 returned error can't find the container with id 6a236b16d484393f57933afd10e1f7dd4dd1ef7cdb2b760e6b6520b399e5ee85 Mar 12 18:19:07.525821 master-0 kubenswrapper[7337]: I0312 18:19:07.525804 7337 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 18:19:07.915981 master-0 kubenswrapper[7337]: I0312 18:19:07.915825 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-g4dkj"] Mar 12 18:19:07.923662 master-0 kubenswrapper[7337]: I0312 18:19:07.923621 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-tzgs9"] Mar 12 18:19:07.925068 master-0 kubenswrapper[7337]: W0312 18:19:07.925009 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb648b6de_59a6_42da_84e2_77ea0264ae25.slice/crio-473fbb88708371a6bebda6f8bc6fbc876db715f79766d57af954db0a509d99f7 WatchSource:0}: Error finding container 473fbb88708371a6bebda6f8bc6fbc876db715f79766d57af954db0a509d99f7: Status 404 returned error can't find the container with id 473fbb88708371a6bebda6f8bc6fbc876db715f79766d57af954db0a509d99f7 Mar 12 18:19:07.926091 master-0 kubenswrapper[7337]: W0312 18:19:07.926051 7337 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52d3dc87_0bf8_4a62_a6fc_0ffa6060c6a8.slice/crio-477c2eb598b783cdff738fbc37cca5c05334e3fbafc9bde3daf1f7428b823f9e WatchSource:0}: Error finding container 477c2eb598b783cdff738fbc37cca5c05334e3fbafc9bde3daf1f7428b823f9e: Status 404 returned error can't find the container with id 477c2eb598b783cdff738fbc37cca5c05334e3fbafc9bde3daf1f7428b823f9e Mar 12 18:19:08.096552 master-0 kubenswrapper[7337]: I0312 18:19:08.096019 7337 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 12 18:19:08.162179 master-0 kubenswrapper[7337]: I0312 18:19:08.162099 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-g4dkj" event={"ID":"b648b6de-59a6-42da-84e2-77ea0264ae25","Type":"ContainerStarted","Data":"cc4cf3fdeaf4f0b39a371db6c37e19fcd48bc89ea7a8e2bd736d5e1802e21e1d"} Mar 12 18:19:08.162179 master-0 kubenswrapper[7337]: I0312 18:19:08.162154 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-g4dkj" event={"ID":"b648b6de-59a6-42da-84e2-77ea0264ae25","Type":"ContainerStarted","Data":"473fbb88708371a6bebda6f8bc6fbc876db715f79766d57af954db0a509d99f7"} Mar 12 18:19:08.164613 master-0 kubenswrapper[7337]: I0312 18:19:08.164574 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" event={"ID":"518ffff8-8119-41be-8b76-ce49d5751254","Type":"ContainerStarted","Data":"6a236b16d484393f57933afd10e1f7dd4dd1ef7cdb2b760e6b6520b399e5ee85"} Mar 12 18:19:08.166724 master-0 kubenswrapper[7337]: I0312 18:19:08.166656 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-tzgs9" 
event={"ID":"52d3dc87-0bf8-4a62-a6fc-0ffa6060c6a8","Type":"ContainerStarted","Data":"477c2eb598b783cdff738fbc37cca5c05334e3fbafc9bde3daf1f7428b823f9e"} Mar 12 18:19:09.743675 master-0 kubenswrapper[7337]: I0312 18:19:09.743601 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-g4dkj" podStartSLOduration=419.743581815 podStartE2EDuration="6m59.743581815s" podCreationTimestamp="2026-03-12 18:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:19:08.183942546 +0000 UTC m=+348.652543523" watchObservedRunningTime="2026-03-12 18:19:09.743581815 +0000 UTC m=+350.212182772" Mar 12 18:19:10.178285 master-0 kubenswrapper[7337]: I0312 18:19:10.178234 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" event={"ID":"518ffff8-8119-41be-8b76-ce49d5751254","Type":"ContainerStarted","Data":"41887ee13b262a1bb752082a6699313d133920e514fab8c14a0b03b0f36c3f44"} Mar 12 18:19:10.179509 master-0 kubenswrapper[7337]: I0312 18:19:10.179480 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-tzgs9" event={"ID":"52d3dc87-0bf8-4a62-a6fc-0ffa6060c6a8","Type":"ContainerStarted","Data":"29b15d5479d95508cec2db4d56a770b9b1f7030f461db9aed52919e442adbdfd"} Mar 12 18:19:10.180382 master-0 kubenswrapper[7337]: I0312 18:19:10.180338 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-tzgs9" Mar 12 18:19:10.187267 master-0 kubenswrapper[7337]: I0312 18:19:10.187228 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-tzgs9" Mar 12 18:19:10.202988 master-0 kubenswrapper[7337]: I0312 18:19:10.202911 
7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podStartSLOduration=319.815913224 podStartE2EDuration="5m22.202892283s" podCreationTimestamp="2026-03-12 18:13:48 +0000 UTC" firstStartedPulling="2026-03-12 18:19:07.525762182 +0000 UTC m=+347.994363129" lastFinishedPulling="2026-03-12 18:19:09.912741241 +0000 UTC m=+350.381342188" observedRunningTime="2026-03-12 18:19:10.19925049 +0000 UTC m=+350.667851457" watchObservedRunningTime="2026-03-12 18:19:10.202892283 +0000 UTC m=+350.671493230" Mar 12 18:19:10.216914 master-0 kubenswrapper[7337]: I0312 18:19:10.216847 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-tzgs9" podStartSLOduration=297.227359928 podStartE2EDuration="4m59.21682947s" podCreationTimestamp="2026-03-12 18:14:11 +0000 UTC" firstStartedPulling="2026-03-12 18:19:07.928986075 +0000 UTC m=+348.397587042" lastFinishedPulling="2026-03-12 18:19:09.918455607 +0000 UTC m=+350.387056584" observedRunningTime="2026-03-12 18:19:10.215692131 +0000 UTC m=+350.684293098" watchObservedRunningTime="2026-03-12 18:19:10.21682947 +0000 UTC m=+350.685430417" Mar 12 18:19:10.447867 master-0 kubenswrapper[7337]: I0312 18:19:10.447803 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx"] Mar 12 18:19:10.448661 master-0 kubenswrapper[7337]: I0312 18:19:10.448597 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 18:19:10.450895 master-0 kubenswrapper[7337]: I0312 18:19:10.450861 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 12 18:19:10.451010 master-0 kubenswrapper[7337]: I0312 18:19:10.450972 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 12 18:19:10.451010 master-0 kubenswrapper[7337]: I0312 18:19:10.450997 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 12 18:19:10.451079 master-0 kubenswrapper[7337]: I0312 18:19:10.451043 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-pp56m" Mar 12 18:19:10.469267 master-0 kubenswrapper[7337]: I0312 18:19:10.469219 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx"] Mar 12 18:19:10.491790 master-0 kubenswrapper[7337]: I0312 18:19:10.491407 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:19:10.494256 master-0 kubenswrapper[7337]: I0312 18:19:10.494212 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:19:10.494256 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:19:10.494256 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:19:10.494256 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:19:10.494427 master-0 kubenswrapper[7337]: I0312 18:19:10.494277 7337 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:19:10.523561 master-0 kubenswrapper[7337]: I0312 18:19:10.522896 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2jzxq"] Mar 12 18:19:10.523745 master-0 kubenswrapper[7337]: I0312 18:19:10.523689 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2jzxq" Mar 12 18:19:10.525690 master-0 kubenswrapper[7337]: I0312 18:19:10.525659 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-djr46" Mar 12 18:19:10.526861 master-0 kubenswrapper[7337]: I0312 18:19:10.525995 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 12 18:19:10.526861 master-0 kubenswrapper[7337]: I0312 18:19:10.526173 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 12 18:19:10.548158 master-0 kubenswrapper[7337]: I0312 18:19:10.548104 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h65dg\" (UniqueName: \"kubernetes.io/projected/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-kube-api-access-h65dg\") pod \"machine-config-server-2jzxq\" (UID: \"4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6\") " pod="openshift-machine-config-operator/machine-config-server-2jzxq" Mar 12 18:19:10.548158 master-0 kubenswrapper[7337]: I0312 18:19:10.548148 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-tls\") 
pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 18:19:10.548393 master-0 kubenswrapper[7337]: I0312 18:19:10.548228 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 18:19:10.548393 master-0 kubenswrapper[7337]: I0312 18:19:10.548304 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-node-bootstrap-token\") pod \"machine-config-server-2jzxq\" (UID: \"4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6\") " pod="openshift-machine-config-operator/machine-config-server-2jzxq" Mar 12 18:19:10.548393 master-0 kubenswrapper[7337]: I0312 18:19:10.548322 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-certs\") pod \"machine-config-server-2jzxq\" (UID: \"4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6\") " pod="openshift-machine-config-operator/machine-config-server-2jzxq" Mar 12 18:19:10.548393 master-0 kubenswrapper[7337]: I0312 18:19:10.548354 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7pjn\" (UniqueName: \"kubernetes.io/projected/41c1bd85-369e-4341-9e80-8b4b248b5572-kube-api-access-q7pjn\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 
18:19:10.548393 master-0 kubenswrapper[7337]: I0312 18:19:10.548374 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/41c1bd85-369e-4341-9e80-8b4b248b5572-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 18:19:10.649181 master-0 kubenswrapper[7337]: I0312 18:19:10.649122 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h65dg\" (UniqueName: \"kubernetes.io/projected/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-kube-api-access-h65dg\") pod \"machine-config-server-2jzxq\" (UID: \"4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6\") " pod="openshift-machine-config-operator/machine-config-server-2jzxq" Mar 12 18:19:10.649181 master-0 kubenswrapper[7337]: I0312 18:19:10.649170 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 18:19:10.649427 master-0 kubenswrapper[7337]: I0312 18:19:10.649222 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 18:19:10.649500 master-0 kubenswrapper[7337]: I0312 18:19:10.649419 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-certs\") pod \"machine-config-server-2jzxq\" (UID: \"4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6\") " pod="openshift-machine-config-operator/machine-config-server-2jzxq" Mar 12 18:19:10.650132 master-0 kubenswrapper[7337]: I0312 18:19:10.649960 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-node-bootstrap-token\") pod \"machine-config-server-2jzxq\" (UID: \"4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6\") " pod="openshift-machine-config-operator/machine-config-server-2jzxq" Mar 12 18:19:10.650215 master-0 kubenswrapper[7337]: I0312 18:19:10.650198 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7pjn\" (UniqueName: \"kubernetes.io/projected/41c1bd85-369e-4341-9e80-8b4b248b5572-kube-api-access-q7pjn\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 18:19:10.650492 master-0 kubenswrapper[7337]: I0312 18:19:10.650425 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/41c1bd85-369e-4341-9e80-8b4b248b5572-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 18:19:10.652987 master-0 kubenswrapper[7337]: I0312 18:19:10.652961 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-certs\") pod \"machine-config-server-2jzxq\" (UID: \"4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6\") " pod="openshift-machine-config-operator/machine-config-server-2jzxq" Mar 12 18:19:10.654560 master-0 kubenswrapper[7337]: I0312 18:19:10.653648 
7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 18:19:10.654560 master-0 kubenswrapper[7337]: I0312 18:19:10.653647 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 18:19:10.654560 master-0 kubenswrapper[7337]: I0312 18:19:10.653669 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-node-bootstrap-token\") pod \"machine-config-server-2jzxq\" (UID: \"4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6\") " pod="openshift-machine-config-operator/machine-config-server-2jzxq" Mar 12 18:19:10.654560 master-0 kubenswrapper[7337]: I0312 18:19:10.653881 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/41c1bd85-369e-4341-9e80-8b4b248b5572-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 18:19:10.670797 master-0 kubenswrapper[7337]: I0312 18:19:10.666610 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h65dg\" (UniqueName: \"kubernetes.io/projected/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-kube-api-access-h65dg\") pod \"machine-config-server-2jzxq\" (UID: 
\"4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6\") " pod="openshift-machine-config-operator/machine-config-server-2jzxq" Mar 12 18:19:10.673555 master-0 kubenswrapper[7337]: I0312 18:19:10.673501 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7pjn\" (UniqueName: \"kubernetes.io/projected/41c1bd85-369e-4341-9e80-8b4b248b5572-kube-api-access-q7pjn\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 18:19:10.765294 master-0 kubenswrapper[7337]: I0312 18:19:10.765168 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 18:19:10.843531 master-0 kubenswrapper[7337]: I0312 18:19:10.843463 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2jzxq" Mar 12 18:19:11.185556 master-0 kubenswrapper[7337]: I0312 18:19:11.185479 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2jzxq" event={"ID":"4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6","Type":"ContainerStarted","Data":"086d489635466231287ff5fd72cc5f0e732cf50aed7eafb1a18d6cb6ef005f60"} Mar 12 18:19:11.185556 master-0 kubenswrapper[7337]: I0312 18:19:11.185557 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2jzxq" event={"ID":"4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6","Type":"ContainerStarted","Data":"5a8691e7dd271734f7d5ba67a7c54479d001ddcb15882ee789f7857b1fdecfe2"} Mar 12 18:19:11.192594 master-0 kubenswrapper[7337]: I0312 18:19:11.192555 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx"] Mar 12 18:19:11.196411 master-0 kubenswrapper[7337]: W0312 18:19:11.196352 7337 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41c1bd85_369e_4341_9e80_8b4b248b5572.slice/crio-66ebd53076fd17791334e64546a2f2ecb5fadc07eb32a382f94ef82be445ec00 WatchSource:0}: Error finding container 66ebd53076fd17791334e64546a2f2ecb5fadc07eb32a382f94ef82be445ec00: Status 404 returned error can't find the container with id 66ebd53076fd17791334e64546a2f2ecb5fadc07eb32a382f94ef82be445ec00
Mar 12 18:19:11.206877 master-0 kubenswrapper[7337]: I0312 18:19:11.206810 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2jzxq" podStartSLOduration=1.206789782 podStartE2EDuration="1.206789782s" podCreationTimestamp="2026-03-12 18:19:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:19:11.205875209 +0000 UTC m=+351.674476166" watchObservedRunningTime="2026-03-12 18:19:11.206789782 +0000 UTC m=+351.675390729"
Mar 12 18:19:11.492502 master-0 kubenswrapper[7337]: I0312 18:19:11.492354 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:11.492502 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:11.492502 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:11.492502 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:11.492502 master-0 kubenswrapper[7337]: I0312 18:19:11.492430 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:12.204217 master-0 kubenswrapper[7337]: I0312 18:19:12.204150 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" event={"ID":"41c1bd85-369e-4341-9e80-8b4b248b5572","Type":"ContainerStarted","Data":"66ebd53076fd17791334e64546a2f2ecb5fadc07eb32a382f94ef82be445ec00"}
Mar 12 18:19:12.492590 master-0 kubenswrapper[7337]: I0312 18:19:12.492452 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:12.492590 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:12.492590 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:12.492590 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:12.492590 master-0 kubenswrapper[7337]: I0312 18:19:12.492554 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:13.210924 master-0 kubenswrapper[7337]: I0312 18:19:13.210886 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" event={"ID":"41c1bd85-369e-4341-9e80-8b4b248b5572","Type":"ContainerStarted","Data":"c49b932a5716b8eb28443b8d18118aee3e2b591b7f30c086c4c431cc4d7b0949"}
Mar 12 18:19:13.211425 master-0 kubenswrapper[7337]: I0312 18:19:13.211409 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" event={"ID":"41c1bd85-369e-4341-9e80-8b4b248b5572","Type":"ContainerStarted","Data":"cdac2cea22cc66328d8d2e4113c02fabead899bf33f5ea2d0661aa8d96c2eacf"}
Mar 12 18:19:13.227639 master-0 kubenswrapper[7337]: I0312 18:19:13.227584 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" podStartSLOduration=1.632339634 podStartE2EDuration="3.227567141s" podCreationTimestamp="2026-03-12 18:19:10 +0000 UTC" firstStartedPulling="2026-03-12 18:19:11.197965796 +0000 UTC m=+351.666566743" lastFinishedPulling="2026-03-12 18:19:12.793193303 +0000 UTC m=+353.261794250" observedRunningTime="2026-03-12 18:19:13.226100264 +0000 UTC m=+353.694701221" watchObservedRunningTime="2026-03-12 18:19:13.227567141 +0000 UTC m=+353.696168098"
Mar 12 18:19:13.492289 master-0 kubenswrapper[7337]: I0312 18:19:13.491857 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:13.492289 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:13.492289 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:13.492289 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:13.492289 master-0 kubenswrapper[7337]: I0312 18:19:13.491920 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:14.499327 master-0 kubenswrapper[7337]: I0312 18:19:14.499228 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:14.499327 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:14.499327 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:14.499327 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:14.505386 master-0 kubenswrapper[7337]: I0312 18:19:14.503141 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:14.793739 master-0 kubenswrapper[7337]: I0312 18:19:14.793688 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9"]
Mar 12 18:19:14.795100 master-0 kubenswrapper[7337]: I0312 18:19:14.795049 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9"
Mar 12 18:19:14.798422 master-0 kubenswrapper[7337]: I0312 18:19:14.798387 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-w4bj7"
Mar 12 18:19:14.798820 master-0 kubenswrapper[7337]: I0312 18:19:14.798806 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 12 18:19:14.799093 master-0 kubenswrapper[7337]: I0312 18:19:14.799077 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Mar 12 18:19:14.811620 master-0 kubenswrapper[7337]: I0312 18:19:14.811585 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9"]
Mar 12 18:19:14.824723 master-0 kubenswrapper[7337]: I0312 18:19:14.824688 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"]
Mar 12 18:19:14.856970 master-0 kubenswrapper[7337]: I0312 18:19:14.850952 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"]
Mar 12 18:19:14.856970 master-0 kubenswrapper[7337]: I0312 18:19:14.851067 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:19:14.856970 master-0 kubenswrapper[7337]: I0312 18:19:14.853492 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Mar 12 18:19:14.856970 master-0 kubenswrapper[7337]: I0312 18:19:14.854673 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 12 18:19:14.856970 master-0 kubenswrapper[7337]: I0312 18:19:14.854261 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Mar 12 18:19:14.857898 master-0 kubenswrapper[7337]: I0312 18:19:14.857864 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-6v462"]
Mar 12 18:19:14.859292 master-0 kubenswrapper[7337]: I0312 18:19:14.859269 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:14.859483 master-0 kubenswrapper[7337]: I0312 18:19:14.859457 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-8nmsp"
Mar 12 18:19:14.861103 master-0 kubenswrapper[7337]: I0312 18:19:14.861075 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 12 18:19:14.861899 master-0 kubenswrapper[7337]: I0312 18:19:14.861867 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 12 18:19:14.862842 master-0 kubenswrapper[7337]: I0312 18:19:14.862817 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-ffmfp"
Mar 12 18:19:14.911606 master-0 kubenswrapper[7337]: I0312 18:19:14.911560 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/78c13011-7a79-445f-807c-4f5e75643549-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9"
Mar 12 18:19:14.911776 master-0 kubenswrapper[7337]: I0312 18:19:14.911646 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmntw\" (UniqueName: \"kubernetes.io/projected/78c13011-7a79-445f-807c-4f5e75643549-kube-api-access-bmntw\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9"
Mar 12 18:19:14.911776 master-0 kubenswrapper[7337]: I0312 18:19:14.911732 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/78c13011-7a79-445f-807c-4f5e75643549-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9"
Mar 12 18:19:14.911776 master-0 kubenswrapper[7337]: I0312 18:19:14.911762 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/78c13011-7a79-445f-807c-4f5e75643549-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9"
Mar 12 18:19:15.012827 master-0 kubenswrapper[7337]: I0312 18:19:15.012776 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:19:15.012827 master-0 kubenswrapper[7337]: I0312 18:19:15.012835 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/3d77a98a-0176-4924-81d3-8e9890852b38-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:19:15.013307 master-0 kubenswrapper[7337]: I0312 18:19:15.012967 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52svc\" (UniqueName: \"kubernetes.io/projected/adb0dbbf-458d-46f5-b236-d4904e125418-kube-api-access-52svc\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.013489 master-0 kubenswrapper[7337]: I0312 18:19:15.013467 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f72ng\" (UniqueName: \"kubernetes.io/projected/3d77a98a-0176-4924-81d3-8e9890852b38-kube-api-access-f72ng\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:19:15.013554 master-0 kubenswrapper[7337]: I0312 18:19:15.013527 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/adb0dbbf-458d-46f5-b236-d4904e125418-sys\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.013591 master-0 kubenswrapper[7337]: I0312 18:19:15.013581 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-tls\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.013639 master-0 kubenswrapper[7337]: I0312 18:19:15.013613 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmntw\" (UniqueName: \"kubernetes.io/projected/78c13011-7a79-445f-807c-4f5e75643549-kube-api-access-bmntw\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9"
Mar 12 18:19:15.013674 master-0 kubenswrapper[7337]: I0312 18:19:15.013644 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:19:15.013707 master-0 kubenswrapper[7337]: I0312 18:19:15.013688 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/adb0dbbf-458d-46f5-b236-d4904e125418-root\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.013838 master-0 kubenswrapper[7337]: I0312 18:19:15.013798 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-textfile\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.013942 master-0 kubenswrapper[7337]: I0312 18:19:15.013924 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/78c13011-7a79-445f-807c-4f5e75643549-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9"
Mar 12 18:19:15.013976 master-0 kubenswrapper[7337]: I0312 18:19:15.013953 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d77a98a-0176-4924-81d3-8e9890852b38-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:19:15.014015 master-0 kubenswrapper[7337]: I0312 18:19:15.013980 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.014015 master-0 kubenswrapper[7337]: I0312 18:19:15.014007 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/78c13011-7a79-445f-807c-4f5e75643549-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9"
Mar 12 18:19:15.014090 master-0 kubenswrapper[7337]: I0312 18:19:15.014074 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:19:15.014124 master-0 kubenswrapper[7337]: I0312 18:19:15.014096 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/adb0dbbf-458d-46f5-b236-d4904e125418-metrics-client-ca\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.014161 master-0 kubenswrapper[7337]: I0312 18:19:15.014126 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-wtmp\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.014161 master-0 kubenswrapper[7337]: I0312 18:19:15.014152 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/78c13011-7a79-445f-807c-4f5e75643549-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9"
Mar 12 18:19:15.015632 master-0 kubenswrapper[7337]: I0312 18:19:15.015608 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/78c13011-7a79-445f-807c-4f5e75643549-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9"
Mar 12 18:19:15.019101 master-0 kubenswrapper[7337]: I0312 18:19:15.018971 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/78c13011-7a79-445f-807c-4f5e75643549-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9"
Mar 12 18:19:15.019320 master-0 kubenswrapper[7337]: I0312 18:19:15.019290 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/78c13011-7a79-445f-807c-4f5e75643549-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9"
Mar 12 18:19:15.033486 master-0 kubenswrapper[7337]: I0312 18:19:15.033442 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmntw\" (UniqueName: \"kubernetes.io/projected/78c13011-7a79-445f-807c-4f5e75643549-kube-api-access-bmntw\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9"
Mar 12 18:19:15.115190 master-0 kubenswrapper[7337]: I0312 18:19:15.115048 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/adb0dbbf-458d-46f5-b236-d4904e125418-metrics-client-ca\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.115411 master-0 kubenswrapper[7337]: I0312 18:19:15.115392 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:19:15.115701 master-0 kubenswrapper[7337]: I0312 18:19:15.115629 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-wtmp\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.115815 master-0 kubenswrapper[7337]: I0312 18:19:15.115789 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:19:15.115895 master-0 kubenswrapper[7337]: I0312 18:19:15.115833 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/3d77a98a-0176-4924-81d3-8e9890852b38-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:19:15.115977 master-0 kubenswrapper[7337]: I0312 18:19:15.115953 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52svc\" (UniqueName: \"kubernetes.io/projected/adb0dbbf-458d-46f5-b236-d4904e125418-kube-api-access-52svc\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.116032 master-0 kubenswrapper[7337]: I0312 18:19:15.116009 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f72ng\" (UniqueName: \"kubernetes.io/projected/3d77a98a-0176-4924-81d3-8e9890852b38-kube-api-access-f72ng\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:19:15.116072 master-0 kubenswrapper[7337]: I0312 18:19:15.116023 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/adb0dbbf-458d-46f5-b236-d4904e125418-metrics-client-ca\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.116120 master-0 kubenswrapper[7337]: I0312 18:19:15.116098 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/adb0dbbf-458d-46f5-b236-d4904e125418-sys\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.116192 master-0 kubenswrapper[7337]: I0312 18:19:15.115964 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-wtmp\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.116304 master-0 kubenswrapper[7337]: I0312 18:19:15.116240 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-tls\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.116474 master-0 kubenswrapper[7337]: E0312 18:19:15.116341 7337 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found
Mar 12 18:19:15.116548 master-0 kubenswrapper[7337]: I0312 18:19:15.116504 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/3d77a98a-0176-4924-81d3-8e9890852b38-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:19:15.116605 master-0 kubenswrapper[7337]: I0312 18:19:15.116525 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/adb0dbbf-458d-46f5-b236-d4904e125418-sys\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.116605 master-0 kubenswrapper[7337]: I0312 18:19:15.116430 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:19:15.116605 master-0 kubenswrapper[7337]: E0312 18:19:15.116531 7337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-tls podName:adb0dbbf-458d-46f5-b236-d4904e125418 nodeName:}" failed. No retries permitted until 2026-03-12 18:19:15.616482056 +0000 UTC m=+356.085083003 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-tls") pod "node-exporter-6v462" (UID: "adb0dbbf-458d-46f5-b236-d4904e125418") : secret "node-exporter-tls" not found
Mar 12 18:19:15.116761 master-0 kubenswrapper[7337]: I0312 18:19:15.116636 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/adb0dbbf-458d-46f5-b236-d4904e125418-root\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.116761 master-0 kubenswrapper[7337]: I0312 18:19:15.116702 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-textfile\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.116761 master-0 kubenswrapper[7337]: I0312 18:19:15.116723 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/adb0dbbf-458d-46f5-b236-d4904e125418-root\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.116878 master-0 kubenswrapper[7337]: I0312 18:19:15.116752 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d77a98a-0176-4924-81d3-8e9890852b38-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:19:15.116878 master-0 kubenswrapper[7337]: I0312 18:19:15.116791 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:19:15.116878 master-0 kubenswrapper[7337]: I0312 18:19:15.116807 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.117064 master-0 kubenswrapper[7337]: I0312 18:19:15.117036 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-textfile\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.118215 master-0 kubenswrapper[7337]: I0312 18:19:15.117678 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d77a98a-0176-4924-81d3-8e9890852b38-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:19:15.120288 master-0 kubenswrapper[7337]: I0312 18:19:15.120233 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:19:15.120451 master-0 kubenswrapper[7337]: I0312 18:19:15.120318 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:19:15.121767 master-0 kubenswrapper[7337]: I0312 18:19:15.121745 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9"
Mar 12 18:19:15.124018 master-0 kubenswrapper[7337]: I0312 18:19:15.123987 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.136412 master-0 kubenswrapper[7337]: I0312 18:19:15.136011 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f72ng\" (UniqueName: \"kubernetes.io/projected/3d77a98a-0176-4924-81d3-8e9890852b38-kube-api-access-f72ng\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:19:15.138929 master-0 kubenswrapper[7337]: I0312 18:19:15.138868 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52svc\" (UniqueName: \"kubernetes.io/projected/adb0dbbf-458d-46f5-b236-d4904e125418-kube-api-access-52svc\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.181595 master-0 kubenswrapper[7337]: I0312 18:19:15.179772 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:19:15.491859 master-0 kubenswrapper[7337]: I0312 18:19:15.491754 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:15.491859 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:15.491859 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:15.491859 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:15.491859 master-0 kubenswrapper[7337]: I0312 18:19:15.491819 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:15.583041 master-0 kubenswrapper[7337]: I0312 18:19:15.582986 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9"]
Mar 12 18:19:15.589642 master-0 kubenswrapper[7337]: W0312 18:19:15.589405 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78c13011_7a79_445f_807c_4f5e75643549.slice/crio-c8f19a12a173a3644a8c884e60505576df72aa86c56475b9cb55da771d09977f WatchSource:0}: Error finding container c8f19a12a173a3644a8c884e60505576df72aa86c56475b9cb55da771d09977f: Status 404 returned error can't find the container with id c8f19a12a173a3644a8c884e60505576df72aa86c56475b9cb55da771d09977f
Mar 12 18:19:15.622217 master-0 kubenswrapper[7337]: I0312 18:19:15.622163 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-tls\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.625113 master-0 kubenswrapper[7337]: I0312 18:19:15.625058 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-tls\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.647735 master-0 kubenswrapper[7337]: I0312 18:19:15.647689 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"]
Mar 12 18:19:15.655955 master-0 kubenswrapper[7337]: W0312 18:19:15.655915 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d77a98a_0176_4924_81d3_8e9890852b38.slice/crio-9198031b5b07e64ace92b4c83419c05c8903b54c6277e82aff6a36c6cdfe7576 WatchSource:0}: Error finding container 9198031b5b07e64ace92b4c83419c05c8903b54c6277e82aff6a36c6cdfe7576: Status 404 returned error can't find the container with id 9198031b5b07e64ace92b4c83419c05c8903b54c6277e82aff6a36c6cdfe7576
Mar 12 18:19:15.811670 master-0 kubenswrapper[7337]: I0312 18:19:15.811630 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:19:15.834950 master-0 kubenswrapper[7337]: W0312 18:19:15.834899 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadb0dbbf_458d_46f5_b236_d4904e125418.slice/crio-f58312bc5c3e22538ea35107690dd2a543db5a56cd4a19ebaf6640fbb1518551 WatchSource:0}: Error finding container f58312bc5c3e22538ea35107690dd2a543db5a56cd4a19ebaf6640fbb1518551: Status 404 returned error can't find the container with id f58312bc5c3e22538ea35107690dd2a543db5a56cd4a19ebaf6640fbb1518551
Mar 12 18:19:16.025713 master-0 kubenswrapper[7337]: E0312 18:19:16.025582 7337 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = determining manifest MIME type for docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef4b76f6b989bf3e802d22aff457a019d9c232f0ea8d927ac6ce2d854fe48d7: Manifest does not match provided manifest digest sha256:3ef4b76f6b989bf3e802d22aff457a019d9c232f0ea8d927ac6ce2d854fe48d7" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef4b76f6b989bf3e802d22aff457a019d9c232f0ea8d927ac6ce2d854fe48d7"
Mar 12 18:19:16.025896 master-0 kubenswrapper[7337]: E0312 18:19:16.025754 7337 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Mar 12 18:19:16.025896 master-0 kubenswrapper[7337]: container &Container{Name:kube-state-metrics,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef4b76f6b989bf3e802d22aff457a019d9c232f0ea8d927ac6ce2d854fe48d7,Command:[],Args:[--host=127.0.0.1 --port=8081 --telemetry-host=127.0.0.1 --telemetry-port=8082 --metric-denylist=
Mar 12 18:19:16.025896 master-0 kubenswrapper[7337]: ^kube_secret_labels$,
Mar 12 18:19:16.025896 master-0 kubenswrapper[7337]: ^kube_.+_annotations$,
Mar 12 18:19:16.025896 master-0 kubenswrapper[7337]: ^kube_customresource_.+_annotations_info$,
Mar 12 18:19:16.025896 master-0 kubenswrapper[7337]:
^kube_customresource_.+_labels_info$ Mar 12 18:19:16.025896 master-0 kubenswrapper[7337]: --metric-labels-allowlist=pods=[*],nodes=[*],namespaces=[*],persistentvolumes=[*],persistentvolumeclaims=[*],poddisruptionbudgets=[*] --metric-denylist= Mar 12 18:19:16.025896 master-0 kubenswrapper[7337]: ^kube_.+_created$, Mar 12 18:19:16.025896 master-0 kubenswrapper[7337]: ^kube_.+_metadata_resource_version$, Mar 12 18:19:16.025896 master-0 kubenswrapper[7337]: ^kube_replicaset_metadata_generation$, Mar 12 18:19:16.025896 master-0 kubenswrapper[7337]: ^kube_replicaset_status_observed_generation$, Mar 12 18:19:16.025896 master-0 kubenswrapper[7337]: ^kube_pod_restart_policy$, Mar 12 18:19:16.025896 master-0 kubenswrapper[7337]: ^kube_pod_init_container_status_terminated$, Mar 12 18:19:16.025896 master-0 kubenswrapper[7337]: ^kube_pod_init_container_status_running$, Mar 12 18:19:16.025896 master-0 kubenswrapper[7337]: ^kube_pod_container_status_terminated$, Mar 12 18:19:16.025896 master-0 kubenswrapper[7337]: ^kube_pod_container_status_running$, Mar 12 18:19:16.025896 master-0 kubenswrapper[7337]: ^kube_pod_completion_time$, Mar 12 18:19:16.025896 master-0 kubenswrapper[7337]: ^kube_pod_status_scheduled$ Mar 12 18:19:16.025896 master-0 kubenswrapper[7337]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{2 -3} {} 2m DecimalSI},memory: {{83886080 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:volume-directive-shadow,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-state-metrics-custom-resource-state-configmap,ReadOnly:true,MountPath:/etc/kube-state-metrics,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f72ng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000460000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-68b88f8cb5-sqdhq_openshift-monitoring(3d77a98a-0176-4924-81d3-8e9890852b38): ErrImagePull: determining manifest MIME type for docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef4b76f6b989bf3e802d22aff457a019d9c232f0ea8d927ac6ce2d854fe48d7: Manifest does not match provided manifest digest sha256:3ef4b76f6b989bf3e802d22aff457a019d9c232f0ea8d927ac6ce2d854fe48d7 Mar 12 18:19:16.025896 master-0 kubenswrapper[7337]: > logger="UnhandledError" Mar 12 18:19:16.230598 master-0 kubenswrapper[7337]: I0312 18:19:16.230562 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" 
event={"ID":"3d77a98a-0176-4924-81d3-8e9890852b38","Type":"ContainerStarted","Data":"d867523375d8bb30b99cbd5f865041d55a6c75ed3061e93edf542fcbd0a2b8a5"} Mar 12 18:19:16.230740 master-0 kubenswrapper[7337]: I0312 18:19:16.230606 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" event={"ID":"3d77a98a-0176-4924-81d3-8e9890852b38","Type":"ContainerStarted","Data":"9198031b5b07e64ace92b4c83419c05c8903b54c6277e82aff6a36c6cdfe7576"} Mar 12 18:19:16.232966 master-0 kubenswrapper[7337]: I0312 18:19:16.232918 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9" event={"ID":"78c13011-7a79-445f-807c-4f5e75643549","Type":"ContainerStarted","Data":"e99009df0846a06c6972b568ac7512dd08e30ed476aa621957e7361610d0fe4c"} Mar 12 18:19:16.233049 master-0 kubenswrapper[7337]: I0312 18:19:16.232970 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9" event={"ID":"78c13011-7a79-445f-807c-4f5e75643549","Type":"ContainerStarted","Data":"d1209b953e0544cb7801141fbd050e44de1d0ed19634b1324bbcece82bc6d8a9"} Mar 12 18:19:16.233049 master-0 kubenswrapper[7337]: I0312 18:19:16.232983 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9" event={"ID":"78c13011-7a79-445f-807c-4f5e75643549","Type":"ContainerStarted","Data":"c8f19a12a173a3644a8c884e60505576df72aa86c56475b9cb55da771d09977f"} Mar 12 18:19:16.233844 master-0 kubenswrapper[7337]: I0312 18:19:16.233785 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-6v462" event={"ID":"adb0dbbf-458d-46f5-b236-d4904e125418","Type":"ContainerStarted","Data":"f58312bc5c3e22538ea35107690dd2a543db5a56cd4a19ebaf6640fbb1518551"} Mar 12 18:19:16.299774 master-0 kubenswrapper[7337]: E0312 18:19:16.299326 7337 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"determining manifest MIME type for docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef4b76f6b989bf3e802d22aff457a019d9c232f0ea8d927ac6ce2d854fe48d7: Manifest does not match provided manifest digest sha256:3ef4b76f6b989bf3e802d22aff457a019d9c232f0ea8d927ac6ce2d854fe48d7\"" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" podUID="3d77a98a-0176-4924-81d3-8e9890852b38" Mar 12 18:19:16.492638 master-0 kubenswrapper[7337]: I0312 18:19:16.492585 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:19:16.492638 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:19:16.492638 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:19:16.492638 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:19:16.492638 master-0 kubenswrapper[7337]: I0312 18:19:16.492633 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:19:17.243369 master-0 kubenswrapper[7337]: I0312 18:19:17.242880 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-6v462" event={"ID":"adb0dbbf-458d-46f5-b236-d4904e125418","Type":"ContainerStarted","Data":"5f50f1609b5be6488fded6489aafefb9e65bb7d7fb19c94cf58285ac54471228"} Mar 12 18:19:17.247396 master-0 kubenswrapper[7337]: I0312 18:19:17.247333 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" 
event={"ID":"3d77a98a-0176-4924-81d3-8e9890852b38","Type":"ContainerStarted","Data":"643749e6502f03caa18a67549fef99baf29972a29d1d8735ef5aabca834ad270"} Mar 12 18:19:17.248918 master-0 kubenswrapper[7337]: E0312 18:19:17.248704 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef4b76f6b989bf3e802d22aff457a019d9c232f0ea8d927ac6ce2d854fe48d7\\\"\"" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" podUID="3d77a98a-0176-4924-81d3-8e9890852b38" Mar 12 18:19:17.489689 master-0 kubenswrapper[7337]: I0312 18:19:17.489636 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:19:17.491597 master-0 kubenswrapper[7337]: I0312 18:19:17.491556 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:19:17.491597 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:19:17.491597 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:19:17.491597 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:19:17.491756 master-0 kubenswrapper[7337]: I0312 18:19:17.491605 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:19:18.255291 master-0 kubenswrapper[7337]: I0312 18:19:18.255182 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9" 
event={"ID":"78c13011-7a79-445f-807c-4f5e75643549","Type":"ContainerStarted","Data":"417c804cdc18545687cbfdf69fb752254f4ba94d8637c2cf9c98bb0bd643376e"} Mar 12 18:19:18.257434 master-0 kubenswrapper[7337]: I0312 18:19:18.257389 7337 generic.go:334] "Generic (PLEG): container finished" podID="adb0dbbf-458d-46f5-b236-d4904e125418" containerID="5f50f1609b5be6488fded6489aafefb9e65bb7d7fb19c94cf58285ac54471228" exitCode=0 Mar 12 18:19:18.257542 master-0 kubenswrapper[7337]: I0312 18:19:18.257454 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-6v462" event={"ID":"adb0dbbf-458d-46f5-b236-d4904e125418","Type":"ContainerDied","Data":"5f50f1609b5be6488fded6489aafefb9e65bb7d7fb19c94cf58285ac54471228"} Mar 12 18:19:18.259021 master-0 kubenswrapper[7337]: E0312 18:19:18.258924 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef4b76f6b989bf3e802d22aff457a019d9c232f0ea8d927ac6ce2d854fe48d7\\\"\"" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" podUID="3d77a98a-0176-4924-81d3-8e9890852b38" Mar 12 18:19:18.284364 master-0 kubenswrapper[7337]: I0312 18:19:18.284236 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9" podStartSLOduration=2.784306912 podStartE2EDuration="4.284205897s" podCreationTimestamp="2026-03-12 18:19:14 +0000 UTC" firstStartedPulling="2026-03-12 18:19:15.804654433 +0000 UTC m=+356.273255380" lastFinishedPulling="2026-03-12 18:19:17.304553418 +0000 UTC m=+357.773154365" observedRunningTime="2026-03-12 18:19:18.278658575 +0000 UTC m=+358.747259592" watchObservedRunningTime="2026-03-12 18:19:18.284205897 +0000 UTC m=+358.752806884" Mar 12 18:19:18.492108 master-0 kubenswrapper[7337]: I0312 18:19:18.492063 7337 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:19:18.492108 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:19:18.492108 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:19:18.492108 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:19:18.492417 master-0 kubenswrapper[7337]: I0312 18:19:18.492124 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:19:19.268551 master-0 kubenswrapper[7337]: I0312 18:19:19.268440 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-6v462" event={"ID":"adb0dbbf-458d-46f5-b236-d4904e125418","Type":"ContainerStarted","Data":"ebfedbbd1f21515081701e5570e5b13b023ac507d1a9b6ca80aa1277135c0860"} Mar 12 18:19:19.269500 master-0 kubenswrapper[7337]: I0312 18:19:19.268568 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-6v462" event={"ID":"adb0dbbf-458d-46f5-b236-d4904e125418","Type":"ContainerStarted","Data":"54c51bb2a44b94e019235a57851048d91dfa365ad3bcfed6023f72aca7b4d3f3"} Mar 12 18:19:19.299488 master-0 kubenswrapper[7337]: I0312 18:19:19.299401 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-6v462" podStartSLOduration=4.182506454 podStartE2EDuration="5.299380765s" podCreationTimestamp="2026-03-12 18:19:14 +0000 UTC" firstStartedPulling="2026-03-12 18:19:15.836624061 +0000 UTC m=+356.305225008" lastFinishedPulling="2026-03-12 18:19:16.953498372 +0000 UTC m=+357.422099319" observedRunningTime="2026-03-12 18:19:19.290766954 +0000 UTC m=+359.759367931" 
watchObservedRunningTime="2026-03-12 18:19:19.299380765 +0000 UTC m=+359.767981722" Mar 12 18:19:19.493109 master-0 kubenswrapper[7337]: I0312 18:19:19.493028 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:19:19.493109 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:19:19.493109 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:19:19.493109 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:19:19.493404 master-0 kubenswrapper[7337]: I0312 18:19:19.493136 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:19:20.071495 master-0 kubenswrapper[7337]: I0312 18:19:20.071426 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-d597fb65b-cc7cs"] Mar 12 18:19:20.072591 master-0 kubenswrapper[7337]: I0312 18:19:20.072543 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.074752 master-0 kubenswrapper[7337]: I0312 18:19:20.074701 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 12 18:19:20.078883 master-0 kubenswrapper[7337]: I0312 18:19:20.078838 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-zmj7s" Mar 12 18:19:20.079066 master-0 kubenswrapper[7337]: I0312 18:19:20.079034 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 12 18:19:20.079205 master-0 kubenswrapper[7337]: I0312 18:19:20.079179 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 12 18:19:20.083502 master-0 kubenswrapper[7337]: I0312 18:19:20.083465 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 12 18:19:20.084090 master-0 kubenswrapper[7337]: I0312 18:19:20.084064 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 12 18:19:20.084420 master-0 kubenswrapper[7337]: I0312 18:19:20.084387 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 12 18:19:20.096935 master-0 kubenswrapper[7337]: I0312 18:19:20.096879 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-d597fb65b-cc7cs"] Mar 12 18:19:20.185025 master-0 kubenswrapper[7337]: I0312 18:19:20.184914 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-telemeter-client-tls\") pod 
\"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.185025 master-0 kubenswrapper[7337]: I0312 18:19:20.184974 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-secret-telemeter-client\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.185025 master-0 kubenswrapper[7337]: I0312 18:19:20.185004 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-metrics-client-ca\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.185245 master-0 kubenswrapper[7337]: I0312 18:19:20.185034 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-serving-certs-ca-bundle\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.185245 master-0 kubenswrapper[7337]: I0312 18:19:20.185061 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " 
pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.185245 master-0 kubenswrapper[7337]: I0312 18:19:20.185083 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-federate-client-tls\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.185245 master-0 kubenswrapper[7337]: I0312 18:19:20.185103 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6drrk\" (UniqueName: \"kubernetes.io/projected/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-kube-api-access-6drrk\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.185245 master-0 kubenswrapper[7337]: I0312 18:19:20.185125 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-telemeter-trusted-ca-bundle\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.242018 master-0 kubenswrapper[7337]: I0312 18:19:20.241972 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-5784dff469-l5d64"] Mar 12 18:19:20.242788 master-0 kubenswrapper[7337]: I0312 18:19:20.242766 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.244451 master-0 kubenswrapper[7337]: I0312 18:19:20.244408 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-mbtwq" Mar 12 18:19:20.244647 master-0 kubenswrapper[7337]: I0312 18:19:20.244474 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 12 18:19:20.244720 master-0 kubenswrapper[7337]: I0312 18:19:20.244698 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 12 18:19:20.244832 master-0 kubenswrapper[7337]: I0312 18:19:20.244807 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7hcn2cdka018u" Mar 12 18:19:20.245381 master-0 kubenswrapper[7337]: I0312 18:19:20.245352 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 12 18:19:20.245805 master-0 kubenswrapper[7337]: I0312 18:19:20.245745 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 12 18:19:20.264286 master-0 kubenswrapper[7337]: I0312 18:19:20.264233 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-5784dff469-l5d64"] Mar 12 18:19:20.286409 master-0 kubenswrapper[7337]: I0312 18:19:20.286369 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-telemeter-trusted-ca-bundle\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.286866 master-0 kubenswrapper[7337]: I0312 18:19:20.286436 7337 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-telemeter-client-tls\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.286866 master-0 kubenswrapper[7337]: I0312 18:19:20.286474 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-secret-telemeter-client\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.286866 master-0 kubenswrapper[7337]: I0312 18:19:20.286500 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-metrics-client-ca\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.286866 master-0 kubenswrapper[7337]: I0312 18:19:20.286543 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-serving-certs-ca-bundle\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.286866 master-0 kubenswrapper[7337]: I0312 18:19:20.286568 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: 
\"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.286866 master-0 kubenswrapper[7337]: I0312 18:19:20.286588 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-federate-client-tls\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.286866 master-0 kubenswrapper[7337]: I0312 18:19:20.286605 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6drrk\" (UniqueName: \"kubernetes.io/projected/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-kube-api-access-6drrk\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.288110 master-0 kubenswrapper[7337]: I0312 18:19:20.288089 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-telemeter-trusted-ca-bundle\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.289849 master-0 kubenswrapper[7337]: I0312 18:19:20.289580 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-metrics-client-ca\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.291192 master-0 kubenswrapper[7337]: I0312 18:19:20.291129 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" 
(UniqueName: \"kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-telemeter-client-tls\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.291900 master-0 kubenswrapper[7337]: I0312 18:19:20.291862 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-secret-telemeter-client\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.292287 master-0 kubenswrapper[7337]: I0312 18:19:20.292261 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.292998 master-0 kubenswrapper[7337]: I0312 18:19:20.292973 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-federate-client-tls\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.293983 master-0 kubenswrapper[7337]: I0312 18:19:20.293953 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-serving-certs-ca-bundle\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" 
Mar 12 18:19:20.302025 master-0 kubenswrapper[7337]: I0312 18:19:20.301980 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6drrk\" (UniqueName: \"kubernetes.io/projected/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-kube-api-access-6drrk\") pod \"telemeter-client-d597fb65b-cc7cs\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") " pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.388386 master-0 kubenswrapper[7337]: I0312 18:19:20.388312 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-metrics-server-audit-profiles\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.388653 master-0 kubenswrapper[7337]: I0312 18:19:20.388486 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.388653 master-0 kubenswrapper[7337]: I0312 18:19:20.388610 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqlfx\" (UniqueName: \"kubernetes.io/projected/9f1f60fa-d79d-4f31-b5bf-2ad333151537-kube-api-access-hqlfx\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.388981 master-0 kubenswrapper[7337]: I0312 18:19:20.388920 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-client-certs\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.389083 master-0 kubenswrapper[7337]: I0312 18:19:20.389051 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-server-tls\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.389179 master-0 kubenswrapper[7337]: I0312 18:19:20.389146 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/9f1f60fa-d79d-4f31-b5bf-2ad333151537-audit-log\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.389228 master-0 kubenswrapper[7337]: I0312 18:19:20.389200 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-client-ca-bundle\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.393278 master-0 kubenswrapper[7337]: I0312 18:19:20.393221 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" Mar 12 18:19:20.490632 master-0 kubenswrapper[7337]: I0312 18:19:20.490580 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.490795 master-0 kubenswrapper[7337]: I0312 18:19:20.490637 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqlfx\" (UniqueName: \"kubernetes.io/projected/9f1f60fa-d79d-4f31-b5bf-2ad333151537-kube-api-access-hqlfx\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.490986 master-0 kubenswrapper[7337]: I0312 18:19:20.490945 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-client-certs\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.491238 master-0 kubenswrapper[7337]: I0312 18:19:20.491210 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-server-tls\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.491387 master-0 kubenswrapper[7337]: I0312 18:19:20.491361 7337 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/9f1f60fa-d79d-4f31-b5bf-2ad333151537-audit-log\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.491452 master-0 kubenswrapper[7337]: I0312 18:19:20.491392 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-client-ca-bundle\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.491628 master-0 kubenswrapper[7337]: I0312 18:19:20.491603 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-metrics-server-audit-profiles\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.492090 master-0 kubenswrapper[7337]: I0312 18:19:20.492049 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:19:20.492090 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:19:20.492090 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:19:20.492090 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:19:20.492246 master-0 kubenswrapper[7337]: I0312 18:19:20.492125 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Mar 12 18:19:20.492974 master-0 kubenswrapper[7337]: I0312 18:19:20.492939 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/9f1f60fa-d79d-4f31-b5bf-2ad333151537-audit-log\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.493392 master-0 kubenswrapper[7337]: I0312 18:19:20.493351 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-metrics-server-audit-profiles\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.495280 master-0 kubenswrapper[7337]: I0312 18:19:20.495248 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.496995 master-0 kubenswrapper[7337]: I0312 18:19:20.496959 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-server-tls\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.497080 master-0 kubenswrapper[7337]: I0312 18:19:20.497050 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-client-ca-bundle\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.498882 master-0 kubenswrapper[7337]: I0312 18:19:20.498843 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-client-certs\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.509500 master-0 kubenswrapper[7337]: I0312 18:19:20.509456 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqlfx\" (UniqueName: \"kubernetes.io/projected/9f1f60fa-d79d-4f31-b5bf-2ad333151537-kube-api-access-hqlfx\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.559176 master-0 kubenswrapper[7337]: I0312 18:19:20.559108 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:19:20.806952 master-0 kubenswrapper[7337]: I0312 18:19:20.806910 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-d597fb65b-cc7cs"] Mar 12 18:19:20.813125 master-0 kubenswrapper[7337]: W0312 18:19:20.813091 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod933b3bc6_b1e4_4db9_b76e_fa7e9dda7ad3.slice/crio-6af457970e39840a52de23503d79330946ae806d893079d022abefc97b2ba485 WatchSource:0}: Error finding container 6af457970e39840a52de23503d79330946ae806d893079d022abefc97b2ba485: Status 404 returned error can't find the container with id 6af457970e39840a52de23503d79330946ae806d893079d022abefc97b2ba485 Mar 12 18:19:20.973206 master-0 kubenswrapper[7337]: I0312 18:19:20.973164 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-5784dff469-l5d64"] Mar 12 18:19:20.981403 master-0 kubenswrapper[7337]: W0312 18:19:20.981336 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f1f60fa_d79d_4f31_b5bf_2ad333151537.slice/crio-fa6c2fe81e494b2ba395dd1830ab3075ce81e641a81c84edd4df4d6a6849559f WatchSource:0}: Error finding container fa6c2fe81e494b2ba395dd1830ab3075ce81e641a81c84edd4df4d6a6849559f: Status 404 returned error can't find the container with id fa6c2fe81e494b2ba395dd1830ab3075ce81e641a81c84edd4df4d6a6849559f Mar 12 18:19:21.282929 master-0 kubenswrapper[7337]: I0312 18:19:21.282801 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5784dff469-l5d64" event={"ID":"9f1f60fa-d79d-4f31-b5bf-2ad333151537","Type":"ContainerStarted","Data":"fa6c2fe81e494b2ba395dd1830ab3075ce81e641a81c84edd4df4d6a6849559f"} Mar 12 18:19:21.284097 master-0 kubenswrapper[7337]: I0312 18:19:21.284065 7337 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" event={"ID":"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3","Type":"ContainerStarted","Data":"6af457970e39840a52de23503d79330946ae806d893079d022abefc97b2ba485"} Mar 12 18:19:21.492492 master-0 kubenswrapper[7337]: I0312 18:19:21.492441 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:19:21.492492 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:19:21.492492 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:19:21.492492 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:19:21.493079 master-0 kubenswrapper[7337]: I0312 18:19:21.493054 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:19:22.491895 master-0 kubenswrapper[7337]: I0312 18:19:22.491741 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:19:22.491895 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:19:22.491895 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:19:22.491895 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:19:22.491895 master-0 kubenswrapper[7337]: I0312 18:19:22.491816 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 12 18:19:23.625622 master-0 kubenswrapper[7337]: I0312 18:19:23.491963 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:19:23.625622 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:19:23.625622 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:19:23.625622 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:19:23.625622 master-0 kubenswrapper[7337]: I0312 18:19:23.492014 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:19:24.351638 master-0 kubenswrapper[7337]: I0312 18:19:24.349808 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5784dff469-l5d64" event={"ID":"9f1f60fa-d79d-4f31-b5bf-2ad333151537","Type":"ContainerStarted","Data":"e90331cb678c8e153c33f95cb18612384f7ac4bbc46e3e49ca8de188de41f79a"} Mar 12 18:19:24.491115 master-0 kubenswrapper[7337]: I0312 18:19:24.491051 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:19:24.491115 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:19:24.491115 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:19:24.491115 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:19:24.491115 master-0 kubenswrapper[7337]: I0312 18:19:24.491100 7337 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:19:25.358886 master-0 kubenswrapper[7337]: I0312 18:19:25.358798 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" event={"ID":"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3","Type":"ContainerStarted","Data":"c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01"} Mar 12 18:19:25.491598 master-0 kubenswrapper[7337]: I0312 18:19:25.491527 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:19:25.491598 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:19:25.491598 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:19:25.491598 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:19:25.491598 master-0 kubenswrapper[7337]: I0312 18:19:25.491598 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:19:26.367339 master-0 kubenswrapper[7337]: I0312 18:19:26.367283 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" event={"ID":"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3","Type":"ContainerStarted","Data":"11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909"} Mar 12 18:19:26.367339 master-0 kubenswrapper[7337]: I0312 18:19:26.367337 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" 
event={"ID":"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3","Type":"ContainerStarted","Data":"dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b"} Mar 12 18:19:26.398128 master-0 kubenswrapper[7337]: I0312 18:19:26.398031 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-5784dff469-l5d64" podStartSLOduration=3.614562809 podStartE2EDuration="6.398011572s" podCreationTimestamp="2026-03-12 18:19:20 +0000 UTC" firstStartedPulling="2026-03-12 18:19:20.984619795 +0000 UTC m=+361.453220742" lastFinishedPulling="2026-03-12 18:19:23.768068558 +0000 UTC m=+364.236669505" observedRunningTime="2026-03-12 18:19:24.372208104 +0000 UTC m=+364.840809051" watchObservedRunningTime="2026-03-12 18:19:26.398011572 +0000 UTC m=+366.866612529" Mar 12 18:19:26.400059 master-0 kubenswrapper[7337]: I0312 18:19:26.399975 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" podStartSLOduration=1.359013569 podStartE2EDuration="6.399960612s" podCreationTimestamp="2026-03-12 18:19:20 +0000 UTC" firstStartedPulling="2026-03-12 18:19:20.815195098 +0000 UTC m=+361.283796045" lastFinishedPulling="2026-03-12 18:19:25.856142141 +0000 UTC m=+366.324743088" observedRunningTime="2026-03-12 18:19:26.393049445 +0000 UTC m=+366.861650432" watchObservedRunningTime="2026-03-12 18:19:26.399960612 +0000 UTC m=+366.868561589" Mar 12 18:19:26.492864 master-0 kubenswrapper[7337]: I0312 18:19:26.492817 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:19:26.492864 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:19:26.492864 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:19:26.492864 master-0 kubenswrapper[7337]: 
healthz check failed Mar 12 18:19:26.493142 master-0 kubenswrapper[7337]: I0312 18:19:26.492872 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:19:27.491292 master-0 kubenswrapper[7337]: I0312 18:19:27.491243 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:19:27.491292 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:19:27.491292 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:19:27.491292 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:19:27.491915 master-0 kubenswrapper[7337]: I0312 18:19:27.491304 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:19:28.493137 master-0 kubenswrapper[7337]: I0312 18:19:28.493058 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:19:28.493137 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:19:28.493137 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:19:28.493137 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:19:28.493877 master-0 kubenswrapper[7337]: I0312 18:19:28.493155 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" 
podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:19:29.493076 master-0 kubenswrapper[7337]: I0312 18:19:29.492963 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:19:29.493076 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:19:29.493076 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:19:29.493076 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:19:29.493804 master-0 kubenswrapper[7337]: I0312 18:19:29.493125 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:19:30.492668 master-0 kubenswrapper[7337]: I0312 18:19:30.492585 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:19:30.492668 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:19:30.492668 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:19:30.492668 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:19:30.492997 master-0 kubenswrapper[7337]: I0312 18:19:30.492679 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:19:31.492991 master-0 kubenswrapper[7337]: I0312 18:19:31.492906 7337 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:19:31.492991 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:19:31.492991 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:19:31.492991 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:19:31.494058 master-0 kubenswrapper[7337]: I0312 18:19:31.493000 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:19:32.492336 master-0 kubenswrapper[7337]: I0312 18:19:32.492240 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:19:32.492336 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:19:32.492336 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:19:32.492336 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:19:32.492726 master-0 kubenswrapper[7337]: I0312 18:19:32.492442 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:19:33.419371 master-0 kubenswrapper[7337]: I0312 18:19:33.419293 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" 
event={"ID":"3d77a98a-0176-4924-81d3-8e9890852b38","Type":"ContainerStarted","Data":"369a79a5c4d7425f647d3888e1c24858a35f5452236203337dda2c6bcfabf64d"} Mar 12 18:19:33.454423 master-0 kubenswrapper[7337]: I0312 18:19:33.454278 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" podStartSLOduration=2.217952903 podStartE2EDuration="19.454251394s" podCreationTimestamp="2026-03-12 18:19:14 +0000 UTC" firstStartedPulling="2026-03-12 18:19:15.659256621 +0000 UTC m=+356.127857568" lastFinishedPulling="2026-03-12 18:19:32.895555112 +0000 UTC m=+373.364156059" observedRunningTime="2026-03-12 18:19:33.447700876 +0000 UTC m=+373.916301843" watchObservedRunningTime="2026-03-12 18:19:33.454251394 +0000 UTC m=+373.922852371" Mar 12 18:19:33.493182 master-0 kubenswrapper[7337]: I0312 18:19:33.493107 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:19:33.493182 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:19:33.493182 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:19:33.493182 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:19:33.493182 master-0 kubenswrapper[7337]: I0312 18:19:33.493178 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:19:34.493237 master-0 kubenswrapper[7337]: I0312 18:19:34.493160 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Mar 12 18:19:34.493237 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:19:34.493237 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:19:34.493237 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:19:34.493792 master-0 kubenswrapper[7337]: I0312 18:19:34.493288 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:19:35.493417 master-0 kubenswrapper[7337]: I0312 18:19:35.493310 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:19:35.493417 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:19:35.493417 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:19:35.493417 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:19:35.493417 master-0 kubenswrapper[7337]: I0312 18:19:35.493404 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:19:36.492707 master-0 kubenswrapper[7337]: I0312 18:19:36.492634 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:19:36.492707 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:19:36.492707 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:19:36.492707 
master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:36.492707 master-0 kubenswrapper[7337]: I0312 18:19:36.492703 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:37.493302 master-0 kubenswrapper[7337]: I0312 18:19:37.491215 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:37.493302 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:37.493302 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:37.493302 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:37.493302 master-0 kubenswrapper[7337]: I0312 18:19:37.491272 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:38.492046 master-0 kubenswrapper[7337]: I0312 18:19:38.491963 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:38.492046 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:38.492046 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:38.492046 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:38.492300 master-0 kubenswrapper[7337]: I0312 18:19:38.492086 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:39.492946 master-0 kubenswrapper[7337]: I0312 18:19:39.492813 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:39.492946 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:39.492946 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:39.492946 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:39.493648 master-0 kubenswrapper[7337]: I0312 18:19:39.492963 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:40.493265 master-0 kubenswrapper[7337]: I0312 18:19:40.493223 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:40.493265 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:40.493265 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:40.493265 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:40.493949 master-0 kubenswrapper[7337]: I0312 18:19:40.493278 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:40.559901 master-0 kubenswrapper[7337]: I0312 18:19:40.559826 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-5784dff469-l5d64"
Mar 12 18:19:40.560057 master-0 kubenswrapper[7337]: I0312 18:19:40.559911 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-5784dff469-l5d64"
Mar 12 18:19:41.492209 master-0 kubenswrapper[7337]: I0312 18:19:41.492091 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:41.492209 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:41.492209 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:41.492209 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:41.492209 master-0 kubenswrapper[7337]: I0312 18:19:41.492191 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:42.493979 master-0 kubenswrapper[7337]: I0312 18:19:42.493866 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:42.493979 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:42.493979 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:42.493979 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:42.494597 master-0 kubenswrapper[7337]: I0312 18:19:42.494063 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:43.489048 master-0 kubenswrapper[7337]: I0312 18:19:43.488996 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4527l_d94dc349-c5cb-4f12-8e48-867030af4981/ingress-operator/1.log"
Mar 12 18:19:43.489652 master-0 kubenswrapper[7337]: I0312 18:19:43.489630 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4527l_d94dc349-c5cb-4f12-8e48-867030af4981/ingress-operator/0.log"
Mar 12 18:19:43.489753 master-0 kubenswrapper[7337]: I0312 18:19:43.489666 7337 generic.go:334] "Generic (PLEG): container finished" podID="d94dc349-c5cb-4f12-8e48-867030af4981" containerID="95c9f43b56f422e41bfe1eff7ae31e7f220fb9fe9c3b7c82ac5ec70e4a6cd3be" exitCode=1
Mar 12 18:19:43.489753 master-0 kubenswrapper[7337]: I0312 18:19:43.489707 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" event={"ID":"d94dc349-c5cb-4f12-8e48-867030af4981","Type":"ContainerDied","Data":"95c9f43b56f422e41bfe1eff7ae31e7f220fb9fe9c3b7c82ac5ec70e4a6cd3be"}
Mar 12 18:19:43.489848 master-0 kubenswrapper[7337]: I0312 18:19:43.489781 7337 scope.go:117] "RemoveContainer" containerID="fd2b6be186aaa869f9c5743426ef2bc5d49bada1c5fa7a307e7f55efa78a7bbf"
Mar 12 18:19:43.490583 master-0 kubenswrapper[7337]: I0312 18:19:43.490503 7337 scope.go:117] "RemoveContainer" containerID="95c9f43b56f422e41bfe1eff7ae31e7f220fb9fe9c3b7c82ac5ec70e4a6cd3be"
Mar 12 18:19:43.490867 master-0 kubenswrapper[7337]: E0312 18:19:43.490830 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4527l_openshift-ingress-operator(d94dc349-c5cb-4f12-8e48-867030af4981)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" podUID="d94dc349-c5cb-4f12-8e48-867030af4981"
Mar 12 18:19:43.491618 master-0 kubenswrapper[7337]: I0312 18:19:43.491123 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:43.491618 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:43.491618 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:43.491618 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:43.491618 master-0 kubenswrapper[7337]: I0312 18:19:43.491171 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:44.494357 master-0 kubenswrapper[7337]: I0312 18:19:44.494217 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:44.494357 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:44.494357 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:44.494357 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:44.494357 master-0 kubenswrapper[7337]: I0312 18:19:44.494322 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:44.501284 master-0 kubenswrapper[7337]: I0312 18:19:44.501212 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4527l_d94dc349-c5cb-4f12-8e48-867030af4981/ingress-operator/1.log"
Mar 12 18:19:45.492774 master-0 kubenswrapper[7337]: I0312 18:19:45.492670 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:45.492774 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:45.492774 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:45.492774 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:45.492774 master-0 kubenswrapper[7337]: I0312 18:19:45.492744 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:46.493025 master-0 kubenswrapper[7337]: I0312 18:19:46.492911 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:46.493025 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:46.493025 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:46.493025 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:46.493025 master-0 kubenswrapper[7337]: I0312 18:19:46.493017 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:47.492969 master-0 kubenswrapper[7337]: I0312 18:19:47.492926 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:47.492969 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:47.492969 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:47.492969 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:47.493756 master-0 kubenswrapper[7337]: I0312 18:19:47.493715 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:48.493296 master-0 kubenswrapper[7337]: I0312 18:19:48.493230 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:48.493296 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:48.493296 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:48.493296 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:48.494404 master-0 kubenswrapper[7337]: I0312 18:19:48.493305 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:49.493373 master-0 kubenswrapper[7337]: I0312 18:19:49.493302 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:49.493373 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:49.493373 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:49.493373 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:49.494324 master-0 kubenswrapper[7337]: I0312 18:19:49.493379 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:50.493269 master-0 kubenswrapper[7337]: I0312 18:19:50.493194 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:50.493269 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:50.493269 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:50.493269 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:50.494030 master-0 kubenswrapper[7337]: I0312 18:19:50.493279 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:51.491957 master-0 kubenswrapper[7337]: I0312 18:19:51.491872 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:51.491957 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:51.491957 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:51.491957 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:51.492272 master-0 kubenswrapper[7337]: I0312 18:19:51.491989 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:52.492991 master-0 kubenswrapper[7337]: I0312 18:19:52.492885 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:52.492991 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:52.492991 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:52.492991 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:52.492991 master-0 kubenswrapper[7337]: I0312 18:19:52.492958 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:53.491833 master-0 kubenswrapper[7337]: I0312 18:19:53.491762 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:53.491833 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:53.491833 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:53.491833 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:53.492148 master-0 kubenswrapper[7337]: I0312 18:19:53.491844 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:54.492964 master-0 kubenswrapper[7337]: I0312 18:19:54.492900 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:54.492964 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:54.492964 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:54.492964 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:54.493756 master-0 kubenswrapper[7337]: I0312 18:19:54.492972 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:55.491992 master-0 kubenswrapper[7337]: I0312 18:19:55.491832 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:55.491992 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:55.491992 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:55.491992 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:55.491992 master-0 kubenswrapper[7337]: I0312 18:19:55.491922 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:56.491772 master-0 kubenswrapper[7337]: I0312 18:19:56.491721 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:56.491772 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:56.491772 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:56.491772 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:56.492629 master-0 kubenswrapper[7337]: I0312 18:19:56.491786 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:57.491825 master-0 kubenswrapper[7337]: I0312 18:19:57.491778 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:57.491825 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:57.491825 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:57.491825 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:57.492725 master-0 kubenswrapper[7337]: I0312 18:19:57.491834 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:57.723551 master-0 kubenswrapper[7337]: I0312 18:19:57.723459 7337 scope.go:117] "RemoveContainer" containerID="95c9f43b56f422e41bfe1eff7ae31e7f220fb9fe9c3b7c82ac5ec70e4a6cd3be"
Mar 12 18:19:58.492503 master-0 kubenswrapper[7337]: I0312 18:19:58.492403 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:58.492503 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:58.492503 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:58.492503 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:58.493312 master-0 kubenswrapper[7337]: I0312 18:19:58.492553 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:19:58.622631 master-0 kubenswrapper[7337]: I0312 18:19:58.622571 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4527l_d94dc349-c5cb-4f12-8e48-867030af4981/ingress-operator/1.log"
Mar 12 18:19:58.623198 master-0 kubenswrapper[7337]: I0312 18:19:58.623162 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" event={"ID":"d94dc349-c5cb-4f12-8e48-867030af4981","Type":"ContainerStarted","Data":"a0871263a834ec8eda5a88b5b72f6ff58f93c73ab7cc887038f0189f80ffd4a8"}
Mar 12 18:19:59.492466 master-0 kubenswrapper[7337]: I0312 18:19:59.492390 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:19:59.492466 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:19:59.492466 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:19:59.492466 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:19:59.492466 master-0 kubenswrapper[7337]: I0312 18:19:59.492443 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:00.494187 master-0 kubenswrapper[7337]: I0312 18:20:00.494070 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:00.494187 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:00.494187 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:00.494187 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:00.495438 master-0 kubenswrapper[7337]: I0312 18:20:00.494177 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:00.567571 master-0 kubenswrapper[7337]: I0312 18:20:00.567470 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-5784dff469-l5d64"
Mar 12 18:20:00.574230 master-0 kubenswrapper[7337]: I0312 18:20:00.574175 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-5784dff469-l5d64"
Mar 12 18:20:01.492582 master-0 kubenswrapper[7337]: I0312 18:20:01.492483 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:01.492582 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:01.492582 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:01.492582 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:01.493055 master-0 kubenswrapper[7337]: I0312 18:20:01.492594 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:02.492032 master-0 kubenswrapper[7337]: I0312 18:20:02.491971 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:02.492032 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:02.492032 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:02.492032 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:02.492848 master-0 kubenswrapper[7337]: I0312 18:20:02.492688 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:03.492591 master-0 kubenswrapper[7337]: I0312 18:20:03.492434 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:03.492591 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:03.492591 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:03.492591 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:03.492591 master-0 kubenswrapper[7337]: I0312 18:20:03.492551 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:04.492383 master-0 kubenswrapper[7337]: I0312 18:20:04.492323 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:04.492383 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:04.492383 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:04.492383 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:04.493100 master-0 kubenswrapper[7337]: I0312 18:20:04.492405 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:05.492427 master-0 kubenswrapper[7337]: I0312 18:20:05.492361 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:05.492427 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:05.492427 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:05.492427 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:05.492427 master-0 kubenswrapper[7337]: I0312 18:20:05.492423 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:06.492399 master-0 kubenswrapper[7337]: I0312 18:20:06.492358 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:06.492399 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:06.492399 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:06.492399 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:06.493003 master-0 kubenswrapper[7337]: I0312 18:20:06.492419 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:07.492299 master-0 kubenswrapper[7337]: I0312 18:20:07.492215 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:07.492299 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:07.492299 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:07.492299 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:07.492860 master-0 kubenswrapper[7337]: I0312 18:20:07.492750 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:08.492831 master-0 kubenswrapper[7337]: I0312 18:20:08.492771 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:08.492831 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:08.492831 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:08.492831 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:08.493500 master-0 kubenswrapper[7337]: I0312 18:20:08.492842 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:09.492507 master-0 kubenswrapper[7337]: I0312 18:20:09.492419 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:09.492507 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:09.492507 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:09.492507 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:09.492507 master-0 kubenswrapper[7337]: I0312 18:20:09.492505 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:10.492279 master-0 kubenswrapper[7337]: I0312 18:20:10.492059 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:10.492279 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:10.492279 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:10.492279 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:10.492279 master-0 kubenswrapper[7337]: I0312 18:20:10.492131 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:11.493425 master-0 kubenswrapper[7337]: I0312 18:20:11.493352 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:11.493425 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:11.493425 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:11.493425 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:11.494108 master-0 kubenswrapper[7337]: I0312 18:20:11.493448 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:12.492117 master-0 kubenswrapper[7337]: I0312 18:20:12.492047 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:12.492117 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:12.492117 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:12.492117 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:12.492735 master-0 kubenswrapper[7337]: I0312 18:20:12.492687 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:13.491546 master-0 kubenswrapper[7337]: I0312 18:20:13.491454 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:13.491546 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:13.491546 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:13.491546 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:13.492406 master-0 kubenswrapper[7337]: I0312 18:20:13.491548 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:14.491812 master-0 kubenswrapper[7337]: I0312 18:20:14.491717 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:14.491812 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:14.491812 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:14.491812 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:14.492344 master-0 kubenswrapper[7337]: I0312 18:20:14.491813 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:15.492850 master-0 kubenswrapper[7337]: I0312 18:20:15.492737 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:15.492850 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:15.492850 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:15.492850 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:15.493778 master-0 kubenswrapper[7337]: I0312 18:20:15.492874 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:16.492849 master-0 kubenswrapper[7337]: I0312 18:20:16.492778 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:16.492849 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:16.492849 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:16.492849 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:16.493562 master-0 kubenswrapper[7337]: I0312 18:20:16.492892 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:17.491671 master-0 kubenswrapper[7337]: I0312 18:20:17.491616 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:17.491671 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:17.491671 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:17.491671 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:17.491671 master-0 kubenswrapper[7337]: I0312 18:20:17.491680 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:18.493116 master-0 kubenswrapper[7337]: I0312 18:20:18.493023 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:18.493116 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:18.493116 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:18.493116 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:18.494067 master-0 kubenswrapper[7337]: I0312 18:20:18.493124 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:19.492269 master-0 kubenswrapper[7337]: I0312 18:20:19.492206 7337 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:19.492269 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:19.492269 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:19.492269 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:19.492637 master-0 kubenswrapper[7337]: I0312 18:20:19.492271 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:20.492218 master-0 kubenswrapper[7337]: I0312 18:20:20.492056 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:20.492218 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:20.492218 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:20.492218 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:20.492218 master-0 kubenswrapper[7337]: I0312 18:20:20.492133 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:21.492660 master-0 kubenswrapper[7337]: I0312 18:20:21.492603 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 
18:20:21.492660 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:21.492660 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:21.492660 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:21.493898 master-0 kubenswrapper[7337]: I0312 18:20:21.493626 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:22.492485 master-0 kubenswrapper[7337]: I0312 18:20:22.492417 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:22.492485 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:22.492485 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:22.492485 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:22.492485 master-0 kubenswrapper[7337]: I0312 18:20:22.492481 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:23.491574 master-0 kubenswrapper[7337]: I0312 18:20:23.491529 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:23.491574 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:23.491574 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:23.491574 master-0 kubenswrapper[7337]: healthz 
check failed Mar 12 18:20:23.491869 master-0 kubenswrapper[7337]: I0312 18:20:23.491610 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:24.491870 master-0 kubenswrapper[7337]: I0312 18:20:24.491830 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:24.491870 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:24.491870 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:24.491870 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:24.492478 master-0 kubenswrapper[7337]: I0312 18:20:24.492453 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:25.491408 master-0 kubenswrapper[7337]: I0312 18:20:25.491357 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:25.491408 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:25.491408 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:25.491408 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:25.491741 master-0 kubenswrapper[7337]: I0312 18:20:25.491439 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" 
podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:26.497192 master-0 kubenswrapper[7337]: I0312 18:20:26.497145 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:26.497192 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:26.497192 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:26.497192 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:26.497747 master-0 kubenswrapper[7337]: I0312 18:20:26.497198 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:27.492485 master-0 kubenswrapper[7337]: I0312 18:20:27.492437 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:27.492485 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:27.492485 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:27.492485 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:27.492485 master-0 kubenswrapper[7337]: I0312 18:20:27.492486 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:28.491785 master-0 kubenswrapper[7337]: I0312 18:20:28.491696 7337 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:28.491785 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:28.491785 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:28.491785 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:28.492731 master-0 kubenswrapper[7337]: I0312 18:20:28.491815 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:29.491978 master-0 kubenswrapper[7337]: I0312 18:20:29.491928 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:29.491978 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:29.491978 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:29.491978 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:29.491978 master-0 kubenswrapper[7337]: I0312 18:20:29.491981 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:30.491956 master-0 kubenswrapper[7337]: I0312 18:20:30.491824 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:30.491956 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:30.491956 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:30.491956 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:30.491956 master-0 kubenswrapper[7337]: I0312 18:20:30.491910 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:31.493471 master-0 kubenswrapper[7337]: I0312 18:20:31.493327 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:31.493471 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:31.493471 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:31.493471 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:31.494426 master-0 kubenswrapper[7337]: I0312 18:20:31.493494 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:32.494166 master-0 kubenswrapper[7337]: I0312 18:20:32.494065 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:32.494166 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:32.494166 master-0 kubenswrapper[7337]: [+]process-running ok 
Mar 12 18:20:32.494166 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:32.495421 master-0 kubenswrapper[7337]: I0312 18:20:32.494174 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:33.492097 master-0 kubenswrapper[7337]: I0312 18:20:33.492036 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:33.492097 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:33.492097 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:33.492097 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:33.492623 master-0 kubenswrapper[7337]: I0312 18:20:33.492129 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:33.859896 master-0 kubenswrapper[7337]: I0312 18:20:33.859774 7337 patch_prober.go:28] interesting pod/machine-config-daemon-mfv5x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 18:20:33.859896 master-0 kubenswrapper[7337]: I0312 18:20:33.859874 7337 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" podUID="492e9833-4513-4f2f-b865-d05a8973fadc" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 18:20:34.493692 master-0 kubenswrapper[7337]: I0312 18:20:34.493598 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:34.493692 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:34.493692 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:34.493692 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:34.494542 master-0 kubenswrapper[7337]: I0312 18:20:34.494474 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:35.491672 master-0 kubenswrapper[7337]: I0312 18:20:35.491541 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:35.491672 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:35.491672 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:35.491672 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:35.491672 master-0 kubenswrapper[7337]: I0312 18:20:35.491626 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:36.492256 master-0 kubenswrapper[7337]: I0312 18:20:36.492203 7337 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:36.492256 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:36.492256 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:36.492256 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:36.493501 master-0 kubenswrapper[7337]: I0312 18:20:36.493438 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:37.491680 master-0 kubenswrapper[7337]: I0312 18:20:37.491639 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:37.491680 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:37.491680 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:37.491680 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:37.492062 master-0 kubenswrapper[7337]: I0312 18:20:37.492034 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:38.492563 master-0 kubenswrapper[7337]: I0312 18:20:38.492433 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 
18:20:38.492563 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:38.492563 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:38.492563 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:38.492563 master-0 kubenswrapper[7337]: I0312 18:20:38.492551 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:39.491504 master-0 kubenswrapper[7337]: I0312 18:20:39.491444 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:39.491504 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:39.491504 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:39.491504 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:39.491867 master-0 kubenswrapper[7337]: I0312 18:20:39.491542 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:40.492756 master-0 kubenswrapper[7337]: I0312 18:20:40.492358 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:40.492756 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:40.492756 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:40.492756 master-0 kubenswrapper[7337]: healthz 
check failed Mar 12 18:20:40.492756 master-0 kubenswrapper[7337]: I0312 18:20:40.492735 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:41.492354 master-0 kubenswrapper[7337]: I0312 18:20:41.492312 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:41.492354 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:41.492354 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:41.492354 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:41.492794 master-0 kubenswrapper[7337]: I0312 18:20:41.492761 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:42.493297 master-0 kubenswrapper[7337]: I0312 18:20:42.493255 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:42.493297 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:42.493297 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:42.493297 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:42.494033 master-0 kubenswrapper[7337]: I0312 18:20:42.494001 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" 
podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:43.492559 master-0 kubenswrapper[7337]: I0312 18:20:43.492307 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:43.492559 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:43.492559 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:43.492559 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:43.492559 master-0 kubenswrapper[7337]: I0312 18:20:43.492373 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:44.492735 master-0 kubenswrapper[7337]: I0312 18:20:44.492671 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:44.492735 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:44.492735 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:44.492735 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:44.493555 master-0 kubenswrapper[7337]: I0312 18:20:44.492745 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:45.492399 master-0 kubenswrapper[7337]: I0312 18:20:45.492319 7337 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:45.492399 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:45.492399 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:45.492399 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:45.493756 master-0 kubenswrapper[7337]: I0312 18:20:45.493390 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:46.493059 master-0 kubenswrapper[7337]: I0312 18:20:46.492982 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:20:46.493059 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:20:46.493059 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:20:46.493059 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:20:46.493681 master-0 kubenswrapper[7337]: I0312 18:20:46.493075 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:20:47.492857 master-0 kubenswrapper[7337]: I0312 18:20:47.492745 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:47.492857 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:47.492857 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:47.492857 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:47.494328 master-0 kubenswrapper[7337]: I0312 18:20:47.492885 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:48.491776 master-0 kubenswrapper[7337]: I0312 18:20:48.491703 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:48.491776 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:48.491776 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:48.491776 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:48.491776 master-0 kubenswrapper[7337]: I0312 18:20:48.491764 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:49.492472 master-0 kubenswrapper[7337]: I0312 18:20:49.492406 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:49.492472 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:49.492472 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:49.492472 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:49.493028 master-0 kubenswrapper[7337]: I0312 18:20:49.492483 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:50.492367 master-0 kubenswrapper[7337]: I0312 18:20:50.492230 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:50.492367 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:50.492367 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:50.492367 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:50.492367 master-0 kubenswrapper[7337]: I0312 18:20:50.492310 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:51.492873 master-0 kubenswrapper[7337]: I0312 18:20:51.492807 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:51.492873 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:51.492873 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:51.492873 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:51.492873 master-0 kubenswrapper[7337]: I0312 18:20:51.492875 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:52.492841 master-0 kubenswrapper[7337]: I0312 18:20:52.492778 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:52.492841 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:52.492841 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:52.492841 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:52.493386 master-0 kubenswrapper[7337]: I0312 18:20:52.492851 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:53.493123 master-0 kubenswrapper[7337]: I0312 18:20:53.493041 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:53.493123 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:53.493123 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:53.493123 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:53.494144 master-0 kubenswrapper[7337]: I0312 18:20:53.493128 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:54.492588 master-0 kubenswrapper[7337]: I0312 18:20:54.492492 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:54.492588 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:54.492588 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:54.492588 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:54.492938 master-0 kubenswrapper[7337]: I0312 18:20:54.492633 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:55.492558 master-0 kubenswrapper[7337]: I0312 18:20:55.492479 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:55.492558 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:55.492558 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:55.492558 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:55.493290 master-0 kubenswrapper[7337]: I0312 18:20:55.492582 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:56.493014 master-0 kubenswrapper[7337]: I0312 18:20:56.492949 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:56.493014 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:56.493014 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:56.493014 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:56.493014 master-0 kubenswrapper[7337]: I0312 18:20:56.493013 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:57.492771 master-0 kubenswrapper[7337]: I0312 18:20:57.492681 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:57.492771 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:57.492771 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:57.492771 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:57.494230 master-0 kubenswrapper[7337]: I0312 18:20:57.492776 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:58.491339 master-0 kubenswrapper[7337]: I0312 18:20:58.491273 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:58.491339 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:58.491339 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:58.491339 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:58.491704 master-0 kubenswrapper[7337]: I0312 18:20:58.491347 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:20:59.494092 master-0 kubenswrapper[7337]: I0312 18:20:59.494000 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:20:59.494092 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:20:59.494092 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:20:59.494092 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:20:59.495120 master-0 kubenswrapper[7337]: I0312 18:20:59.494090 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:21:00.492483 master-0 kubenswrapper[7337]: I0312 18:21:00.492325 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:21:00.492483 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:21:00.492483 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:21:00.492483 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:21:00.492483 master-0 kubenswrapper[7337]: I0312 18:21:00.492393 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:21:01.492226 master-0 kubenswrapper[7337]: I0312 18:21:01.492146 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:21:01.492226 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:21:01.492226 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:21:01.492226 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:21:01.492226 master-0 kubenswrapper[7337]: I0312 18:21:01.492228 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:21:02.492843 master-0 kubenswrapper[7337]: I0312 18:21:02.492762 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:21:02.492843 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:21:02.492843 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:21:02.492843 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:21:02.493802 master-0 kubenswrapper[7337]: I0312 18:21:02.492854 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:21:03.492622 master-0 kubenswrapper[7337]: I0312 18:21:03.492493 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:21:03.492622 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:21:03.492622 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:21:03.492622 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:21:03.492622 master-0 kubenswrapper[7337]: I0312 18:21:03.492579 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:21:03.859541 master-0 kubenswrapper[7337]: I0312 18:21:03.859449 7337 patch_prober.go:28] interesting pod/machine-config-daemon-mfv5x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 18:21:03.859541 master-0 kubenswrapper[7337]: I0312 18:21:03.859533 7337 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" podUID="492e9833-4513-4f2f-b865-d05a8973fadc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 18:21:04.492632 master-0 kubenswrapper[7337]: I0312 18:21:04.492562 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:21:04.492632 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:21:04.492632 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:21:04.492632 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:21:04.493173 master-0 kubenswrapper[7337]: I0312 18:21:04.493126 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:21:05.493234 master-0 kubenswrapper[7337]: I0312 18:21:05.493105 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:21:05.493234 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:21:05.493234 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:21:05.493234 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:21:05.493234 master-0 kubenswrapper[7337]: I0312 18:21:05.493218 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:21:06.492064 master-0 kubenswrapper[7337]: I0312 18:21:06.491974 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:21:06.492064 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:21:06.492064 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:21:06.492064 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:21:06.492411 master-0 kubenswrapper[7337]: I0312 18:21:06.492097 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:21:07.492620 master-0 kubenswrapper[7337]: I0312 18:21:07.492574 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:21:07.492620 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:21:07.492620 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:21:07.492620 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:21:07.493395 master-0 kubenswrapper[7337]: I0312 18:21:07.493362 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:21:08.504036 master-0 kubenswrapper[7337]: I0312 18:21:08.503928 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:21:08.504036 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:21:08.504036 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:21:08.504036 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:21:08.504036 master-0 kubenswrapper[7337]: I0312 18:21:08.504070 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:21:09.492627 master-0 kubenswrapper[7337]: I0312 18:21:09.492510 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:21:09.492627 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:21:09.492627 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:21:09.492627 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:21:09.493082 master-0 kubenswrapper[7337]: I0312 18:21:09.492643 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:21:09.493082 master-0 kubenswrapper[7337]: I0312 18:21:09.492739 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf"
Mar 12 18:21:09.493745 master-0 kubenswrapper[7337]: I0312 18:21:09.493691 7337 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"41887ee13b262a1bb752082a6699313d133920e514fab8c14a0b03b0f36c3f44"} pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" containerMessage="Container router failed startup probe, will be restarted"
Mar 12 18:21:09.493823 master-0 kubenswrapper[7337]: I0312 18:21:09.493771 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" containerID="cri-o://41887ee13b262a1bb752082a6699313d133920e514fab8c14a0b03b0f36c3f44" gracePeriod=3600
Mar 12 18:21:22.984998 master-0 kubenswrapper[7337]: E0312 18:21:22.984895 7337 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/53c3e9b4b8cca5096418831f916cb1cb0c57b6fb8aae97e2652c5c5a1bbcab4a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/53c3e9b4b8cca5096418831f916cb1cb0c57b6fb8aae97e2652c5c5a1bbcab4a/diff: no such file or directory, extraDiskErr:
Mar 12 18:21:56.469685 master-0 kubenswrapper[7337]: I0312 18:21:56.469568 7337 generic.go:334] "Generic (PLEG): container finished" podID="518ffff8-8119-41be-8b76-ce49d5751254" containerID="41887ee13b262a1bb752082a6699313d133920e514fab8c14a0b03b0f36c3f44" exitCode=0
Mar 12 18:21:56.469685 master-0 kubenswrapper[7337]: I0312 18:21:56.469637 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" event={"ID":"518ffff8-8119-41be-8b76-ce49d5751254","Type":"ContainerDied","Data":"41887ee13b262a1bb752082a6699313d133920e514fab8c14a0b03b0f36c3f44"}
Mar 12 18:21:56.470862 master-0 kubenswrapper[7337]: I0312 18:21:56.469711 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" event={"ID":"518ffff8-8119-41be-8b76-ce49d5751254","Type":"ContainerStarted","Data":"2c667ded264eac2ef91dbced4e1d0c451c7fcbbd73ca41017073728ca29d6478"}
Mar 12 18:21:56.490026 master-0 kubenswrapper[7337]: I0312 18:21:56.489912 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf"
Mar 12 18:21:56.494224 master-0 kubenswrapper[7337]: I0312 18:21:56.494121 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:21:56.494224 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:21:56.494224 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:21:56.494224 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:21:56.494224 master-0 kubenswrapper[7337]: I0312 18:21:56.494202 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:21:57.491603 master-0 kubenswrapper[7337]: I0312 18:21:57.489728 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf"
Mar 12 18:21:57.491603 master-0 kubenswrapper[7337]: I0312 18:21:57.491498 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:21:57.491603 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:21:57.491603 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:21:57.491603 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:21:57.492240 master-0 kubenswrapper[7337]: I0312 18:21:57.491651 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:21:58.483380 master-0 kubenswrapper[7337]: I0312 18:21:58.483310 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4527l_d94dc349-c5cb-4f12-8e48-867030af4981/ingress-operator/2.log"
Mar 12 18:21:58.483861 master-0 kubenswrapper[7337]: I0312 18:21:58.483822 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4527l_d94dc349-c5cb-4f12-8e48-867030af4981/ingress-operator/1.log"
Mar 12 18:21:58.484182 master-0 kubenswrapper[7337]: I0312 18:21:58.484143 7337 generic.go:334] "Generic (PLEG): container finished" podID="d94dc349-c5cb-4f12-8e48-867030af4981" containerID="a0871263a834ec8eda5a88b5b72f6ff58f93c73ab7cc887038f0189f80ffd4a8" exitCode=1
Mar 12 18:21:58.484786 master-0 kubenswrapper[7337]: I0312 18:21:58.484750 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" event={"ID":"d94dc349-c5cb-4f12-8e48-867030af4981","Type":"ContainerDied","Data":"a0871263a834ec8eda5a88b5b72f6ff58f93c73ab7cc887038f0189f80ffd4a8"}
Mar 12 18:21:58.484786 master-0 kubenswrapper[7337]: I0312 18:21:58.484788 7337 scope.go:117] "RemoveContainer" containerID="95c9f43b56f422e41bfe1eff7ae31e7f220fb9fe9c3b7c82ac5ec70e4a6cd3be"
Mar 12 18:21:58.485086 master-0 kubenswrapper[7337]: I0312 18:21:58.485054 7337 scope.go:117] "RemoveContainer" containerID="a0871263a834ec8eda5a88b5b72f6ff58f93c73ab7cc887038f0189f80ffd4a8"
Mar 12 18:21:58.485259 master-0 kubenswrapper[7337]: E0312 18:21:58.485223 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4527l_openshift-ingress-operator(d94dc349-c5cb-4f12-8e48-867030af4981)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" podUID="d94dc349-c5cb-4f12-8e48-867030af4981"
Mar 12 18:21:58.494226 master-0 kubenswrapper[7337]: I0312 18:21:58.494139 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:21:58.494226 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:21:58.494226 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:21:58.494226 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:21:58.494226 master-0 kubenswrapper[7337]: I0312 18:21:58.494213 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:21:59.512846 master-0 kubenswrapper[7337]: I0312 18:21:59.492445 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:21:59.512846 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:21:59.512846 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:21:59.512846 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:21:59.512846 master-0 kubenswrapper[7337]: I0312 18:21:59.492564 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:21:59.516779 master-0 kubenswrapper[7337]: I0312 18:21:59.516720 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4527l_d94dc349-c5cb-4f12-8e48-867030af4981/ingress-operator/2.log"
Mar 12 18:22:00.492279 master-0 kubenswrapper[7337]: I0312 18:22:00.492150 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:22:00.492279 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:22:00.492279 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:22:00.492279 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:22:00.492279 master-0 kubenswrapper[7337]: I0312 18:22:00.492215 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:22:01.491651 master-0 kubenswrapper[7337]: I0312 18:22:01.491608 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:22:01.491651 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:22:01.491651 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:22:01.491651 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:22:01.492293 master-0 kubenswrapper[7337]: I0312 18:22:01.491851 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:22:02.492950 master-0 kubenswrapper[7337]: I0312 18:22:02.492867 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:22:02.492950 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:22:02.492950 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:22:02.492950 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:22:02.494071 master-0 kubenswrapper[7337]: I0312 18:22:02.492960 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:22:03.494186 master-0 kubenswrapper[7337]: I0312 18:22:03.494072 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:22:03.494186 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:22:03.494186 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:22:03.494186 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:22:03.494186 master-0 kubenswrapper[7337]: I0312 18:22:03.494174 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:22:04.492292 master-0 kubenswrapper[7337]: I0312 18:22:04.492216 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:22:04.492292 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:22:04.492292 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:22:04.492292 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:22:04.492673 master-0 kubenswrapper[7337]: I0312 18:22:04.492302 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:22:05.493726 master-0 kubenswrapper[7337]: I0312 18:22:05.493581 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:22:05.493726 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:22:05.493726 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:22:05.493726 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:22:05.493726 master-0 kubenswrapper[7337]: I0312 18:22:05.493720 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:22:06.493130 master-0 kubenswrapper[7337]: I0312 18:22:06.493021 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:22:06.493130 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:22:06.493130 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:22:06.493130 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:22:06.493505 master-0 kubenswrapper[7337]: I0312 18:22:06.493162 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:22:07.492169 master-0 kubenswrapper[7337]: I0312 18:22:07.492079 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:22:07.492169 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:22:07.492169 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:22:07.492169 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:22:07.492865 master-0 kubenswrapper[7337]: I0312 18:22:07.492206 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:22:08.493254 master-0 kubenswrapper[7337]: I0312 18:22:08.493171 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:22:08.493254 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:22:08.493254 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:22:08.493254 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:22:08.494226 master-0 kubenswrapper[7337]: I0312 18:22:08.493267 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:22:09.492728 master-0 kubenswrapper[7337]: I0312 18:22:09.492650 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:22:09.492728 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:22:09.492728 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:22:09.492728 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:22:09.492728 master-0 kubenswrapper[7337]: I0312 18:22:09.492715 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:22:09.725929 master-0 kubenswrapper[7337]: I0312 18:22:09.725891 7337 scope.go:117] "RemoveContainer" containerID="a0871263a834ec8eda5a88b5b72f6ff58f93c73ab7cc887038f0189f80ffd4a8"
Mar 12 18:22:09.727046 master-0 kubenswrapper[7337]: E0312 18:22:09.726999 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4527l_openshift-ingress-operator(d94dc349-c5cb-4f12-8e48-867030af4981)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" podUID="d94dc349-c5cb-4f12-8e48-867030af4981"
Mar 12 18:22:10.492154 master-0 kubenswrapper[7337]: I0312 18:22:10.492014 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:22:10.492154 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:22:10.492154 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:22:10.492154 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:22:10.492154 master-0 kubenswrapper[7337]: I0312 18:22:10.492102 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:22:11.492179 master-0 kubenswrapper[7337]: I0312 18:22:11.492106 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:22:11.492179 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:22:11.492179 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:22:11.492179 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:22:11.492755 master-0 kubenswrapper[7337]: I0312 18:22:11.492195 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:22:12.493780 master-0 kubenswrapper[7337]: I0312 18:22:12.493692 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:22:12.493780 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:22:12.493780 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:22:12.493780 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:22:12.494731 master-0 kubenswrapper[7337]: I0312 18:22:12.493800 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode:
500" Mar 12 18:22:13.492447 master-0 kubenswrapper[7337]: I0312 18:22:13.492380 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:13.492447 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:13.492447 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:13.492447 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:13.492447 master-0 kubenswrapper[7337]: I0312 18:22:13.492436 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:14.492335 master-0 kubenswrapper[7337]: I0312 18:22:14.492265 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:14.492335 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:14.492335 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:14.492335 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:14.493080 master-0 kubenswrapper[7337]: I0312 18:22:14.492359 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:15.492991 master-0 kubenswrapper[7337]: I0312 18:22:15.492914 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:15.492991 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:15.492991 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:15.492991 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:15.492991 master-0 kubenswrapper[7337]: I0312 18:22:15.492992 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:16.492586 master-0 kubenswrapper[7337]: I0312 18:22:16.492541 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:16.492586 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:16.492586 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:16.492586 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:16.493006 master-0 kubenswrapper[7337]: I0312 18:22:16.492972 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:17.491027 master-0 kubenswrapper[7337]: I0312 18:22:17.490964 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:17.491027 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 
18:22:17.491027 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:17.491027 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:17.491027 master-0 kubenswrapper[7337]: I0312 18:22:17.491022 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:18.493449 master-0 kubenswrapper[7337]: I0312 18:22:18.493368 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:18.493449 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:18.493449 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:18.493449 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:18.494539 master-0 kubenswrapper[7337]: I0312 18:22:18.493466 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:19.493909 master-0 kubenswrapper[7337]: I0312 18:22:19.493802 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:19.493909 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:19.493909 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:19.493909 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:19.493909 master-0 kubenswrapper[7337]: I0312 18:22:19.493905 
7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:20.493100 master-0 kubenswrapper[7337]: I0312 18:22:20.492614 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:20.493100 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:20.493100 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:20.493100 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:20.493405 master-0 kubenswrapper[7337]: I0312 18:22:20.493093 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:21.491784 master-0 kubenswrapper[7337]: I0312 18:22:21.491747 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:21.491784 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:21.491784 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:21.491784 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:21.492403 master-0 kubenswrapper[7337]: I0312 18:22:21.492375 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 12 18:22:22.492751 master-0 kubenswrapper[7337]: I0312 18:22:22.492674 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:22.492751 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:22.492751 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:22.492751 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:22.493828 master-0 kubenswrapper[7337]: I0312 18:22:22.492750 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:23.493162 master-0 kubenswrapper[7337]: I0312 18:22:23.493085 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:23.493162 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:23.493162 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:23.493162 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:23.493162 master-0 kubenswrapper[7337]: I0312 18:22:23.493163 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:24.491666 master-0 kubenswrapper[7337]: I0312 18:22:24.491620 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:24.491666 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:24.491666 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:24.491666 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:24.492010 master-0 kubenswrapper[7337]: I0312 18:22:24.491677 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:24.722182 master-0 kubenswrapper[7337]: I0312 18:22:24.722125 7337 scope.go:117] "RemoveContainer" containerID="a0871263a834ec8eda5a88b5b72f6ff58f93c73ab7cc887038f0189f80ffd4a8" Mar 12 18:22:25.340534 master-0 kubenswrapper[7337]: I0312 18:22:25.340419 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 12 18:22:25.341599 master-0 kubenswrapper[7337]: I0312 18:22:25.341565 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 18:22:25.343769 master-0 kubenswrapper[7337]: I0312 18:22:25.343736 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-h72pz" Mar 12 18:22:25.347402 master-0 kubenswrapper[7337]: I0312 18:22:25.347373 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 12 18:22:25.371368 master-0 kubenswrapper[7337]: I0312 18:22:25.371298 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 12 18:22:25.389470 master-0 kubenswrapper[7337]: I0312 18:22:25.389415 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05a33cf5-173b-463b-9f72-84e888bef88c-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"05a33cf5-173b-463b-9f72-84e888bef88c\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 18:22:25.389670 master-0 kubenswrapper[7337]: I0312 18:22:25.389481 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05a33cf5-173b-463b-9f72-84e888bef88c-var-lock\") pod \"installer-2-master-0\" (UID: \"05a33cf5-173b-463b-9f72-84e888bef88c\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 18:22:25.389670 master-0 kubenswrapper[7337]: I0312 18:22:25.389586 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05a33cf5-173b-463b-9f72-84e888bef88c-kube-api-access\") pod \"installer-2-master-0\" (UID: \"05a33cf5-173b-463b-9f72-84e888bef88c\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 18:22:25.491136 master-0 kubenswrapper[7337]: I0312 18:22:25.491079 7337 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05a33cf5-173b-463b-9f72-84e888bef88c-kube-api-access\") pod \"installer-2-master-0\" (UID: \"05a33cf5-173b-463b-9f72-84e888bef88c\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 18:22:25.491325 master-0 kubenswrapper[7337]: I0312 18:22:25.491206 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05a33cf5-173b-463b-9f72-84e888bef88c-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"05a33cf5-173b-463b-9f72-84e888bef88c\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 18:22:25.491325 master-0 kubenswrapper[7337]: I0312 18:22:25.491239 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05a33cf5-173b-463b-9f72-84e888bef88c-var-lock\") pod \"installer-2-master-0\" (UID: \"05a33cf5-173b-463b-9f72-84e888bef88c\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 18:22:25.491325 master-0 kubenswrapper[7337]: I0312 18:22:25.491292 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05a33cf5-173b-463b-9f72-84e888bef88c-var-lock\") pod \"installer-2-master-0\" (UID: \"05a33cf5-173b-463b-9f72-84e888bef88c\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 18:22:25.491325 master-0 kubenswrapper[7337]: I0312 18:22:25.491296 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05a33cf5-173b-463b-9f72-84e888bef88c-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"05a33cf5-173b-463b-9f72-84e888bef88c\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 18:22:25.492334 master-0 kubenswrapper[7337]: I0312 18:22:25.492296 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:25.492334 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:25.492334 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:25.492334 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:25.492499 master-0 kubenswrapper[7337]: I0312 18:22:25.492341 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:25.508561 master-0 kubenswrapper[7337]: I0312 18:22:25.508500 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05a33cf5-173b-463b-9f72-84e888bef88c-kube-api-access\") pod \"installer-2-master-0\" (UID: \"05a33cf5-173b-463b-9f72-84e888bef88c\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 18:22:25.656616 master-0 kubenswrapper[7337]: I0312 18:22:25.656478 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 18:22:25.714580 master-0 kubenswrapper[7337]: I0312 18:22:25.714525 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4527l_d94dc349-c5cb-4f12-8e48-867030af4981/ingress-operator/2.log" Mar 12 18:22:25.715024 master-0 kubenswrapper[7337]: I0312 18:22:25.714979 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" event={"ID":"d94dc349-c5cb-4f12-8e48-867030af4981","Type":"ContainerStarted","Data":"6c80cd8acac8db8cb197069003bd675e1b445a06ed49a0673a80b100538ac223"} Mar 12 18:22:26.078342 master-0 kubenswrapper[7337]: I0312 18:22:26.078297 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 12 18:22:26.080185 master-0 kubenswrapper[7337]: W0312 18:22:26.079510 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod05a33cf5_173b_463b_9f72_84e888bef88c.slice/crio-dd87a8e3a2db22b923ad48f9061ad75a9c5e4246215719b24ff10e19f1c10ecc WatchSource:0}: Error finding container dd87a8e3a2db22b923ad48f9061ad75a9c5e4246215719b24ff10e19f1c10ecc: Status 404 returned error can't find the container with id dd87a8e3a2db22b923ad48f9061ad75a9c5e4246215719b24ff10e19f1c10ecc Mar 12 18:22:26.492970 master-0 kubenswrapper[7337]: I0312 18:22:26.492886 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:26.492970 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:26.492970 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:26.492970 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:26.493369 master-0 
kubenswrapper[7337]: I0312 18:22:26.492972 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:26.726109 master-0 kubenswrapper[7337]: I0312 18:22:26.725903 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"05a33cf5-173b-463b-9f72-84e888bef88c","Type":"ContainerStarted","Data":"8fc3b2c7306c49a93f4bae8dd3316d6c06ff111bf95682381984a0a7e932e287"} Mar 12 18:22:26.726109 master-0 kubenswrapper[7337]: I0312 18:22:26.725966 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"05a33cf5-173b-463b-9f72-84e888bef88c","Type":"ContainerStarted","Data":"dd87a8e3a2db22b923ad48f9061ad75a9c5e4246215719b24ff10e19f1c10ecc"} Mar 12 18:22:26.754827 master-0 kubenswrapper[7337]: I0312 18:22:26.753616 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=1.753594164 podStartE2EDuration="1.753594164s" podCreationTimestamp="2026-03-12 18:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:22:26.752580138 +0000 UTC m=+547.221181125" watchObservedRunningTime="2026-03-12 18:22:26.753594164 +0000 UTC m=+547.222195141" Mar 12 18:22:27.494509 master-0 kubenswrapper[7337]: I0312 18:22:27.494434 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:27.494509 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:27.494509 master-0 
kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:27.494509 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:27.495161 master-0 kubenswrapper[7337]: I0312 18:22:27.494503 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:28.492753 master-0 kubenswrapper[7337]: I0312 18:22:28.492665 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:28.492753 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:28.492753 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:28.492753 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:28.493239 master-0 kubenswrapper[7337]: I0312 18:22:28.492863 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:29.493141 master-0 kubenswrapper[7337]: I0312 18:22:29.493068 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:29.493141 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:29.493141 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:29.493141 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:29.493734 master-0 kubenswrapper[7337]: I0312 18:22:29.493143 7337 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:30.493203 master-0 kubenswrapper[7337]: I0312 18:22:30.493046 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:30.493203 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:30.493203 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:30.493203 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:30.493799 master-0 kubenswrapper[7337]: I0312 18:22:30.493181 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:31.493208 master-0 kubenswrapper[7337]: I0312 18:22:31.493127 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:31.493208 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:31.493208 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:31.493208 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:31.493741 master-0 kubenswrapper[7337]: I0312 18:22:31.493218 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 12 18:22:32.492077 master-0 kubenswrapper[7337]: I0312 18:22:32.491968 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:32.492077 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:32.492077 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:32.492077 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:32.492586 master-0 kubenswrapper[7337]: I0312 18:22:32.492085 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:33.491757 master-0 kubenswrapper[7337]: I0312 18:22:33.491676 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:33.491757 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:33.491757 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:33.491757 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:33.492377 master-0 kubenswrapper[7337]: I0312 18:22:33.491772 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:34.492103 master-0 kubenswrapper[7337]: I0312 18:22:34.491940 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:34.492103 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:34.492103 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:34.492103 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:34.492682 master-0 kubenswrapper[7337]: I0312 18:22:34.492114 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:35.492649 master-0 kubenswrapper[7337]: I0312 18:22:35.492551 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:35.492649 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:35.492649 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:35.492649 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:35.492649 master-0 kubenswrapper[7337]: I0312 18:22:35.492639 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:36.494181 master-0 kubenswrapper[7337]: I0312 18:22:36.494109 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:36.494181 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 
18:22:36.494181 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:36.494181 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:36.494181 master-0 kubenswrapper[7337]: I0312 18:22:36.494161 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:36.535232 master-0 kubenswrapper[7337]: I0312 18:22:36.535158 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 12 18:22:36.535460 master-0 kubenswrapper[7337]: I0312 18:22:36.535414 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-master-0" podUID="05a33cf5-173b-463b-9f72-84e888bef88c" containerName="installer" containerID="cri-o://8fc3b2c7306c49a93f4bae8dd3316d6c06ff111bf95682381984a0a7e932e287" gracePeriod=30 Mar 12 18:22:37.491720 master-0 kubenswrapper[7337]: I0312 18:22:37.491662 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:37.491720 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:37.491720 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:37.491720 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:37.492144 master-0 kubenswrapper[7337]: I0312 18:22:37.491727 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:38.491611 master-0 kubenswrapper[7337]: I0312 18:22:38.491550 7337 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:38.491611 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:38.491611 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:38.491611 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:38.491611 master-0 kubenswrapper[7337]: I0312 18:22:38.491606 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:39.492007 master-0 kubenswrapper[7337]: I0312 18:22:39.491868 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:39.492007 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:39.492007 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:39.492007 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:39.492987 master-0 kubenswrapper[7337]: I0312 18:22:39.492012 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:39.716792 master-0 kubenswrapper[7337]: I0312 18:22:39.716740 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 12 18:22:39.717604 master-0 kubenswrapper[7337]: I0312 18:22:39.717566 7337 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 18:22:39.736635 master-0 kubenswrapper[7337]: I0312 18:22:39.735771 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 12 18:22:39.919814 master-0 kubenswrapper[7337]: I0312 18:22:39.919759 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2bbd04e-d147-4343-9e5d-300e42de9dbb-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"e2bbd04e-d147-4343-9e5d-300e42de9dbb\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 18:22:39.920131 master-0 kubenswrapper[7337]: I0312 18:22:39.920098 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2bbd04e-d147-4343-9e5d-300e42de9dbb-var-lock\") pod \"installer-3-master-0\" (UID: \"e2bbd04e-d147-4343-9e5d-300e42de9dbb\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 18:22:39.920131 master-0 kubenswrapper[7337]: I0312 18:22:39.920129 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2bbd04e-d147-4343-9e5d-300e42de9dbb-kube-api-access\") pod \"installer-3-master-0\" (UID: \"e2bbd04e-d147-4343-9e5d-300e42de9dbb\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 18:22:40.021554 master-0 kubenswrapper[7337]: I0312 18:22:40.021482 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2bbd04e-d147-4343-9e5d-300e42de9dbb-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"e2bbd04e-d147-4343-9e5d-300e42de9dbb\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 18:22:40.021753 master-0 kubenswrapper[7337]: I0312 18:22:40.021584 7337 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2bbd04e-d147-4343-9e5d-300e42de9dbb-var-lock\") pod \"installer-3-master-0\" (UID: \"e2bbd04e-d147-4343-9e5d-300e42de9dbb\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 18:22:40.021753 master-0 kubenswrapper[7337]: I0312 18:22:40.021608 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2bbd04e-d147-4343-9e5d-300e42de9dbb-kube-api-access\") pod \"installer-3-master-0\" (UID: \"e2bbd04e-d147-4343-9e5d-300e42de9dbb\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 18:22:40.022642 master-0 kubenswrapper[7337]: I0312 18:22:40.021954 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2bbd04e-d147-4343-9e5d-300e42de9dbb-var-lock\") pod \"installer-3-master-0\" (UID: \"e2bbd04e-d147-4343-9e5d-300e42de9dbb\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 18:22:40.022642 master-0 kubenswrapper[7337]: I0312 18:22:40.022049 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2bbd04e-d147-4343-9e5d-300e42de9dbb-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"e2bbd04e-d147-4343-9e5d-300e42de9dbb\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 18:22:40.039213 master-0 kubenswrapper[7337]: I0312 18:22:40.039137 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2bbd04e-d147-4343-9e5d-300e42de9dbb-kube-api-access\") pod \"installer-3-master-0\" (UID: \"e2bbd04e-d147-4343-9e5d-300e42de9dbb\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 18:22:40.337547 master-0 kubenswrapper[7337]: I0312 18:22:40.337413 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 18:22:40.492368 master-0 kubenswrapper[7337]: I0312 18:22:40.492328 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:40.492368 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:40.492368 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:40.492368 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:40.492955 master-0 kubenswrapper[7337]: I0312 18:22:40.492377 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:40.714478 master-0 kubenswrapper[7337]: I0312 18:22:40.714372 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 12 18:22:40.722789 master-0 kubenswrapper[7337]: W0312 18:22:40.722087 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode2bbd04e_d147_4343_9e5d_300e42de9dbb.slice/crio-f2bc7ecb16572f7c27823d06dc4505f361b41e0cd30f5c3e98a48ae3773ffae2 WatchSource:0}: Error finding container f2bc7ecb16572f7c27823d06dc4505f361b41e0cd30f5c3e98a48ae3773ffae2: Status 404 returned error can't find the container with id f2bc7ecb16572f7c27823d06dc4505f361b41e0cd30f5c3e98a48ae3773ffae2 Mar 12 18:22:40.832582 master-0 kubenswrapper[7337]: I0312 18:22:40.832519 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"e2bbd04e-d147-4343-9e5d-300e42de9dbb","Type":"ContainerStarted","Data":"f2bc7ecb16572f7c27823d06dc4505f361b41e0cd30f5c3e98a48ae3773ffae2"} 
Mar 12 18:22:41.492685 master-0 kubenswrapper[7337]: I0312 18:22:41.492625 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:41.492685 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:41.492685 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:41.492685 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:41.493659 master-0 kubenswrapper[7337]: I0312 18:22:41.492697 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:41.843343 master-0 kubenswrapper[7337]: I0312 18:22:41.843281 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"e2bbd04e-d147-4343-9e5d-300e42de9dbb","Type":"ContainerStarted","Data":"6ad211f881c1b186d3265c89c0f87451f6acb70c49f4334e2e7867092be6c91a"} Mar 12 18:22:41.866706 master-0 kubenswrapper[7337]: I0312 18:22:41.866627 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=2.8666085519999998 podStartE2EDuration="2.866608552s" podCreationTimestamp="2026-03-12 18:22:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:22:41.864917269 +0000 UTC m=+562.333518226" watchObservedRunningTime="2026-03-12 18:22:41.866608552 +0000 UTC m=+562.335209499" Mar 12 18:22:42.492150 master-0 kubenswrapper[7337]: I0312 18:22:42.492109 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:42.492150 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:42.492150 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:42.492150 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:42.492462 master-0 kubenswrapper[7337]: I0312 18:22:42.492166 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:43.626990 master-0 kubenswrapper[7337]: I0312 18:22:43.626938 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:43.626990 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:43.626990 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:43.626990 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:43.626990 master-0 kubenswrapper[7337]: I0312 18:22:43.626987 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:44.501304 master-0 kubenswrapper[7337]: I0312 18:22:44.501227 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:44.501304 master-0 kubenswrapper[7337]: 
[-]has-synced failed: reason withheld Mar 12 18:22:44.501304 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:44.501304 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:44.501586 master-0 kubenswrapper[7337]: I0312 18:22:44.501318 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:45.492925 master-0 kubenswrapper[7337]: I0312 18:22:45.492831 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:45.492925 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:45.492925 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:45.492925 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:45.493443 master-0 kubenswrapper[7337]: I0312 18:22:45.492958 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:46.492056 master-0 kubenswrapper[7337]: I0312 18:22:46.491971 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:46.492056 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:46.492056 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:46.492056 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:46.492056 master-0 
kubenswrapper[7337]: I0312 18:22:46.492058 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:47.495894 master-0 kubenswrapper[7337]: I0312 18:22:47.495319 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:47.495894 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:47.495894 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:47.495894 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:47.495894 master-0 kubenswrapper[7337]: I0312 18:22:47.495451 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:48.493066 master-0 kubenswrapper[7337]: I0312 18:22:48.493015 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:48.493066 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:48.493066 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:48.493066 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:48.493356 master-0 kubenswrapper[7337]: I0312 18:22:48.493078 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:49.491950 master-0 kubenswrapper[7337]: I0312 18:22:49.491894 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:49.491950 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:49.491950 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:49.491950 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:49.492481 master-0 kubenswrapper[7337]: I0312 18:22:49.491975 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:50.492093 master-0 kubenswrapper[7337]: I0312 18:22:50.491970 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:50.492093 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:50.492093 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:50.492093 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:50.492093 master-0 kubenswrapper[7337]: I0312 18:22:50.492024 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:51.493088 master-0 kubenswrapper[7337]: I0312 18:22:51.493008 7337 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:51.493088 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:51.493088 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:51.493088 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:51.493739 master-0 kubenswrapper[7337]: I0312 18:22:51.493122 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:52.408305 master-0 kubenswrapper[7337]: I0312 18:22:52.408248 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 12 18:22:52.419361 master-0 kubenswrapper[7337]: I0312 18:22:52.419316 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 12 18:22:52.422990 master-0 kubenswrapper[7337]: I0312 18:22:52.421748 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 12 18:22:52.422990 master-0 kubenswrapper[7337]: I0312 18:22:52.422542 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-9qp64" Mar 12 18:22:52.432761 master-0 kubenswrapper[7337]: I0312 18:22:52.431821 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 12 18:22:52.491867 master-0 kubenswrapper[7337]: I0312 18:22:52.491811 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:52.491867 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:52.491867 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:52.491867 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:52.492199 master-0 kubenswrapper[7337]: I0312 18:22:52.491882 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:52.550964 master-0 kubenswrapper[7337]: I0312 18:22:52.550875 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30102cc9-45f8-46f8-bb34-eec48fdb297d-kube-api-access\") pod \"installer-2-master-0\" (UID: \"30102cc9-45f8-46f8-bb34-eec48fdb297d\") " pod="openshift-etcd/installer-2-master-0" Mar 12 18:22:52.550964 master-0 kubenswrapper[7337]: I0312 18:22:52.550954 7337 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/30102cc9-45f8-46f8-bb34-eec48fdb297d-var-lock\") pod \"installer-2-master-0\" (UID: \"30102cc9-45f8-46f8-bb34-eec48fdb297d\") " pod="openshift-etcd/installer-2-master-0" Mar 12 18:22:52.551721 master-0 kubenswrapper[7337]: I0312 18:22:52.550986 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30102cc9-45f8-46f8-bb34-eec48fdb297d-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"30102cc9-45f8-46f8-bb34-eec48fdb297d\") " pod="openshift-etcd/installer-2-master-0" Mar 12 18:22:52.652928 master-0 kubenswrapper[7337]: I0312 18:22:52.652855 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30102cc9-45f8-46f8-bb34-eec48fdb297d-kube-api-access\") pod \"installer-2-master-0\" (UID: \"30102cc9-45f8-46f8-bb34-eec48fdb297d\") " pod="openshift-etcd/installer-2-master-0" Mar 12 18:22:52.652928 master-0 kubenswrapper[7337]: I0312 18:22:52.652907 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/30102cc9-45f8-46f8-bb34-eec48fdb297d-var-lock\") pod \"installer-2-master-0\" (UID: \"30102cc9-45f8-46f8-bb34-eec48fdb297d\") " pod="openshift-etcd/installer-2-master-0" Mar 12 18:22:52.652928 master-0 kubenswrapper[7337]: I0312 18:22:52.652940 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30102cc9-45f8-46f8-bb34-eec48fdb297d-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"30102cc9-45f8-46f8-bb34-eec48fdb297d\") " pod="openshift-etcd/installer-2-master-0" Mar 12 18:22:52.653335 master-0 kubenswrapper[7337]: I0312 18:22:52.653024 7337 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/30102cc9-45f8-46f8-bb34-eec48fdb297d-var-lock\") pod \"installer-2-master-0\" (UID: \"30102cc9-45f8-46f8-bb34-eec48fdb297d\") " pod="openshift-etcd/installer-2-master-0" Mar 12 18:22:52.653335 master-0 kubenswrapper[7337]: I0312 18:22:52.653069 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30102cc9-45f8-46f8-bb34-eec48fdb297d-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"30102cc9-45f8-46f8-bb34-eec48fdb297d\") " pod="openshift-etcd/installer-2-master-0" Mar 12 18:22:52.674217 master-0 kubenswrapper[7337]: I0312 18:22:52.674092 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30102cc9-45f8-46f8-bb34-eec48fdb297d-kube-api-access\") pod \"installer-2-master-0\" (UID: \"30102cc9-45f8-46f8-bb34-eec48fdb297d\") " pod="openshift-etcd/installer-2-master-0" Mar 12 18:22:52.744928 master-0 kubenswrapper[7337]: I0312 18:22:52.744821 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 12 18:22:53.192968 master-0 kubenswrapper[7337]: I0312 18:22:53.192927 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 12 18:22:53.491734 master-0 kubenswrapper[7337]: I0312 18:22:53.491671 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:53.491734 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:53.491734 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:53.491734 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:53.492044 master-0 kubenswrapper[7337]: I0312 18:22:53.491746 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:53.920156 master-0 kubenswrapper[7337]: I0312 18:22:53.920087 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"30102cc9-45f8-46f8-bb34-eec48fdb297d","Type":"ContainerStarted","Data":"15f58aad78a995767697fd5b4bbf18700052b6bed0c718d5b6a5383ae0c8a9a8"} Mar 12 18:22:53.920156 master-0 kubenswrapper[7337]: I0312 18:22:53.920153 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"30102cc9-45f8-46f8-bb34-eec48fdb297d","Type":"ContainerStarted","Data":"d0fb27537deeb6ada4bd6bd0dc8f77614abe096d108a40275f771f7f507fd43a"} Mar 12 18:22:53.942863 master-0 kubenswrapper[7337]: I0312 18:22:53.942791 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=1.942772245 
podStartE2EDuration="1.942772245s" podCreationTimestamp="2026-03-12 18:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:22:53.937308267 +0000 UTC m=+574.405909244" watchObservedRunningTime="2026-03-12 18:22:53.942772245 +0000 UTC m=+574.411373212" Mar 12 18:22:54.492337 master-0 kubenswrapper[7337]: I0312 18:22:54.492281 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:54.492337 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:54.492337 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:54.492337 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:54.492665 master-0 kubenswrapper[7337]: I0312 18:22:54.492342 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:22:55.492373 master-0 kubenswrapper[7337]: I0312 18:22:55.492298 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:22:55.492373 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:22:55.492373 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:22:55.492373 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:22:55.492373 master-0 kubenswrapper[7337]: I0312 18:22:55.492359 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" 
podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:22:56.492260 master-0 kubenswrapper[7337]: I0312 18:22:56.492203 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:22:56.492260 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:22:56.492260 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:22:56.492260 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:22:56.492998 master-0 kubenswrapper[7337]: I0312 18:22:56.492264 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:22:57.491338 master-0 kubenswrapper[7337]: I0312 18:22:57.491305 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:22:57.491338 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:22:57.491338 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:22:57.491338 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:22:57.491647 master-0 kubenswrapper[7337]: I0312 18:22:57.491350 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:22:57.510240 master-0 kubenswrapper[7337]: I0312 18:22:57.510196 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_05a33cf5-173b-463b-9f72-84e888bef88c/installer/0.log"
Mar 12 18:22:57.510723 master-0 kubenswrapper[7337]: I0312 18:22:57.510262 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 12 18:22:57.631879 master-0 kubenswrapper[7337]: I0312 18:22:57.631822 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05a33cf5-173b-463b-9f72-84e888bef88c-kube-api-access\") pod \"05a33cf5-173b-463b-9f72-84e888bef88c\" (UID: \"05a33cf5-173b-463b-9f72-84e888bef88c\") "
Mar 12 18:22:57.631879 master-0 kubenswrapper[7337]: I0312 18:22:57.631864 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05a33cf5-173b-463b-9f72-84e888bef88c-var-lock\") pod \"05a33cf5-173b-463b-9f72-84e888bef88c\" (UID: \"05a33cf5-173b-463b-9f72-84e888bef88c\") "
Mar 12 18:22:57.632096 master-0 kubenswrapper[7337]: I0312 18:22:57.631953 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05a33cf5-173b-463b-9f72-84e888bef88c-kubelet-dir\") pod \"05a33cf5-173b-463b-9f72-84e888bef88c\" (UID: \"05a33cf5-173b-463b-9f72-84e888bef88c\") "
Mar 12 18:22:57.632142 master-0 kubenswrapper[7337]: I0312 18:22:57.632085 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05a33cf5-173b-463b-9f72-84e888bef88c-var-lock" (OuterVolumeSpecName: "var-lock") pod "05a33cf5-173b-463b-9f72-84e888bef88c" (UID: "05a33cf5-173b-463b-9f72-84e888bef88c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:22:57.632242 master-0 kubenswrapper[7337]: I0312 18:22:57.632195 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05a33cf5-173b-463b-9f72-84e888bef88c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "05a33cf5-173b-463b-9f72-84e888bef88c" (UID: "05a33cf5-173b-463b-9f72-84e888bef88c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:22:57.632290 master-0 kubenswrapper[7337]: I0312 18:22:57.632234 7337 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05a33cf5-173b-463b-9f72-84e888bef88c-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 18:22:57.634479 master-0 kubenswrapper[7337]: I0312 18:22:57.634429 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05a33cf5-173b-463b-9f72-84e888bef88c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "05a33cf5-173b-463b-9f72-84e888bef88c" (UID: "05a33cf5-173b-463b-9f72-84e888bef88c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:22:57.733450 master-0 kubenswrapper[7337]: I0312 18:22:57.733366 7337 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05a33cf5-173b-463b-9f72-84e888bef88c-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 18:22:57.733657 master-0 kubenswrapper[7337]: I0312 18:22:57.733643 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05a33cf5-173b-463b-9f72-84e888bef88c-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 12 18:22:57.947403 master-0 kubenswrapper[7337]: I0312 18:22:57.947375 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_05a33cf5-173b-463b-9f72-84e888bef88c/installer/0.log"
Mar 12 18:22:57.947739 master-0 kubenswrapper[7337]: I0312 18:22:57.947696 7337 generic.go:334] "Generic (PLEG): container finished" podID="05a33cf5-173b-463b-9f72-84e888bef88c" containerID="8fc3b2c7306c49a93f4bae8dd3316d6c06ff111bf95682381984a0a7e932e287" exitCode=1
Mar 12 18:22:57.947838 master-0 kubenswrapper[7337]: I0312 18:22:57.947772 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 12 18:22:57.948038 master-0 kubenswrapper[7337]: I0312 18:22:57.947780 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"05a33cf5-173b-463b-9f72-84e888bef88c","Type":"ContainerDied","Data":"8fc3b2c7306c49a93f4bae8dd3316d6c06ff111bf95682381984a0a7e932e287"}
Mar 12 18:22:57.948105 master-0 kubenswrapper[7337]: I0312 18:22:57.948062 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"05a33cf5-173b-463b-9f72-84e888bef88c","Type":"ContainerDied","Data":"dd87a8e3a2db22b923ad48f9061ad75a9c5e4246215719b24ff10e19f1c10ecc"}
Mar 12 18:22:57.948105 master-0 kubenswrapper[7337]: I0312 18:22:57.948090 7337 scope.go:117] "RemoveContainer" containerID="8fc3b2c7306c49a93f4bae8dd3316d6c06ff111bf95682381984a0a7e932e287"
Mar 12 18:22:57.973589 master-0 kubenswrapper[7337]: I0312 18:22:57.973543 7337 scope.go:117] "RemoveContainer" containerID="8fc3b2c7306c49a93f4bae8dd3316d6c06ff111bf95682381984a0a7e932e287"
Mar 12 18:22:57.974552 master-0 kubenswrapper[7337]: E0312 18:22:57.974383 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fc3b2c7306c49a93f4bae8dd3316d6c06ff111bf95682381984a0a7e932e287\": container with ID starting with 8fc3b2c7306c49a93f4bae8dd3316d6c06ff111bf95682381984a0a7e932e287 not found: ID does not exist" containerID="8fc3b2c7306c49a93f4bae8dd3316d6c06ff111bf95682381984a0a7e932e287"
Mar 12 18:22:57.974552 master-0 kubenswrapper[7337]: I0312 18:22:57.974424 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fc3b2c7306c49a93f4bae8dd3316d6c06ff111bf95682381984a0a7e932e287"} err="failed to get container status \"8fc3b2c7306c49a93f4bae8dd3316d6c06ff111bf95682381984a0a7e932e287\": rpc error: code = NotFound desc = could not find container \"8fc3b2c7306c49a93f4bae8dd3316d6c06ff111bf95682381984a0a7e932e287\": container with ID starting with 8fc3b2c7306c49a93f4bae8dd3316d6c06ff111bf95682381984a0a7e932e287 not found: ID does not exist"
Mar 12 18:22:57.975028 master-0 kubenswrapper[7337]: I0312 18:22:57.975006 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 12 18:22:57.985496 master-0 kubenswrapper[7337]: I0312 18:22:57.983868 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 12 18:22:58.491996 master-0 kubenswrapper[7337]: I0312 18:22:58.491951 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:22:58.491996 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:22:58.491996 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:22:58.491996 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:22:58.492256 master-0 kubenswrapper[7337]: I0312 18:22:58.492010 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:22:59.494012 master-0 kubenswrapper[7337]: I0312 18:22:59.493889 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:22:59.494012 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:22:59.494012 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:22:59.494012 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:22:59.495117 master-0 kubenswrapper[7337]: I0312 18:22:59.494052 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:22:59.736713 master-0 kubenswrapper[7337]: I0312 18:22:59.736584 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05a33cf5-173b-463b-9f72-84e888bef88c" path="/var/lib/kubelet/pods/05a33cf5-173b-463b-9f72-84e888bef88c/volumes"
Mar 12 18:23:00.493188 master-0 kubenswrapper[7337]: I0312 18:23:00.493051 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:00.493188 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:00.493188 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:00.493188 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:00.493188 master-0 kubenswrapper[7337]: I0312 18:23:00.493151 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:01.491830 master-0 kubenswrapper[7337]: I0312 18:23:01.491768 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:01.491830 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:01.491830 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:01.491830 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:01.492456 master-0 kubenswrapper[7337]: I0312 18:23:01.491847 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:02.492758 master-0 kubenswrapper[7337]: I0312 18:23:02.492700 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:02.492758 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:02.492758 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:02.492758 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:02.493777 master-0 kubenswrapper[7337]: I0312 18:23:02.492764 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:03.491972 master-0 kubenswrapper[7337]: I0312 18:23:03.491898 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:03.491972 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:03.491972 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:03.491972 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:03.492320 master-0 kubenswrapper[7337]: I0312 18:23:03.491999 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:04.492113 master-0 kubenswrapper[7337]: I0312 18:23:04.492059 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:04.492113 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:04.492113 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:04.492113 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:04.492876 master-0 kubenswrapper[7337]: I0312 18:23:04.492754 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:05.492856 master-0 kubenswrapper[7337]: I0312 18:23:05.492809 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:05.492856 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:05.492856 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:05.492856 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:05.493720 master-0 kubenswrapper[7337]: I0312 18:23:05.493686 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:06.492169 master-0 kubenswrapper[7337]: I0312 18:23:06.492111 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:06.492169 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:06.492169 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:06.492169 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:06.492545 master-0 kubenswrapper[7337]: I0312 18:23:06.492178 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:07.490955 master-0 kubenswrapper[7337]: I0312 18:23:07.490903 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:07.490955 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:07.490955 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:07.490955 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:07.490955 master-0 kubenswrapper[7337]: I0312 18:23:07.490952 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:07.932114 master-0 kubenswrapper[7337]: I0312 18:23:07.932049 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Mar 12 18:23:07.932381 master-0 kubenswrapper[7337]: E0312 18:23:07.932363 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05a33cf5-173b-463b-9f72-84e888bef88c" containerName="installer"
Mar 12 18:23:07.932434 master-0 kubenswrapper[7337]: I0312 18:23:07.932384 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="05a33cf5-173b-463b-9f72-84e888bef88c" containerName="installer"
Mar 12 18:23:07.932562 master-0 kubenswrapper[7337]: I0312 18:23:07.932548 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="05a33cf5-173b-463b-9f72-84e888bef88c" containerName="installer"
Mar 12 18:23:07.933109 master-0 kubenswrapper[7337]: I0312 18:23:07.933059 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 18:23:07.935324 master-0 kubenswrapper[7337]: I0312 18:23:07.935305 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 12 18:23:07.936335 master-0 kubenswrapper[7337]: I0312 18:23:07.936312 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-99kdt"
Mar 12 18:23:07.947641 master-0 kubenswrapper[7337]: I0312 18:23:07.945801 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Mar 12 18:23:08.086433 master-0 kubenswrapper[7337]: I0312 18:23:08.086398 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/99f63924-b198-4954-ba14-5c48e8830ec0-var-lock\") pod \"installer-5-master-0\" (UID: \"99f63924-b198-4954-ba14-5c48e8830ec0\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 18:23:08.086779 master-0 kubenswrapper[7337]: I0312 18:23:08.086762 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99f63924-b198-4954-ba14-5c48e8830ec0-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"99f63924-b198-4954-ba14-5c48e8830ec0\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 18:23:08.086886 master-0 kubenswrapper[7337]: I0312 18:23:08.086873 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99f63924-b198-4954-ba14-5c48e8830ec0-kube-api-access\") pod \"installer-5-master-0\" (UID: \"99f63924-b198-4954-ba14-5c48e8830ec0\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 18:23:08.188613 master-0 kubenswrapper[7337]: I0312 18:23:08.188482 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99f63924-b198-4954-ba14-5c48e8830ec0-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"99f63924-b198-4954-ba14-5c48e8830ec0\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 18:23:08.189134 master-0 kubenswrapper[7337]: I0312 18:23:08.188684 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99f63924-b198-4954-ba14-5c48e8830ec0-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"99f63924-b198-4954-ba14-5c48e8830ec0\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 18:23:08.189134 master-0 kubenswrapper[7337]: I0312 18:23:08.188904 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99f63924-b198-4954-ba14-5c48e8830ec0-kube-api-access\") pod \"installer-5-master-0\" (UID: \"99f63924-b198-4954-ba14-5c48e8830ec0\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 18:23:08.189134 master-0 kubenswrapper[7337]: I0312 18:23:08.188985 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/99f63924-b198-4954-ba14-5c48e8830ec0-var-lock\") pod \"installer-5-master-0\" (UID: \"99f63924-b198-4954-ba14-5c48e8830ec0\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 18:23:08.189134 master-0 kubenswrapper[7337]: I0312 18:23:08.189084 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/99f63924-b198-4954-ba14-5c48e8830ec0-var-lock\") pod \"installer-5-master-0\" (UID: \"99f63924-b198-4954-ba14-5c48e8830ec0\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 18:23:08.207851 master-0 kubenswrapper[7337]: I0312 18:23:08.207781 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99f63924-b198-4954-ba14-5c48e8830ec0-kube-api-access\") pod \"installer-5-master-0\" (UID: \"99f63924-b198-4954-ba14-5c48e8830ec0\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 18:23:08.269738 master-0 kubenswrapper[7337]: I0312 18:23:08.269625 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 18:23:08.491430 master-0 kubenswrapper[7337]: I0312 18:23:08.491325 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:08.491430 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:08.491430 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:08.491430 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:08.491430 master-0 kubenswrapper[7337]: I0312 18:23:08.491394 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:08.686266 master-0 kubenswrapper[7337]: I0312 18:23:08.686220 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Mar 12 18:23:08.695749 master-0 kubenswrapper[7337]: W0312 18:23:08.694950 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod99f63924_b198_4954_ba14_5c48e8830ec0.slice/crio-b2aaa458623aaac6f6d633ebf1840bf60a09bfe111c0fe4eefba933212240641 WatchSource:0}: Error finding container b2aaa458623aaac6f6d633ebf1840bf60a09bfe111c0fe4eefba933212240641: Status 404 returned error can't find the container with id b2aaa458623aaac6f6d633ebf1840bf60a09bfe111c0fe4eefba933212240641
Mar 12 18:23:09.025380 master-0 kubenswrapper[7337]: I0312 18:23:09.025336 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"99f63924-b198-4954-ba14-5c48e8830ec0","Type":"ContainerStarted","Data":"bcd9c4470387a4f73246459472597ab7bf839663226c4513e3b54a4697a699f9"}
Mar 12 18:23:09.025380 master-0 kubenswrapper[7337]: I0312 18:23:09.025382 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"99f63924-b198-4954-ba14-5c48e8830ec0","Type":"ContainerStarted","Data":"b2aaa458623aaac6f6d633ebf1840bf60a09bfe111c0fe4eefba933212240641"}
Mar 12 18:23:09.041283 master-0 kubenswrapper[7337]: I0312 18:23:09.041203 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-0" podStartSLOduration=2.041185355 podStartE2EDuration="2.041185355s" podCreationTimestamp="2026-03-12 18:23:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:23:09.040254371 +0000 UTC m=+589.508855338" watchObservedRunningTime="2026-03-12 18:23:09.041185355 +0000 UTC m=+589.509786302"
Mar 12 18:23:09.493300 master-0 kubenswrapper[7337]: I0312 18:23:09.493230 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:09.493300 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:09.493300 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:09.493300 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:09.494297 master-0 kubenswrapper[7337]: I0312 18:23:09.493310 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:10.493472 master-0 kubenswrapper[7337]: I0312 18:23:10.493282 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:10.493472 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:10.493472 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:10.493472 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:10.493472 master-0 kubenswrapper[7337]: I0312 18:23:10.493401 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:11.492382 master-0 kubenswrapper[7337]: I0312 18:23:11.492307 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:11.492382 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:11.492382 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:11.492382 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:11.492382 master-0 kubenswrapper[7337]: I0312 18:23:11.492375 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:12.492778 master-0 kubenswrapper[7337]: I0312 18:23:12.492708 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:12.492778 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:12.492778 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:12.492778 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:12.493477 master-0 kubenswrapper[7337]: I0312 18:23:12.492785 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:13.493320 master-0 kubenswrapper[7337]: I0312 18:23:13.493235 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:13.493320 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:13.493320 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:13.493320 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:13.494348 master-0 kubenswrapper[7337]: I0312 18:23:13.493342 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:14.493000 master-0 kubenswrapper[7337]: I0312 18:23:14.492923 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:14.493000 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:14.493000 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:14.493000 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:14.494156 master-0 kubenswrapper[7337]: I0312 18:23:14.492999 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:15.492223 master-0 kubenswrapper[7337]: I0312 18:23:15.492178 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:15.492223 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:15.492223 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:15.492223 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:15.492610 master-0 kubenswrapper[7337]: I0312 18:23:15.492581 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:16.491645 master-0 kubenswrapper[7337]: I0312 18:23:16.491570 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:16.491645 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:16.491645 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:16.491645 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:16.492306 master-0 kubenswrapper[7337]: I0312 18:23:16.491655 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:17.491227 master-0 kubenswrapper[7337]: I0312 18:23:17.491181 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:17.491227 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:17.491227 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:17.491227 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:17.491660 master-0 kubenswrapper[7337]: I0312 18:23:17.491242 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:18.492238 master-0 kubenswrapper[7337]: I0312 18:23:18.492173 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:18.492238 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:18.492238 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:18.492238 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:18.492238 master-0 kubenswrapper[7337]: I0312 18:23:18.492231 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:19.493507 master-0 kubenswrapper[7337]: I0312 18:23:19.493438 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:19.493507 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:19.493507 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:19.493507 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:19.494886 master-0 kubenswrapper[7337]: I0312 18:23:19.493587 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:20.494018 master-0 kubenswrapper[7337]: I0312 18:23:20.493779 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:20.494018 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:20.494018 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:20.494018 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:20.494018 master-0 kubenswrapper[7337]: I0312 18:23:20.493939 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:21.492098 master-0 kubenswrapper[7337]: I0312 18:23:21.492061 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 
18:23:21.492098 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:21.492098 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:21.492098 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:21.492433 master-0 kubenswrapper[7337]: I0312 18:23:21.492409 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:22.491981 master-0 kubenswrapper[7337]: I0312 18:23:22.491930 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:22.491981 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:22.491981 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:22.491981 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:22.493130 master-0 kubenswrapper[7337]: I0312 18:23:22.493087 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:23.492223 master-0 kubenswrapper[7337]: I0312 18:23:23.492145 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:23.492223 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:23.492223 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:23.492223 master-0 kubenswrapper[7337]: healthz 
check failed Mar 12 18:23:23.492914 master-0 kubenswrapper[7337]: I0312 18:23:23.492223 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:24.349910 master-0 kubenswrapper[7337]: I0312 18:23:24.349859 7337 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 12 18:23:24.350753 master-0 kubenswrapper[7337]: I0312 18:23:24.350694 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" containerID="cri-o://e44a9cc4aee80272b38b5e4941479361f9126b94fc8be3e31ee711d40c88f185" gracePeriod=30 Mar 12 18:23:24.351472 master-0 kubenswrapper[7337]: I0312 18:23:24.350760 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" containerID="cri-o://feb59d6cd6246e5eb3016a3006c4795b8f2234b2b5776c1e7531d520271356d0" gracePeriod=30 Mar 12 18:23:24.351472 master-0 kubenswrapper[7337]: I0312 18:23:24.350785 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" containerID="cri-o://85481ac128156495a33b73e0081f7dd95a2a0153c6a78beb32cc5e772332a467" gracePeriod=30 Mar 12 18:23:24.351472 master-0 kubenswrapper[7337]: I0312 18:23:24.350801 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" containerID="cri-o://b5ec126109a6232b9aa6b3936f959178287d28c871336513191b8b615b8c76f4" gracePeriod=30 Mar 12 18:23:24.351472 master-0 kubenswrapper[7337]: I0312 
18:23:24.350752 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" containerID="cri-o://36cd53616859a569c9c20c415080914078c0deb1d74d4eb2e8bfe2dd611dc704" gracePeriod=30 Mar 12 18:23:24.355619 master-0 kubenswrapper[7337]: I0312 18:23:24.353320 7337 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 12 18:23:24.355619 master-0 kubenswrapper[7337]: E0312 18:23:24.353853 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 12 18:23:24.355619 master-0 kubenswrapper[7337]: I0312 18:23:24.353874 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 12 18:23:24.355619 master-0 kubenswrapper[7337]: E0312 18:23:24.353891 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 12 18:23:24.355619 master-0 kubenswrapper[7337]: I0312 18:23:24.353903 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 12 18:23:24.355619 master-0 kubenswrapper[7337]: E0312 18:23:24.353918 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-resources-copy" Mar 12 18:23:24.355619 master-0 kubenswrapper[7337]: I0312 18:23:24.353929 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-resources-copy" Mar 12 18:23:24.355619 master-0 kubenswrapper[7337]: E0312 18:23:24.353955 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 12 18:23:24.355619 master-0 kubenswrapper[7337]: I0312 18:23:24.353966 7337 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 12 18:23:24.355619 master-0 kubenswrapper[7337]: E0312 18:23:24.353982 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 12 18:23:24.355619 master-0 kubenswrapper[7337]: I0312 18:23:24.353994 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 12 18:23:24.355619 master-0 kubenswrapper[7337]: E0312 18:23:24.354019 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="setup" Mar 12 18:23:24.355619 master-0 kubenswrapper[7337]: I0312 18:23:24.354029 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="setup" Mar 12 18:23:24.355619 master-0 kubenswrapper[7337]: E0312 18:23:24.354049 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 12 18:23:24.355619 master-0 kubenswrapper[7337]: I0312 18:23:24.354061 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 12 18:23:24.355619 master-0 kubenswrapper[7337]: E0312 18:23:24.354079 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-ensure-env-vars" Mar 12 18:23:24.355619 master-0 kubenswrapper[7337]: I0312 18:23:24.354090 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-ensure-env-vars" Mar 12 18:23:24.355619 master-0 kubenswrapper[7337]: I0312 18:23:24.354310 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 12 18:23:24.355619 master-0 kubenswrapper[7337]: I0312 18:23:24.354333 7337 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 12 18:23:24.355619 master-0 kubenswrapper[7337]: I0312 18:23:24.354356 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 12 18:23:24.355619 master-0 kubenswrapper[7337]: I0312 18:23:24.354373 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 12 18:23:24.355619 master-0 kubenswrapper[7337]: I0312 18:23:24.354387 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 12 18:23:24.491633 master-0 kubenswrapper[7337]: I0312 18:23:24.491566 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:24.491633 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:24.491633 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:24.491633 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:24.492000 master-0 kubenswrapper[7337]: I0312 18:23:24.491631 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:24.525099 master-0 kubenswrapper[7337]: I0312 18:23:24.525023 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " 
pod="openshift-etcd/etcd-master-0" Mar 12 18:23:24.526191 master-0 kubenswrapper[7337]: I0312 18:23:24.525307 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:23:24.526191 master-0 kubenswrapper[7337]: I0312 18:23:24.525369 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:23:24.526191 master-0 kubenswrapper[7337]: I0312 18:23:24.525420 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:23:24.526191 master-0 kubenswrapper[7337]: I0312 18:23:24.525503 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:23:24.526191 master-0 kubenswrapper[7337]: I0312 18:23:24.525605 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:23:24.626825 master-0 kubenswrapper[7337]: I0312 
18:23:24.626673 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:23:24.626825 master-0 kubenswrapper[7337]: I0312 18:23:24.626772 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:23:24.626825 master-0 kubenswrapper[7337]: I0312 18:23:24.626775 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:23:24.626825 master-0 kubenswrapper[7337]: I0312 18:23:24.626797 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:23:24.626825 master-0 kubenswrapper[7337]: I0312 18:23:24.626828 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:23:24.627271 master-0 kubenswrapper[7337]: I0312 18:23:24.626847 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod 
\"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:23:24.627271 master-0 kubenswrapper[7337]: I0312 18:23:24.626863 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:23:24.627271 master-0 kubenswrapper[7337]: I0312 18:23:24.626874 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:23:24.627271 master-0 kubenswrapper[7337]: I0312 18:23:24.626886 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:23:24.627271 master-0 kubenswrapper[7337]: I0312 18:23:24.626897 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:23:24.627271 master-0 kubenswrapper[7337]: I0312 18:23:24.626909 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:23:24.627271 master-0 kubenswrapper[7337]: I0312 18:23:24.627031 7337 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:23:25.148260 master-0 kubenswrapper[7337]: I0312 18:23:25.148198 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 12 18:23:25.149582 master-0 kubenswrapper[7337]: I0312 18:23:25.149543 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 12 18:23:25.153067 master-0 kubenswrapper[7337]: I0312 18:23:25.153011 7337 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="b5ec126109a6232b9aa6b3936f959178287d28c871336513191b8b615b8c76f4" exitCode=2 Mar 12 18:23:25.153137 master-0 kubenswrapper[7337]: I0312 18:23:25.153076 7337 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="e44a9cc4aee80272b38b5e4941479361f9126b94fc8be3e31ee711d40c88f185" exitCode=0 Mar 12 18:23:25.153137 master-0 kubenswrapper[7337]: I0312 18:23:25.153089 7337 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="85481ac128156495a33b73e0081f7dd95a2a0153c6a78beb32cc5e772332a467" exitCode=2 Mar 12 18:23:25.492560 master-0 kubenswrapper[7337]: I0312 18:23:25.492430 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:25.492560 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:25.492560 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:25.492560 master-0 
kubenswrapper[7337]: healthz check failed Mar 12 18:23:25.492799 master-0 kubenswrapper[7337]: I0312 18:23:25.492547 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:26.492338 master-0 kubenswrapper[7337]: I0312 18:23:26.491875 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:26.492338 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:26.492338 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:26.492338 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:26.492338 master-0 kubenswrapper[7337]: I0312 18:23:26.491955 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:27.491853 master-0 kubenswrapper[7337]: I0312 18:23:27.490751 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:27.491853 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:27.491853 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:27.491853 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:27.491853 master-0 kubenswrapper[7337]: I0312 18:23:27.490799 7337 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:28.492377 master-0 kubenswrapper[7337]: I0312 18:23:28.492322 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:28.492377 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:28.492377 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:28.492377 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:28.493328 master-0 kubenswrapper[7337]: I0312 18:23:28.492385 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:29.493797 master-0 kubenswrapper[7337]: I0312 18:23:29.493699 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:29.493797 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:29.493797 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:29.493797 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:29.495252 master-0 kubenswrapper[7337]: I0312 18:23:29.495195 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:30.493982 
master-0 kubenswrapper[7337]: I0312 18:23:30.493735 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:30.493982 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:30.493982 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:30.493982 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:30.493982 master-0 kubenswrapper[7337]: I0312 18:23:30.493885 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:31.493185 master-0 kubenswrapper[7337]: I0312 18:23:31.493108 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:31.493185 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:31.493185 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:31.493185 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:31.493185 master-0 kubenswrapper[7337]: I0312 18:23:31.493176 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:32.494304 master-0 kubenswrapper[7337]: I0312 18:23:32.494228 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:32.494304 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:32.494304 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:32.494304 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:32.494304 master-0 kubenswrapper[7337]: I0312 18:23:32.494302 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:33.493028 master-0 kubenswrapper[7337]: I0312 18:23:33.492959 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:33.493028 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:33.493028 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:33.493028 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:33.493028 master-0 kubenswrapper[7337]: I0312 18:23:33.493021 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:34.491961 master-0 kubenswrapper[7337]: I0312 18:23:34.491852 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:34.491961 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:34.491961 master-0 
kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:34.491961 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:34.491961 master-0 kubenswrapper[7337]: I0312 18:23:34.491943 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:35.494365 master-0 kubenswrapper[7337]: I0312 18:23:35.494291 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:35.494365 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:35.494365 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:35.494365 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:35.494365 master-0 kubenswrapper[7337]: I0312 18:23:35.494361 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:36.493103 master-0 kubenswrapper[7337]: I0312 18:23:36.493017 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:36.493103 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:36.493103 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:36.493103 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:36.493103 master-0 kubenswrapper[7337]: I0312 18:23:36.493105 7337 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:37.257393 master-0 kubenswrapper[7337]: I0312 18:23:37.257306 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager/0.log" Mar 12 18:23:37.257393 master-0 kubenswrapper[7337]: I0312 18:23:37.257390 7337 generic.go:334] "Generic (PLEG): container finished" podID="39c441a05d91070efc538925475b0a44" containerID="a84352e48f1355ad688a8d43acd0737d8ced53bb92d29ec7f76753f1e69e464d" exitCode=1 Mar 12 18:23:37.258363 master-0 kubenswrapper[7337]: I0312 18:23:37.257436 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"39c441a05d91070efc538925475b0a44","Type":"ContainerDied","Data":"a84352e48f1355ad688a8d43acd0737d8ced53bb92d29ec7f76753f1e69e464d"} Mar 12 18:23:37.258363 master-0 kubenswrapper[7337]: I0312 18:23:37.258095 7337 scope.go:117] "RemoveContainer" containerID="a84352e48f1355ad688a8d43acd0737d8ced53bb92d29ec7f76753f1e69e464d" Mar 12 18:23:37.489463 master-0 kubenswrapper[7337]: I0312 18:23:37.489399 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:23:37.489463 master-0 kubenswrapper[7337]: I0312 18:23:37.489453 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:23:37.489828 master-0 kubenswrapper[7337]: I0312 18:23:37.489583 7337 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:23:37.491801 master-0 
kubenswrapper[7337]: I0312 18:23:37.491738 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:37.491801 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:37.491801 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:37.491801 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:37.492030 master-0 kubenswrapper[7337]: I0312 18:23:37.491832 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:38.266118 master-0 kubenswrapper[7337]: I0312 18:23:38.266048 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager/0.log" Mar 12 18:23:38.266633 master-0 kubenswrapper[7337]: I0312 18:23:38.266136 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"39c441a05d91070efc538925475b0a44","Type":"ContainerStarted","Data":"56c803b302b6c89542dd77ed04fecb43a59a8287926d38c4629dc8bd033d7a46"} Mar 12 18:23:38.492860 master-0 kubenswrapper[7337]: I0312 18:23:38.492775 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:38.492860 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:38.492860 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:38.492860 
master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:38.493306 master-0 kubenswrapper[7337]: I0312 18:23:38.492874 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:39.275103 master-0 kubenswrapper[7337]: I0312 18:23:39.275034 7337 generic.go:334] "Generic (PLEG): container finished" podID="30102cc9-45f8-46f8-bb34-eec48fdb297d" containerID="15f58aad78a995767697fd5b4bbf18700052b6bed0c718d5b6a5383ae0c8a9a8" exitCode=0 Mar 12 18:23:39.275886 master-0 kubenswrapper[7337]: I0312 18:23:39.275166 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"30102cc9-45f8-46f8-bb34-eec48fdb297d","Type":"ContainerDied","Data":"15f58aad78a995767697fd5b4bbf18700052b6bed0c718d5b6a5383ae0c8a9a8"} Mar 12 18:23:39.492706 master-0 kubenswrapper[7337]: I0312 18:23:39.492634 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:39.492706 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:39.492706 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:39.492706 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:39.493019 master-0 kubenswrapper[7337]: I0312 18:23:39.492727 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:39.949448 master-0 kubenswrapper[7337]: E0312 18:23:39.949317 7337 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 18:23:40.493203 master-0 kubenswrapper[7337]: I0312 18:23:40.493113 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:40.493203 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:40.493203 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:40.493203 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:40.494278 master-0 kubenswrapper[7337]: I0312 18:23:40.493238 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:40.677335 master-0 kubenswrapper[7337]: I0312 18:23:40.677240 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 12 18:23:40.788975 master-0 kubenswrapper[7337]: I0312 18:23:40.788877 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30102cc9-45f8-46f8-bb34-eec48fdb297d-kube-api-access\") pod \"30102cc9-45f8-46f8-bb34-eec48fdb297d\" (UID: \"30102cc9-45f8-46f8-bb34-eec48fdb297d\") " Mar 12 18:23:40.788975 master-0 kubenswrapper[7337]: I0312 18:23:40.788956 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/30102cc9-45f8-46f8-bb34-eec48fdb297d-var-lock\") pod \"30102cc9-45f8-46f8-bb34-eec48fdb297d\" (UID: \"30102cc9-45f8-46f8-bb34-eec48fdb297d\") " Mar 12 18:23:40.788975 master-0 kubenswrapper[7337]: I0312 18:23:40.788983 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30102cc9-45f8-46f8-bb34-eec48fdb297d-kubelet-dir\") pod \"30102cc9-45f8-46f8-bb34-eec48fdb297d\" (UID: \"30102cc9-45f8-46f8-bb34-eec48fdb297d\") " Mar 12 18:23:40.789404 master-0 kubenswrapper[7337]: I0312 18:23:40.789163 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30102cc9-45f8-46f8-bb34-eec48fdb297d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "30102cc9-45f8-46f8-bb34-eec48fdb297d" (UID: "30102cc9-45f8-46f8-bb34-eec48fdb297d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:23:40.789404 master-0 kubenswrapper[7337]: I0312 18:23:40.789224 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30102cc9-45f8-46f8-bb34-eec48fdb297d-var-lock" (OuterVolumeSpecName: "var-lock") pod "30102cc9-45f8-46f8-bb34-eec48fdb297d" (UID: "30102cc9-45f8-46f8-bb34-eec48fdb297d"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:23:40.789750 master-0 kubenswrapper[7337]: I0312 18:23:40.789712 7337 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/30102cc9-45f8-46f8-bb34-eec48fdb297d-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 18:23:40.789750 master-0 kubenswrapper[7337]: I0312 18:23:40.789737 7337 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30102cc9-45f8-46f8-bb34-eec48fdb297d-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:23:40.791796 master-0 kubenswrapper[7337]: I0312 18:23:40.791745 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30102cc9-45f8-46f8-bb34-eec48fdb297d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "30102cc9-45f8-46f8-bb34-eec48fdb297d" (UID: "30102cc9-45f8-46f8-bb34-eec48fdb297d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:23:40.891383 master-0 kubenswrapper[7337]: I0312 18:23:40.891104 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30102cc9-45f8-46f8-bb34-eec48fdb297d-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 18:23:41.290483 master-0 kubenswrapper[7337]: I0312 18:23:41.290375 7337 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="ab77ac8c9287ab57ea2a467e8e1f1dee411c2f94f5642f785b9b7baa2542752b" exitCode=1 Mar 12 18:23:41.290483 master-0 kubenswrapper[7337]: I0312 18:23:41.290444 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerDied","Data":"ab77ac8c9287ab57ea2a467e8e1f1dee411c2f94f5642f785b9b7baa2542752b"} Mar 12 18:23:41.290483 master-0 kubenswrapper[7337]: I0312 18:23:41.290477 7337 scope.go:117] "RemoveContainer" containerID="7905195025f5bcb8de89213d12f95430c8990ba81078908548bc95e6c97e2325" Mar 12 18:23:41.291036 master-0 kubenswrapper[7337]: I0312 18:23:41.291013 7337 scope.go:117] "RemoveContainer" containerID="ab77ac8c9287ab57ea2a467e8e1f1dee411c2f94f5642f785b9b7baa2542752b" Mar 12 18:23:41.291269 master-0 kubenswrapper[7337]: E0312 18:23:41.291245 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=bootstrap-kube-scheduler-master-0_kube-system(a1a56802af72ce1aac6b5077f1695ac0)\"" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="a1a56802af72ce1aac6b5077f1695ac0" Mar 12 18:23:41.293497 master-0 kubenswrapper[7337]: I0312 18:23:41.293461 7337 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ljw8b_062f1b21-2ffc-47da-8334-427c3b2a1a90/authentication-operator/0.log" Mar 12 18:23:41.293606 master-0 kubenswrapper[7337]: I0312 18:23:41.293566 7337 generic.go:334] "Generic (PLEG): container finished" podID="062f1b21-2ffc-47da-8334-427c3b2a1a90" containerID="385dbea872bd37bf8e7a76f3902ee88fd0be82523d84cbe8f74298971654ec6b" exitCode=1 Mar 12 18:23:41.293754 master-0 kubenswrapper[7337]: I0312 18:23:41.293702 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" event={"ID":"062f1b21-2ffc-47da-8334-427c3b2a1a90","Type":"ContainerDied","Data":"385dbea872bd37bf8e7a76f3902ee88fd0be82523d84cbe8f74298971654ec6b"} Mar 12 18:23:41.294284 master-0 kubenswrapper[7337]: I0312 18:23:41.294252 7337 scope.go:117] "RemoveContainer" containerID="385dbea872bd37bf8e7a76f3902ee88fd0be82523d84cbe8f74298971654ec6b" Mar 12 18:23:41.295343 master-0 kubenswrapper[7337]: I0312 18:23:41.295321 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"30102cc9-45f8-46f8-bb34-eec48fdb297d","Type":"ContainerDied","Data":"d0fb27537deeb6ada4bd6bd0dc8f77614abe096d108a40275f771f7f507fd43a"} Mar 12 18:23:41.295398 master-0 kubenswrapper[7337]: I0312 18:23:41.295344 7337 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0fb27537deeb6ada4bd6bd0dc8f77614abe096d108a40275f771f7f507fd43a" Mar 12 18:23:41.295429 master-0 kubenswrapper[7337]: I0312 18:23:41.295399 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 12 18:23:41.492413 master-0 kubenswrapper[7337]: I0312 18:23:41.492323 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:41.492413 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:41.492413 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:41.492413 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:41.492990 master-0 kubenswrapper[7337]: I0312 18:23:41.492425 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:42.305037 master-0 kubenswrapper[7337]: I0312 18:23:42.304991 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ljw8b_062f1b21-2ffc-47da-8334-427c3b2a1a90/authentication-operator/0.log" Mar 12 18:23:42.306284 master-0 kubenswrapper[7337]: I0312 18:23:42.305116 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" event={"ID":"062f1b21-2ffc-47da-8334-427c3b2a1a90","Type":"ContainerStarted","Data":"d457f16cde4caae54a1dbf33594546ae3dcb4fddad7eadd6df336af05fa29aa3"} Mar 12 18:23:42.493551 master-0 kubenswrapper[7337]: I0312 18:23:42.493440 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:42.493551 master-0 kubenswrapper[7337]: [-]has-synced 
failed: reason withheld Mar 12 18:23:42.493551 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:42.493551 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:42.493973 master-0 kubenswrapper[7337]: I0312 18:23:42.493576 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:43.492499 master-0 kubenswrapper[7337]: I0312 18:23:43.492361 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:43.492499 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:43.492499 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:43.492499 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:43.492499 master-0 kubenswrapper[7337]: I0312 18:23:43.492504 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:44.493471 master-0 kubenswrapper[7337]: I0312 18:23:44.493381 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:44.493471 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:44.493471 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:44.493471 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:44.494634 master-0 
kubenswrapper[7337]: I0312 18:23:44.493480 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:45.492867 master-0 kubenswrapper[7337]: I0312 18:23:45.492797 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:45.492867 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:45.492867 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:45.492867 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:45.493085 master-0 kubenswrapper[7337]: I0312 18:23:45.492909 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:46.338442 master-0 kubenswrapper[7337]: I0312 18:23:46.338358 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_e2bbd04e-d147-4343-9e5d-300e42de9dbb/installer/0.log" Mar 12 18:23:46.338442 master-0 kubenswrapper[7337]: I0312 18:23:46.338440 7337 generic.go:334] "Generic (PLEG): container finished" podID="e2bbd04e-d147-4343-9e5d-300e42de9dbb" containerID="6ad211f881c1b186d3265c89c0f87451f6acb70c49f4334e2e7867092be6c91a" exitCode=1 Mar 12 18:23:46.339189 master-0 kubenswrapper[7337]: I0312 18:23:46.338484 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" 
event={"ID":"e2bbd04e-d147-4343-9e5d-300e42de9dbb","Type":"ContainerDied","Data":"6ad211f881c1b186d3265c89c0f87451f6acb70c49f4334e2e7867092be6c91a"} Mar 12 18:23:46.492073 master-0 kubenswrapper[7337]: I0312 18:23:46.492007 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:46.492073 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:46.492073 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:46.492073 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:46.492381 master-0 kubenswrapper[7337]: I0312 18:23:46.492079 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:47.489203 master-0 kubenswrapper[7337]: I0312 18:23:47.489148 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:23:47.489836 master-0 kubenswrapper[7337]: I0312 18:23:47.489818 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:23:47.495791 master-0 kubenswrapper[7337]: I0312 18:23:47.495770 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:47.495791 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:47.495791 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:47.495791 
master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:47.496002 master-0 kubenswrapper[7337]: I0312 18:23:47.495980 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:47.499995 master-0 kubenswrapper[7337]: I0312 18:23:47.499970 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:23:47.702365 master-0 kubenswrapper[7337]: I0312 18:23:47.702308 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_e2bbd04e-d147-4343-9e5d-300e42de9dbb/installer/0.log" Mar 12 18:23:47.702653 master-0 kubenswrapper[7337]: I0312 18:23:47.702390 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 18:23:47.805266 master-0 kubenswrapper[7337]: I0312 18:23:47.805180 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2bbd04e-d147-4343-9e5d-300e42de9dbb-kubelet-dir\") pod \"e2bbd04e-d147-4343-9e5d-300e42de9dbb\" (UID: \"e2bbd04e-d147-4343-9e5d-300e42de9dbb\") " Mar 12 18:23:47.805266 master-0 kubenswrapper[7337]: I0312 18:23:47.805262 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2bbd04e-d147-4343-9e5d-300e42de9dbb-kube-api-access\") pod \"e2bbd04e-d147-4343-9e5d-300e42de9dbb\" (UID: \"e2bbd04e-d147-4343-9e5d-300e42de9dbb\") " Mar 12 18:23:47.805673 master-0 kubenswrapper[7337]: I0312 18:23:47.805329 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/e2bbd04e-d147-4343-9e5d-300e42de9dbb-var-lock\") pod \"e2bbd04e-d147-4343-9e5d-300e42de9dbb\" (UID: \"e2bbd04e-d147-4343-9e5d-300e42de9dbb\") " Mar 12 18:23:47.805673 master-0 kubenswrapper[7337]: I0312 18:23:47.805296 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2bbd04e-d147-4343-9e5d-300e42de9dbb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e2bbd04e-d147-4343-9e5d-300e42de9dbb" (UID: "e2bbd04e-d147-4343-9e5d-300e42de9dbb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:23:47.805804 master-0 kubenswrapper[7337]: I0312 18:23:47.805711 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2bbd04e-d147-4343-9e5d-300e42de9dbb-var-lock" (OuterVolumeSpecName: "var-lock") pod "e2bbd04e-d147-4343-9e5d-300e42de9dbb" (UID: "e2bbd04e-d147-4343-9e5d-300e42de9dbb"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:23:47.805804 master-0 kubenswrapper[7337]: I0312 18:23:47.805746 7337 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2bbd04e-d147-4343-9e5d-300e42de9dbb-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:23:47.808136 master-0 kubenswrapper[7337]: I0312 18:23:47.808086 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2bbd04e-d147-4343-9e5d-300e42de9dbb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e2bbd04e-d147-4343-9e5d-300e42de9dbb" (UID: "e2bbd04e-d147-4343-9e5d-300e42de9dbb"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:23:47.906619 master-0 kubenswrapper[7337]: I0312 18:23:47.906448 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2bbd04e-d147-4343-9e5d-300e42de9dbb-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 18:23:47.906619 master-0 kubenswrapper[7337]: I0312 18:23:47.906496 7337 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2bbd04e-d147-4343-9e5d-300e42de9dbb-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 18:23:48.364195 master-0 kubenswrapper[7337]: I0312 18:23:48.364143 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_e2bbd04e-d147-4343-9e5d-300e42de9dbb/installer/0.log" Mar 12 18:23:48.364634 master-0 kubenswrapper[7337]: I0312 18:23:48.364566 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"e2bbd04e-d147-4343-9e5d-300e42de9dbb","Type":"ContainerDied","Data":"f2bc7ecb16572f7c27823d06dc4505f361b41e0cd30f5c3e98a48ae3773ffae2"} Mar 12 18:23:48.364634 master-0 kubenswrapper[7337]: I0312 18:23:48.364625 7337 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2bc7ecb16572f7c27823d06dc4505f361b41e0cd30f5c3e98a48ae3773ffae2" Mar 12 18:23:48.364784 master-0 kubenswrapper[7337]: I0312 18:23:48.364596 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 18:23:48.370902 master-0 kubenswrapper[7337]: I0312 18:23:48.370878 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:23:48.493251 master-0 kubenswrapper[7337]: I0312 18:23:48.493156 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:48.493251 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:48.493251 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:48.493251 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:48.493251 master-0 kubenswrapper[7337]: I0312 18:23:48.493242 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:49.491760 master-0 kubenswrapper[7337]: I0312 18:23:49.491718 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:49.491760 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:49.491760 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:49.491760 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:49.492136 master-0 kubenswrapper[7337]: I0312 18:23:49.492108 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:49.950377 master-0 kubenswrapper[7337]: E0312 18:23:49.950333 7337 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 18:23:50.492716 master-0 kubenswrapper[7337]: I0312 18:23:50.492655 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:50.492716 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:50.492716 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:50.492716 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:50.492997 master-0 kubenswrapper[7337]: I0312 18:23:50.492743 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:51.491623 master-0 kubenswrapper[7337]: I0312 18:23:51.491583 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:51.491623 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:51.491623 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:51.491623 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:51.492202 master-0 kubenswrapper[7337]: I0312 18:23:51.492171 7337 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:52.492479 master-0 kubenswrapper[7337]: I0312 18:23:52.492428 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:52.492479 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:23:52.492479 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:23:52.492479 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:23:52.492479 master-0 kubenswrapper[7337]: I0312 18:23:52.492477 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:23:52.722462 master-0 kubenswrapper[7337]: I0312 18:23:52.722404 7337 scope.go:117] "RemoveContainer" containerID="ab77ac8c9287ab57ea2a467e8e1f1dee411c2f94f5642f785b9b7baa2542752b" Mar 12 18:23:53.400076 master-0 kubenswrapper[7337]: I0312 18:23:53.399991 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"91fc9e27f58a493917f258512c2dfe1c4bf9d4efc52492f0f4d3e21237d1136f"} Mar 12 18:23:53.494363 master-0 kubenswrapper[7337]: I0312 18:23:53.494285 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:23:53.494363 master-0 
kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:53.494363 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:53.494363 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:53.494935 master-0 kubenswrapper[7337]: I0312 18:23:53.494407 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:54.409143 master-0 kubenswrapper[7337]: I0312 18:23:54.409095 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_99f63924-b198-4954-ba14-5c48e8830ec0/installer/0.log"
Mar 12 18:23:54.409272 master-0 kubenswrapper[7337]: I0312 18:23:54.409153 7337 generic.go:334] "Generic (PLEG): container finished" podID="99f63924-b198-4954-ba14-5c48e8830ec0" containerID="bcd9c4470387a4f73246459472597ab7bf839663226c4513e3b54a4697a699f9" exitCode=1
Mar 12 18:23:54.409272 master-0 kubenswrapper[7337]: I0312 18:23:54.409185 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"99f63924-b198-4954-ba14-5c48e8830ec0","Type":"ContainerDied","Data":"bcd9c4470387a4f73246459472597ab7bf839663226c4513e3b54a4697a699f9"}
Mar 12 18:23:54.492458 master-0 kubenswrapper[7337]: I0312 18:23:54.492237 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:54.492458 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:54.492458 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:54.492458 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:54.492458 master-0 kubenswrapper[7337]: I0312 18:23:54.492408 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:54.951776 master-0 kubenswrapper[7337]: I0312 18:23:54.951716 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log"
Mar 12 18:23:54.952825 master-0 kubenswrapper[7337]: I0312 18:23:54.952789 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log"
Mar 12 18:23:54.953716 master-0 kubenswrapper[7337]: I0312 18:23:54.953680 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log"
Mar 12 18:23:54.954135 master-0 kubenswrapper[7337]: I0312 18:23:54.954096 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log"
Mar 12 18:23:54.955500 master-0 kubenswrapper[7337]: I0312 18:23:54.955424 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 12 18:23:55.041152 master-0 kubenswrapper[7337]: I0312 18:23:55.041026 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 12 18:23:55.041152 master-0 kubenswrapper[7337]: I0312 18:23:55.041144 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 12 18:23:55.041443 master-0 kubenswrapper[7337]: I0312 18:23:55.041268 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 12 18:23:55.041443 master-0 kubenswrapper[7337]: I0312 18:23:55.041315 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 12 18:23:55.041443 master-0 kubenswrapper[7337]: I0312 18:23:55.041372 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 12 18:23:55.041443 master-0 kubenswrapper[7337]: I0312 18:23:55.041417 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 12 18:23:55.041812 master-0 kubenswrapper[7337]: I0312 18:23:55.041740 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:23:55.041812 master-0 kubenswrapper[7337]: I0312 18:23:55.041798 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir" (OuterVolumeSpecName: "log-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:23:55.041812 master-0 kubenswrapper[7337]: I0312 18:23:55.041773 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir" (OuterVolumeSpecName: "data-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:23:55.042028 master-0 kubenswrapper[7337]: I0312 18:23:55.041828 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:23:55.042028 master-0 kubenswrapper[7337]: I0312 18:23:55.041837 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:23:55.042028 master-0 kubenswrapper[7337]: I0312 18:23:55.041982 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:23:55.042441 master-0 kubenswrapper[7337]: I0312 18:23:55.042381 7337 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 18:23:55.042441 master-0 kubenswrapper[7337]: I0312 18:23:55.042423 7337 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 18:23:55.042656 master-0 kubenswrapper[7337]: I0312 18:23:55.042446 7337 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") on node \"master-0\" DevicePath \"\""
Mar 12 18:23:55.042656 master-0 kubenswrapper[7337]: I0312 18:23:55.042465 7337 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 18:23:55.042656 master-0 kubenswrapper[7337]: I0312 18:23:55.042483 7337 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 18:23:55.042656 master-0 kubenswrapper[7337]: I0312 18:23:55.042501 7337 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 18:23:55.422535 master-0 kubenswrapper[7337]: I0312 18:23:55.422469 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log"
Mar 12 18:23:55.423688 master-0 kubenswrapper[7337]: I0312 18:23:55.423653 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log"
Mar 12 18:23:55.424503 master-0 kubenswrapper[7337]: I0312 18:23:55.424454 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log"
Mar 12 18:23:55.425041 master-0 kubenswrapper[7337]: I0312 18:23:55.425001 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log"
Mar 12 18:23:55.426580 master-0 kubenswrapper[7337]: I0312 18:23:55.426496 7337 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="36cd53616859a569c9c20c415080914078c0deb1d74d4eb2e8bfe2dd611dc704" exitCode=137
Mar 12 18:23:55.426580 master-0 kubenswrapper[7337]: I0312 18:23:55.426567 7337 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="feb59d6cd6246e5eb3016a3006c4795b8f2234b2b5776c1e7531d520271356d0" exitCode=137
Mar 12 18:23:55.426740 master-0 kubenswrapper[7337]: I0312 18:23:55.426613 7337 scope.go:117] "RemoveContainer" containerID="b5ec126109a6232b9aa6b3936f959178287d28c871336513191b8b615b8c76f4"
Mar 12 18:23:55.426801 master-0 kubenswrapper[7337]: I0312 18:23:55.426763 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 12 18:23:55.448992 master-0 kubenswrapper[7337]: I0312 18:23:55.448311 7337 scope.go:117] "RemoveContainer" containerID="e44a9cc4aee80272b38b5e4941479361f9126b94fc8be3e31ee711d40c88f185"
Mar 12 18:23:55.468074 master-0 kubenswrapper[7337]: I0312 18:23:55.468001 7337 scope.go:117] "RemoveContainer" containerID="85481ac128156495a33b73e0081f7dd95a2a0153c6a78beb32cc5e772332a467"
Mar 12 18:23:55.489678 master-0 kubenswrapper[7337]: I0312 18:23:55.489635 7337 scope.go:117] "RemoveContainer" containerID="36cd53616859a569c9c20c415080914078c0deb1d74d4eb2e8bfe2dd611dc704"
Mar 12 18:23:55.492215 master-0 kubenswrapper[7337]: I0312 18:23:55.492159 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:23:55.492215 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:23:55.492215 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:23:55.492215 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:23:55.492479 master-0 kubenswrapper[7337]: I0312 18:23:55.492223 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:23:55.492479 master-0 kubenswrapper[7337]: I0312 18:23:55.492279 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf"
Mar 12 18:23:55.493095 master-0 kubenswrapper[7337]: I0312 18:23:55.493049 7337 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"2c667ded264eac2ef91dbced4e1d0c451c7fcbbd73ca41017073728ca29d6478"} pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" containerMessage="Container router failed startup probe, will be restarted"
Mar 12 18:23:55.493149 master-0 kubenswrapper[7337]: I0312 18:23:55.493113 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" containerID="cri-o://2c667ded264eac2ef91dbced4e1d0c451c7fcbbd73ca41017073728ca29d6478" gracePeriod=3600
Mar 12 18:23:55.512012 master-0 kubenswrapper[7337]: I0312 18:23:55.511962 7337 scope.go:117] "RemoveContainer" containerID="feb59d6cd6246e5eb3016a3006c4795b8f2234b2b5776c1e7531d520271356d0"
Mar 12 18:23:55.528165 master-0 kubenswrapper[7337]: I0312 18:23:55.528118 7337 scope.go:117] "RemoveContainer" containerID="6396a294b799f6df8bf8204dbbb2ddb526c388ee810b3a7185e4216792bec332"
Mar 12 18:23:55.541156 master-0 kubenswrapper[7337]: I0312 18:23:55.541104 7337 scope.go:117] "RemoveContainer" containerID="842896c0759c15623b7ea51af59e9868a9fd1b13d33758d9aef5d9eaaeb4b330"
Mar 12 18:23:55.563319 master-0 kubenswrapper[7337]: I0312 18:23:55.562813 7337 scope.go:117] "RemoveContainer" containerID="6541e49d2ae9b56107cb844297ee7f71d2445a28f09ad80d8048aba689f8fea0"
Mar 12 18:23:55.579145 master-0 kubenswrapper[7337]: I0312 18:23:55.579092 7337 scope.go:117] "RemoveContainer" containerID="b5ec126109a6232b9aa6b3936f959178287d28c871336513191b8b615b8c76f4"
Mar 12 18:23:55.579829 master-0 kubenswrapper[7337]: E0312 18:23:55.579796 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5ec126109a6232b9aa6b3936f959178287d28c871336513191b8b615b8c76f4\": container with ID starting with b5ec126109a6232b9aa6b3936f959178287d28c871336513191b8b615b8c76f4 not found: ID does not exist" containerID="b5ec126109a6232b9aa6b3936f959178287d28c871336513191b8b615b8c76f4"
Mar 12 18:23:55.579904 master-0 kubenswrapper[7337]: I0312 18:23:55.579839 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5ec126109a6232b9aa6b3936f959178287d28c871336513191b8b615b8c76f4"} err="failed to get container status \"b5ec126109a6232b9aa6b3936f959178287d28c871336513191b8b615b8c76f4\": rpc error: code = NotFound desc = could not find container \"b5ec126109a6232b9aa6b3936f959178287d28c871336513191b8b615b8c76f4\": container with ID starting with b5ec126109a6232b9aa6b3936f959178287d28c871336513191b8b615b8c76f4 not found: ID does not exist"
Mar 12 18:23:55.579904 master-0 kubenswrapper[7337]: I0312 18:23:55.579873 7337 scope.go:117] "RemoveContainer" containerID="e44a9cc4aee80272b38b5e4941479361f9126b94fc8be3e31ee711d40c88f185"
Mar 12 18:23:55.580318 master-0 kubenswrapper[7337]: E0312 18:23:55.580284 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e44a9cc4aee80272b38b5e4941479361f9126b94fc8be3e31ee711d40c88f185\": container with ID starting with e44a9cc4aee80272b38b5e4941479361f9126b94fc8be3e31ee711d40c88f185 not found: ID does not exist" containerID="e44a9cc4aee80272b38b5e4941479361f9126b94fc8be3e31ee711d40c88f185"
Mar 12 18:23:55.580417 master-0 kubenswrapper[7337]: I0312 18:23:55.580322 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e44a9cc4aee80272b38b5e4941479361f9126b94fc8be3e31ee711d40c88f185"} err="failed to get container status \"e44a9cc4aee80272b38b5e4941479361f9126b94fc8be3e31ee711d40c88f185\": rpc error: code = NotFound desc = could not find container \"e44a9cc4aee80272b38b5e4941479361f9126b94fc8be3e31ee711d40c88f185\": container with ID starting with e44a9cc4aee80272b38b5e4941479361f9126b94fc8be3e31ee711d40c88f185 not found: ID does not exist"
Mar 12 18:23:55.580417 master-0 kubenswrapper[7337]: I0312 18:23:55.580382 7337 scope.go:117] "RemoveContainer" containerID="85481ac128156495a33b73e0081f7dd95a2a0153c6a78beb32cc5e772332a467"
Mar 12 18:23:55.581021 master-0 kubenswrapper[7337]: E0312 18:23:55.580803 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85481ac128156495a33b73e0081f7dd95a2a0153c6a78beb32cc5e772332a467\": container with ID starting with 85481ac128156495a33b73e0081f7dd95a2a0153c6a78beb32cc5e772332a467 not found: ID does not exist" containerID="85481ac128156495a33b73e0081f7dd95a2a0153c6a78beb32cc5e772332a467"
Mar 12 18:23:55.581099 master-0 kubenswrapper[7337]: I0312 18:23:55.581023 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85481ac128156495a33b73e0081f7dd95a2a0153c6a78beb32cc5e772332a467"} err="failed to get container status \"85481ac128156495a33b73e0081f7dd95a2a0153c6a78beb32cc5e772332a467\": rpc error: code = NotFound desc = could not find container \"85481ac128156495a33b73e0081f7dd95a2a0153c6a78beb32cc5e772332a467\": container with ID starting with 85481ac128156495a33b73e0081f7dd95a2a0153c6a78beb32cc5e772332a467 not found: ID does not exist"
Mar 12 18:23:55.581099 master-0 kubenswrapper[7337]: I0312 18:23:55.581047 7337 scope.go:117] "RemoveContainer" containerID="36cd53616859a569c9c20c415080914078c0deb1d74d4eb2e8bfe2dd611dc704"
Mar 12 18:23:55.581467 master-0 kubenswrapper[7337]: E0312 18:23:55.581434 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36cd53616859a569c9c20c415080914078c0deb1d74d4eb2e8bfe2dd611dc704\": container with ID starting with 36cd53616859a569c9c20c415080914078c0deb1d74d4eb2e8bfe2dd611dc704 not found: ID does not exist" containerID="36cd53616859a569c9c20c415080914078c0deb1d74d4eb2e8bfe2dd611dc704"
Mar 12 18:23:55.581554 master-0 kubenswrapper[7337]: I0312 18:23:55.581470 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36cd53616859a569c9c20c415080914078c0deb1d74d4eb2e8bfe2dd611dc704"} err="failed to get container status \"36cd53616859a569c9c20c415080914078c0deb1d74d4eb2e8bfe2dd611dc704\": rpc error: code = NotFound desc = could not find container \"36cd53616859a569c9c20c415080914078c0deb1d74d4eb2e8bfe2dd611dc704\": container with ID starting with 36cd53616859a569c9c20c415080914078c0deb1d74d4eb2e8bfe2dd611dc704 not found: ID does not exist"
Mar 12 18:23:55.581554 master-0 kubenswrapper[7337]: I0312 18:23:55.581494 7337 scope.go:117] "RemoveContainer" containerID="feb59d6cd6246e5eb3016a3006c4795b8f2234b2b5776c1e7531d520271356d0"
Mar 12 18:23:55.582129 master-0 kubenswrapper[7337]: E0312 18:23:55.582088 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feb59d6cd6246e5eb3016a3006c4795b8f2234b2b5776c1e7531d520271356d0\": container with ID starting with feb59d6cd6246e5eb3016a3006c4795b8f2234b2b5776c1e7531d520271356d0 not found: ID does not exist" containerID="feb59d6cd6246e5eb3016a3006c4795b8f2234b2b5776c1e7531d520271356d0"
Mar 12 18:23:55.582196 master-0 kubenswrapper[7337]: I0312 18:23:55.582123 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feb59d6cd6246e5eb3016a3006c4795b8f2234b2b5776c1e7531d520271356d0"} err="failed to get container status \"feb59d6cd6246e5eb3016a3006c4795b8f2234b2b5776c1e7531d520271356d0\": rpc error: code = NotFound desc = could not find container \"feb59d6cd6246e5eb3016a3006c4795b8f2234b2b5776c1e7531d520271356d0\": container with ID starting with feb59d6cd6246e5eb3016a3006c4795b8f2234b2b5776c1e7531d520271356d0 not found: ID does not exist"
Mar 12 18:23:55.582196 master-0 kubenswrapper[7337]: I0312 18:23:55.582146 7337 scope.go:117] "RemoveContainer" containerID="6396a294b799f6df8bf8204dbbb2ddb526c388ee810b3a7185e4216792bec332"
Mar 12 18:23:55.582645 master-0 kubenswrapper[7337]: E0312 18:23:55.582604 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6396a294b799f6df8bf8204dbbb2ddb526c388ee810b3a7185e4216792bec332\": container with ID starting with 6396a294b799f6df8bf8204dbbb2ddb526c388ee810b3a7185e4216792bec332 not found: ID does not exist" containerID="6396a294b799f6df8bf8204dbbb2ddb526c388ee810b3a7185e4216792bec332"
Mar 12 18:23:55.582709 master-0 kubenswrapper[7337]: I0312 18:23:55.582642 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6396a294b799f6df8bf8204dbbb2ddb526c388ee810b3a7185e4216792bec332"} err="failed to get container status \"6396a294b799f6df8bf8204dbbb2ddb526c388ee810b3a7185e4216792bec332\": rpc error: code = NotFound desc = could not find container \"6396a294b799f6df8bf8204dbbb2ddb526c388ee810b3a7185e4216792bec332\": container with ID starting with 6396a294b799f6df8bf8204dbbb2ddb526c388ee810b3a7185e4216792bec332 not found: ID does not exist"
Mar 12 18:23:55.582709 master-0 kubenswrapper[7337]: I0312 18:23:55.582665 7337 scope.go:117] "RemoveContainer" containerID="842896c0759c15623b7ea51af59e9868a9fd1b13d33758d9aef5d9eaaeb4b330"
Mar 12 18:23:55.583145 master-0 kubenswrapper[7337]: E0312 18:23:55.583107 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"842896c0759c15623b7ea51af59e9868a9fd1b13d33758d9aef5d9eaaeb4b330\": container with ID starting with 842896c0759c15623b7ea51af59e9868a9fd1b13d33758d9aef5d9eaaeb4b330 not found: ID does not exist" containerID="842896c0759c15623b7ea51af59e9868a9fd1b13d33758d9aef5d9eaaeb4b330"
Mar 12 18:23:55.583205 master-0 kubenswrapper[7337]: I0312 18:23:55.583141 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"842896c0759c15623b7ea51af59e9868a9fd1b13d33758d9aef5d9eaaeb4b330"} err="failed to get container status \"842896c0759c15623b7ea51af59e9868a9fd1b13d33758d9aef5d9eaaeb4b330\": rpc error: code = NotFound desc = could not find container \"842896c0759c15623b7ea51af59e9868a9fd1b13d33758d9aef5d9eaaeb4b330\": container with ID starting with 842896c0759c15623b7ea51af59e9868a9fd1b13d33758d9aef5d9eaaeb4b330 not found: ID does not exist"
Mar 12 18:23:55.583205 master-0 kubenswrapper[7337]: I0312 18:23:55.583169 7337 scope.go:117] "RemoveContainer" containerID="6541e49d2ae9b56107cb844297ee7f71d2445a28f09ad80d8048aba689f8fea0"
Mar 12 18:23:55.583472 master-0 kubenswrapper[7337]: E0312 18:23:55.583441 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6541e49d2ae9b56107cb844297ee7f71d2445a28f09ad80d8048aba689f8fea0\": container with ID starting with 6541e49d2ae9b56107cb844297ee7f71d2445a28f09ad80d8048aba689f8fea0 not found: ID does not exist" containerID="6541e49d2ae9b56107cb844297ee7f71d2445a28f09ad80d8048aba689f8fea0"
Mar 12 18:23:55.583541 master-0 kubenswrapper[7337]: I0312 18:23:55.583476 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6541e49d2ae9b56107cb844297ee7f71d2445a28f09ad80d8048aba689f8fea0"} err="failed to get container status \"6541e49d2ae9b56107cb844297ee7f71d2445a28f09ad80d8048aba689f8fea0\": rpc error: code = NotFound desc = could not find container \"6541e49d2ae9b56107cb844297ee7f71d2445a28f09ad80d8048aba689f8fea0\": container with ID starting with 6541e49d2ae9b56107cb844297ee7f71d2445a28f09ad80d8048aba689f8fea0 not found: ID does not exist"
Mar 12 18:23:55.583541 master-0 kubenswrapper[7337]: I0312 18:23:55.583498 7337 scope.go:117] "RemoveContainer" containerID="b5ec126109a6232b9aa6b3936f959178287d28c871336513191b8b615b8c76f4"
Mar 12 18:23:55.584070 master-0 kubenswrapper[7337]: I0312 18:23:55.583985 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5ec126109a6232b9aa6b3936f959178287d28c871336513191b8b615b8c76f4"} err="failed to get container status \"b5ec126109a6232b9aa6b3936f959178287d28c871336513191b8b615b8c76f4\": rpc error: code = NotFound desc = could not find container \"b5ec126109a6232b9aa6b3936f959178287d28c871336513191b8b615b8c76f4\": container with ID starting with b5ec126109a6232b9aa6b3936f959178287d28c871336513191b8b615b8c76f4 not found: ID does not exist"
Mar 12 18:23:55.584070 master-0 kubenswrapper[7337]: I0312 18:23:55.584020 7337 scope.go:117] "RemoveContainer" containerID="e44a9cc4aee80272b38b5e4941479361f9126b94fc8be3e31ee711d40c88f185"
Mar 12 18:23:55.584332 master-0 kubenswrapper[7337]: I0312 18:23:55.584287 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e44a9cc4aee80272b38b5e4941479361f9126b94fc8be3e31ee711d40c88f185"} err="failed to get container status \"e44a9cc4aee80272b38b5e4941479361f9126b94fc8be3e31ee711d40c88f185\": rpc error: code = NotFound desc = could not find container \"e44a9cc4aee80272b38b5e4941479361f9126b94fc8be3e31ee711d40c88f185\": container with ID starting with e44a9cc4aee80272b38b5e4941479361f9126b94fc8be3e31ee711d40c88f185 not found: ID does not exist"
Mar 12 18:23:55.584409 master-0 kubenswrapper[7337]: I0312 18:23:55.584323 7337 scope.go:117] "RemoveContainer" containerID="85481ac128156495a33b73e0081f7dd95a2a0153c6a78beb32cc5e772332a467"
Mar 12 18:23:55.584954 master-0 kubenswrapper[7337]: I0312 18:23:55.584909 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85481ac128156495a33b73e0081f7dd95a2a0153c6a78beb32cc5e772332a467"} err="failed to get container status \"85481ac128156495a33b73e0081f7dd95a2a0153c6a78beb32cc5e772332a467\": rpc error: code = NotFound desc = could not find container \"85481ac128156495a33b73e0081f7dd95a2a0153c6a78beb32cc5e772332a467\": container with ID starting with 85481ac128156495a33b73e0081f7dd95a2a0153c6a78beb32cc5e772332a467 not found: ID does not exist"
Mar 12 18:23:55.584954 master-0 kubenswrapper[7337]: I0312 18:23:55.584946 7337 scope.go:117] "RemoveContainer" containerID="36cd53616859a569c9c20c415080914078c0deb1d74d4eb2e8bfe2dd611dc704"
Mar 12 18:23:55.585250 master-0 kubenswrapper[7337]: I0312 18:23:55.585208 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36cd53616859a569c9c20c415080914078c0deb1d74d4eb2e8bfe2dd611dc704"} err="failed to get container status \"36cd53616859a569c9c20c415080914078c0deb1d74d4eb2e8bfe2dd611dc704\": rpc error: code = NotFound desc = could not find container \"36cd53616859a569c9c20c415080914078c0deb1d74d4eb2e8bfe2dd611dc704\": container with ID starting with 36cd53616859a569c9c20c415080914078c0deb1d74d4eb2e8bfe2dd611dc704 not found: ID does not exist"
Mar 12 18:23:55.585250 master-0 kubenswrapper[7337]: I0312 18:23:55.585240 7337 scope.go:117] "RemoveContainer" containerID="feb59d6cd6246e5eb3016a3006c4795b8f2234b2b5776c1e7531d520271356d0"
Mar 12 18:23:55.585526 master-0 kubenswrapper[7337]: I0312 18:23:55.585472 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feb59d6cd6246e5eb3016a3006c4795b8f2234b2b5776c1e7531d520271356d0"} err="failed to get container status \"feb59d6cd6246e5eb3016a3006c4795b8f2234b2b5776c1e7531d520271356d0\": rpc error: code = NotFound desc = could not find container \"feb59d6cd6246e5eb3016a3006c4795b8f2234b2b5776c1e7531d520271356d0\": container with ID starting with feb59d6cd6246e5eb3016a3006c4795b8f2234b2b5776c1e7531d520271356d0 not found: ID does not exist"
Mar 12 18:23:55.585526 master-0 kubenswrapper[7337]: I0312 18:23:55.585502 7337 scope.go:117] "RemoveContainer" containerID="6396a294b799f6df8bf8204dbbb2ddb526c388ee810b3a7185e4216792bec332"
Mar 12 18:23:55.585822 master-0 kubenswrapper[7337]: I0312 18:23:55.585777 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6396a294b799f6df8bf8204dbbb2ddb526c388ee810b3a7185e4216792bec332"} err="failed to get container status \"6396a294b799f6df8bf8204dbbb2ddb526c388ee810b3a7185e4216792bec332\": rpc error: code = NotFound desc = could not find container \"6396a294b799f6df8bf8204dbbb2ddb526c388ee810b3a7185e4216792bec332\": container with ID starting with 6396a294b799f6df8bf8204dbbb2ddb526c388ee810b3a7185e4216792bec332 not found: ID does not exist"
Mar 12 18:23:55.585822 master-0 kubenswrapper[7337]: I0312 18:23:55.585813 7337 scope.go:117] "RemoveContainer" containerID="842896c0759c15623b7ea51af59e9868a9fd1b13d33758d9aef5d9eaaeb4b330"
Mar 12 18:23:55.586061 master-0 kubenswrapper[7337]: I0312 18:23:55.586031 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"842896c0759c15623b7ea51af59e9868a9fd1b13d33758d9aef5d9eaaeb4b330"} err="failed to get container status \"842896c0759c15623b7ea51af59e9868a9fd1b13d33758d9aef5d9eaaeb4b330\": rpc error: code = NotFound desc = could not find container \"842896c0759c15623b7ea51af59e9868a9fd1b13d33758d9aef5d9eaaeb4b330\": container with ID starting with 842896c0759c15623b7ea51af59e9868a9fd1b13d33758d9aef5d9eaaeb4b330 not found: ID does not exist"
Mar 12 18:23:55.586110 master-0 kubenswrapper[7337]: I0312 18:23:55.586060 7337 scope.go:117] "RemoveContainer" containerID="6541e49d2ae9b56107cb844297ee7f71d2445a28f09ad80d8048aba689f8fea0"
Mar 12 18:23:55.586301 master-0 kubenswrapper[7337]: I0312 18:23:55.586271 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6541e49d2ae9b56107cb844297ee7f71d2445a28f09ad80d8048aba689f8fea0"} err="failed to get container status \"6541e49d2ae9b56107cb844297ee7f71d2445a28f09ad80d8048aba689f8fea0\": rpc error: code = NotFound desc = could not find container \"6541e49d2ae9b56107cb844297ee7f71d2445a28f09ad80d8048aba689f8fea0\": container with ID starting with 6541e49d2ae9b56107cb844297ee7f71d2445a28f09ad80d8048aba689f8fea0 not found: ID does not exist"
Mar 12 18:23:55.700632 master-0 kubenswrapper[7337]: I0312 18:23:55.700594 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_99f63924-b198-4954-ba14-5c48e8830ec0/installer/0.log"
Mar 12 18:23:55.700806 master-0 kubenswrapper[7337]: I0312 18:23:55.700671 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 18:23:55.730464 master-0 kubenswrapper[7337]: I0312 18:23:55.730428 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" path="/var/lib/kubelet/pods/8e52bef89f4b50e4590a1719bcc5d7e5/volumes"
Mar 12 18:23:55.751707 master-0 kubenswrapper[7337]: I0312 18:23:55.751659 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/99f63924-b198-4954-ba14-5c48e8830ec0-var-lock\") pod \"99f63924-b198-4954-ba14-5c48e8830ec0\" (UID: \"99f63924-b198-4954-ba14-5c48e8830ec0\") "
Mar 12 18:23:55.751955 master-0 kubenswrapper[7337]: I0312 18:23:55.751717 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99f63924-b198-4954-ba14-5c48e8830ec0-kube-api-access\") pod \"99f63924-b198-4954-ba14-5c48e8830ec0\" (UID: \"99f63924-b198-4954-ba14-5c48e8830ec0\") "
Mar 12 18:23:55.751955 master-0 kubenswrapper[7337]: I0312 18:23:55.751750 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99f63924-b198-4954-ba14-5c48e8830ec0-kubelet-dir\") pod \"99f63924-b198-4954-ba14-5c48e8830ec0\" (UID: \"99f63924-b198-4954-ba14-5c48e8830ec0\") "
Mar 12 18:23:55.751955 master-0 kubenswrapper[7337]: I0312 18:23:55.751815 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99f63924-b198-4954-ba14-5c48e8830ec0-var-lock" (OuterVolumeSpecName: "var-lock") pod "99f63924-b198-4954-ba14-5c48e8830ec0" (UID: "99f63924-b198-4954-ba14-5c48e8830ec0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:23:55.752042 master-0 kubenswrapper[7337]: I0312 18:23:55.751957 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99f63924-b198-4954-ba14-5c48e8830ec0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "99f63924-b198-4954-ba14-5c48e8830ec0" (UID: "99f63924-b198-4954-ba14-5c48e8830ec0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:23:55.752308 master-0 kubenswrapper[7337]: I0312 18:23:55.752279 7337 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99f63924-b198-4954-ba14-5c48e8830ec0-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 18:23:55.752346 master-0 kubenswrapper[7337]: I0312 18:23:55.752306 7337 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/99f63924-b198-4954-ba14-5c48e8830ec0-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 18:23:55.754173 master-0 kubenswrapper[7337]: I0312 18:23:55.754146 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99f63924-b198-4954-ba14-5c48e8830ec0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "99f63924-b198-4954-ba14-5c48e8830ec0" (UID: "99f63924-b198-4954-ba14-5c48e8830ec0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:23:55.853861 master-0 kubenswrapper[7337]: I0312 18:23:55.853794 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99f63924-b198-4954-ba14-5c48e8830ec0-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 12 18:23:56.435981 master-0 kubenswrapper[7337]: I0312 18:23:56.435940 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_99f63924-b198-4954-ba14-5c48e8830ec0/installer/0.log"
Mar 12 18:23:56.436579 master-0 kubenswrapper[7337]: I0312 18:23:56.435998 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"99f63924-b198-4954-ba14-5c48e8830ec0","Type":"ContainerDied","Data":"b2aaa458623aaac6f6d633ebf1840bf60a09bfe111c0fe4eefba933212240641"}
Mar 12 18:23:56.436579 master-0 kubenswrapper[7337]: I0312 18:23:56.436037 7337 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2aaa458623aaac6f6d633ebf1840bf60a09bfe111c0fe4eefba933212240641"
Mar 12 18:23:56.436579 master-0 kubenswrapper[7337]: I0312 18:23:56.436108 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 18:23:58.370021 master-0 kubenswrapper[7337]: E0312 18:23:58.369870 7337 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189c2b2810627d62 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:8e52bef89f4b50e4590a1719bcc5d7e5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:23:24.35070909 +0000 UTC m=+604.819310077,LastTimestamp:2026-03-12 18:23:24.35070909 +0000 UTC m=+604.819310077,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:23:59.722126 master-0 kubenswrapper[7337]: I0312 18:23:59.721948 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 12 18:23:59.755913 master-0 kubenswrapper[7337]: I0312 18:23:59.755834 7337 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="901fa277-f91f-4050-9bba-fc5a7d764d16"
Mar 12 18:23:59.755913 master-0 kubenswrapper[7337]: I0312 18:23:59.755892 7337 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="901fa277-f91f-4050-9bba-fc5a7d764d16"
Mar 12 18:23:59.952401 master-0 kubenswrapper[7337]: E0312 18:23:59.952173 7337 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:24:09.952988 master-0 kubenswrapper[7337]: E0312 18:24:09.952858 7337 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:24:15.582510 master-0 kubenswrapper[7337]: I0312 18:24:15.582437 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-hqrqt_8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe/approver/1.log"
Mar 12 18:24:15.583392 master-0 kubenswrapper[7337]: I0312 18:24:15.583132 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-hqrqt_8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe/approver/0.log"
Mar 12 18:24:15.583926 master-0 kubenswrapper[7337]: I0312 18:24:15.583870 7337 generic.go:334] "Generic (PLEG): container finished" podID="8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe" containerID="f3f978addd81177f408450b5c8b37d7927d5e40c6e8538e905d2bc327fb8a086" exitCode=1
Mar 12 18:24:15.584006 master-0 kubenswrapper[7337]: I0312 18:24:15.583928 7337
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-hqrqt" event={"ID":"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe","Type":"ContainerDied","Data":"f3f978addd81177f408450b5c8b37d7927d5e40c6e8538e905d2bc327fb8a086"} Mar 12 18:24:15.584006 master-0 kubenswrapper[7337]: I0312 18:24:15.583978 7337 scope.go:117] "RemoveContainer" containerID="0e62c9f4417a5a9e30eb23f06a18c4ab2b7d089c3e060926866187529335e3de" Mar 12 18:24:15.584889 master-0 kubenswrapper[7337]: I0312 18:24:15.584846 7337 scope.go:117] "RemoveContainer" containerID="f3f978addd81177f408450b5c8b37d7927d5e40c6e8538e905d2bc327fb8a086" Mar 12 18:24:15.585198 master-0 kubenswrapper[7337]: E0312 18:24:15.585152 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"approver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=approver pod=network-node-identity-hqrqt_openshift-network-node-identity(8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe)\"" pod="openshift-network-node-identity/network-node-identity-hqrqt" podUID="8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe" Mar 12 18:24:15.676762 master-0 kubenswrapper[7337]: I0312 18:24:15.676667 7337 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ljw8b container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body= Mar 12 18:24:15.676762 master-0 kubenswrapper[7337]: I0312 18:24:15.676744 7337 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Mar 12 18:24:17.869867 master-0 kubenswrapper[7337]: I0312 
18:24:17.869800 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-hqrqt_8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe/approver/1.log" Mar 12 18:24:19.953541 master-0 kubenswrapper[7337]: E0312 18:24:19.953456 7337 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 18:24:19.953541 master-0 kubenswrapper[7337]: I0312 18:24:19.953542 7337 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 12 18:24:25.676859 master-0 kubenswrapper[7337]: I0312 18:24:25.676406 7337 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ljw8b container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body= Mar 12 18:24:25.676859 master-0 kubenswrapper[7337]: I0312 18:24:25.676505 7337 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Mar 12 18:24:27.723273 master-0 kubenswrapper[7337]: I0312 18:24:27.723183 7337 scope.go:117] "RemoveContainer" containerID="f3f978addd81177f408450b5c8b37d7927d5e40c6e8538e905d2bc327fb8a086" Mar 12 18:24:27.952183 master-0 kubenswrapper[7337]: I0312 18:24:27.952153 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-hqrqt_8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe/approver/1.log" Mar 
12 18:24:27.952953 master-0 kubenswrapper[7337]: I0312 18:24:27.952924 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-hqrqt" event={"ID":"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe","Type":"ContainerStarted","Data":"9cf455cfddd24d2e983906cca2fb3817be9dbdb56368263e8d4789e1a654329a"} Mar 12 18:24:29.954392 master-0 kubenswrapper[7337]: E0312 18:24:29.954299 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Mar 12 18:24:32.375816 master-0 kubenswrapper[7337]: E0312 18:24:32.375602 7337 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189c2adee843576f openshift-kube-controller-manager 10004 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:39c441a05d91070efc538925475b0a44,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:18:10 +0000 UTC,LastTimestamp:2026-03-12 18:23:37.259951911 +0000 UTC m=+617.728552858,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:24:33.759802 master-0 kubenswrapper[7337]: E0312 18:24:33.759719 7337 mirror_client.go:138] "Failed deleting a mirror pod" 
err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 12 18:24:33.760580 master-0 kubenswrapper[7337]: I0312 18:24:33.760540 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 12 18:24:34.003258 master-0 kubenswrapper[7337]: I0312 18:24:34.003155 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"a27b7d74527b56755a6c2c471b3ca3c73b2cfc54277efe40b5551df95fef2671"} Mar 12 18:24:35.012791 master-0 kubenswrapper[7337]: I0312 18:24:35.012739 7337 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="84b05cdad590c2078d906c0b5bbb00f860e5030460386d4b22d12520cb006e5f" exitCode=0 Mar 12 18:24:35.013476 master-0 kubenswrapper[7337]: I0312 18:24:35.012975 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"84b05cdad590c2078d906c0b5bbb00f860e5030460386d4b22d12520cb006e5f"} Mar 12 18:24:35.013716 master-0 kubenswrapper[7337]: I0312 18:24:35.013665 7337 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="901fa277-f91f-4050-9bba-fc5a7d764d16" Mar 12 18:24:35.013779 master-0 kubenswrapper[7337]: I0312 18:24:35.013721 7337 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="901fa277-f91f-4050-9bba-fc5a7d764d16" Mar 12 18:24:35.676088 master-0 kubenswrapper[7337]: I0312 18:24:35.676032 7337 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ljw8b container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body= Mar 
12 18:24:35.676305 master-0 kubenswrapper[7337]: I0312 18:24:35.676112 7337 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Mar 12 18:24:35.676305 master-0 kubenswrapper[7337]: I0312 18:24:35.676183 7337 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:24:35.676959 master-0 kubenswrapper[7337]: I0312 18:24:35.676915 7337 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"d457f16cde4caae54a1dbf33594546ae3dcb4fddad7eadd6df336af05fa29aa3"} pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Mar 12 18:24:35.677015 master-0 kubenswrapper[7337]: I0312 18:24:35.676986 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" containerName="authentication-operator" containerID="cri-o://d457f16cde4caae54a1dbf33594546ae3dcb4fddad7eadd6df336af05fa29aa3" gracePeriod=30 Mar 12 18:24:37.027198 master-0 kubenswrapper[7337]: I0312 18:24:37.027157 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ljw8b_062f1b21-2ffc-47da-8334-427c3b2a1a90/authentication-operator/1.log" Mar 12 18:24:37.028991 master-0 kubenswrapper[7337]: I0312 18:24:37.028936 7337 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ljw8b_062f1b21-2ffc-47da-8334-427c3b2a1a90/authentication-operator/0.log" Mar 12 18:24:37.029122 master-0 kubenswrapper[7337]: I0312 18:24:37.029003 7337 generic.go:334] "Generic (PLEG): container finished" podID="062f1b21-2ffc-47da-8334-427c3b2a1a90" containerID="d457f16cde4caae54a1dbf33594546ae3dcb4fddad7eadd6df336af05fa29aa3" exitCode=255 Mar 12 18:24:37.029122 master-0 kubenswrapper[7337]: I0312 18:24:37.029042 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" event={"ID":"062f1b21-2ffc-47da-8334-427c3b2a1a90","Type":"ContainerDied","Data":"d457f16cde4caae54a1dbf33594546ae3dcb4fddad7eadd6df336af05fa29aa3"} Mar 12 18:24:37.029122 master-0 kubenswrapper[7337]: I0312 18:24:37.029072 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" event={"ID":"062f1b21-2ffc-47da-8334-427c3b2a1a90","Type":"ContainerStarted","Data":"d487597f8c9bef7ba2d4e09c3b10fc797aeda33ac566bd3fb174bfbc58571eff"} Mar 12 18:24:37.029122 master-0 kubenswrapper[7337]: I0312 18:24:37.029089 7337 scope.go:117] "RemoveContainer" containerID="385dbea872bd37bf8e7a76f3902ee88fd0be82523d84cbe8f74298971654ec6b" Mar 12 18:24:37.259603 master-0 kubenswrapper[7337]: I0312 18:24:37.259535 7337 status_manager.go:851] "Failed to get status for pod" podUID="39c441a05d91070efc538925475b0a44" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)" Mar 12 18:24:38.042914 master-0 kubenswrapper[7337]: I0312 18:24:38.042816 7337 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ljw8b_062f1b21-2ffc-47da-8334-427c3b2a1a90/authentication-operator/1.log" Mar 12 18:24:40.155457 master-0 kubenswrapper[7337]: E0312 18:24:40.155380 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 12 18:24:43.079570 master-0 kubenswrapper[7337]: I0312 18:24:43.079474 7337 generic.go:334] "Generic (PLEG): container finished" podID="518ffff8-8119-41be-8b76-ce49d5751254" containerID="2c667ded264eac2ef91dbced4e1d0c451c7fcbbd73ca41017073728ca29d6478" exitCode=0 Mar 12 18:24:43.080331 master-0 kubenswrapper[7337]: I0312 18:24:43.079601 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" event={"ID":"518ffff8-8119-41be-8b76-ce49d5751254","Type":"ContainerDied","Data":"2c667ded264eac2ef91dbced4e1d0c451c7fcbbd73ca41017073728ca29d6478"} Mar 12 18:24:43.080331 master-0 kubenswrapper[7337]: I0312 18:24:43.079747 7337 scope.go:117] "RemoveContainer" containerID="41887ee13b262a1bb752082a6699313d133920e514fab8c14a0b03b0f36c3f44" Mar 12 18:24:44.088303 master-0 kubenswrapper[7337]: I0312 18:24:44.088240 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" event={"ID":"518ffff8-8119-41be-8b76-ce49d5751254","Type":"ContainerStarted","Data":"769cd1e8b5824a316a70fa02fbd72a61b282feb96440b00bc150d9a2b430b6d3"} Mar 12 18:24:44.490390 master-0 kubenswrapper[7337]: I0312 18:24:44.490260 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:24:44.493785 master-0 kubenswrapper[7337]: I0312 18:24:44.493746 7337 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:24:44.493785 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:24:44.493785 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:24:44.493785 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:24:44.493978 master-0 kubenswrapper[7337]: I0312 18:24:44.493795 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:24:45.493351 master-0 kubenswrapper[7337]: I0312 18:24:45.493281 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:24:45.493351 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:24:45.493351 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:24:45.493351 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:24:45.493985 master-0 kubenswrapper[7337]: I0312 18:24:45.493363 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:24:46.491964 master-0 kubenswrapper[7337]: I0312 18:24:46.491850 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 
18:24:46.491964 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:24:46.491964 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:24:46.491964 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:24:46.491964 master-0 kubenswrapper[7337]: I0312 18:24:46.491926 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:24:46.616207 master-0 kubenswrapper[7337]: E0312 18:24:46.616133 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:24:36Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:24:36Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:24:36Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:24:36Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 18:24:47.489836 master-0 kubenswrapper[7337]: I0312 18:24:47.489753 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:24:47.493455 master-0 kubenswrapper[7337]: I0312 18:24:47.493427 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:24:47.493455 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:24:47.493455 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:24:47.493455 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:24:47.493731 master-0 kubenswrapper[7337]: I0312 18:24:47.493706 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:24:48.493702 master-0 kubenswrapper[7337]: I0312 18:24:48.493597 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:24:48.493702 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:24:48.493702 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:24:48.493702 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:24:48.494823 master-0 kubenswrapper[7337]: I0312 18:24:48.493710 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:24:49.492907 master-0 kubenswrapper[7337]: I0312 18:24:49.492808 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:24:49.492907 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:24:49.492907 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:24:49.492907 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:24:49.493611 master-0 kubenswrapper[7337]: I0312 18:24:49.492942 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:24:50.493228 master-0 kubenswrapper[7337]: I0312 18:24:50.493066 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:24:50.493228 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:24:50.493228 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:24:50.493228 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:24:50.493228 master-0 kubenswrapper[7337]: I0312 18:24:50.493127 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:24:50.556234 master-0 kubenswrapper[7337]: E0312 18:24:50.556152 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" 
interval="800ms" Mar 12 18:24:51.491965 master-0 kubenswrapper[7337]: I0312 18:24:51.491915 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:24:51.491965 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:24:51.491965 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:24:51.491965 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:24:51.492231 master-0 kubenswrapper[7337]: I0312 18:24:51.491983 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:24:52.492988 master-0 kubenswrapper[7337]: I0312 18:24:52.492917 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:24:52.492988 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:24:52.492988 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:24:52.492988 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:24:52.492988 master-0 kubenswrapper[7337]: I0312 18:24:52.492972 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:24:53.493609 master-0 kubenswrapper[7337]: I0312 18:24:53.493542 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:24:53.493609 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:24:53.493609 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:24:53.493609 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:24:53.494176 master-0 kubenswrapper[7337]: I0312 18:24:53.493630 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:24:54.493278 master-0 kubenswrapper[7337]: I0312 18:24:54.493180 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:24:54.493278 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:24:54.493278 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:24:54.493278 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:24:54.494499 master-0 kubenswrapper[7337]: I0312 18:24:54.493272 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:24:55.493596 master-0 kubenswrapper[7337]: I0312 18:24:55.493445 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:24:55.493596 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 
12 18:24:55.493596 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:24:55.493596 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:24:55.493596 master-0 kubenswrapper[7337]: I0312 18:24:55.493571 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:24:56.191862 master-0 kubenswrapper[7337]: I0312 18:24:56.191796 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4527l_d94dc349-c5cb-4f12-8e48-867030af4981/ingress-operator/3.log" Mar 12 18:24:56.192952 master-0 kubenswrapper[7337]: I0312 18:24:56.192905 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4527l_d94dc349-c5cb-4f12-8e48-867030af4981/ingress-operator/2.log" Mar 12 18:24:56.193942 master-0 kubenswrapper[7337]: I0312 18:24:56.193881 7337 generic.go:334] "Generic (PLEG): container finished" podID="d94dc349-c5cb-4f12-8e48-867030af4981" containerID="6c80cd8acac8db8cb197069003bd675e1b445a06ed49a0673a80b100538ac223" exitCode=1 Mar 12 18:24:56.193998 master-0 kubenswrapper[7337]: I0312 18:24:56.193962 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" event={"ID":"d94dc349-c5cb-4f12-8e48-867030af4981","Type":"ContainerDied","Data":"6c80cd8acac8db8cb197069003bd675e1b445a06ed49a0673a80b100538ac223"} Mar 12 18:24:56.194080 master-0 kubenswrapper[7337]: I0312 18:24:56.194044 7337 scope.go:117] "RemoveContainer" containerID="a0871263a834ec8eda5a88b5b72f6ff58f93c73ab7cc887038f0189f80ffd4a8" Mar 12 18:24:56.194964 master-0 kubenswrapper[7337]: I0312 18:24:56.194922 7337 scope.go:117] "RemoveContainer" containerID="6c80cd8acac8db8cb197069003bd675e1b445a06ed49a0673a80b100538ac223" Mar 12 
18:24:56.195481 master-0 kubenswrapper[7337]: E0312 18:24:56.195420 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4527l_openshift-ingress-operator(d94dc349-c5cb-4f12-8e48-867030af4981)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" podUID="d94dc349-c5cb-4f12-8e48-867030af4981" Mar 12 18:24:56.493083 master-0 kubenswrapper[7337]: I0312 18:24:56.492736 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:24:56.493083 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:24:56.493083 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:24:56.493083 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:24:56.493083 master-0 kubenswrapper[7337]: I0312 18:24:56.492846 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:24:56.616893 master-0 kubenswrapper[7337]: E0312 18:24:56.616773 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 18:24:57.201986 master-0 kubenswrapper[7337]: I0312 18:24:57.201928 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4527l_d94dc349-c5cb-4f12-8e48-867030af4981/ingress-operator/3.log" Mar 12 
18:24:57.492970 master-0 kubenswrapper[7337]: I0312 18:24:57.492807 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:24:57.492970 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:24:57.492970 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:24:57.492970 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:24:57.493189 master-0 kubenswrapper[7337]: I0312 18:24:57.492958 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:24:58.493280 master-0 kubenswrapper[7337]: I0312 18:24:58.493213 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:24:58.493280 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:24:58.493280 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:24:58.493280 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:24:58.494371 master-0 kubenswrapper[7337]: I0312 18:24:58.494322 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:24:59.492887 master-0 kubenswrapper[7337]: I0312 18:24:59.492708 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:24:59.492887 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:24:59.492887 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:24:59.492887 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:24:59.492887 master-0 kubenswrapper[7337]: I0312 18:24:59.492807 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:00.493056 master-0 kubenswrapper[7337]: I0312 18:25:00.492882 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:00.493056 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:00.493056 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:00.493056 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:00.493056 master-0 kubenswrapper[7337]: I0312 18:25:00.492991 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:01.358364 master-0 kubenswrapper[7337]: E0312 18:25:01.358295 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 12 18:25:01.494366 master-0 kubenswrapper[7337]: I0312 
18:25:01.494265 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:01.494366 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:01.494366 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:01.494366 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:01.494366 master-0 kubenswrapper[7337]: I0312 18:25:01.494357 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:02.492430 master-0 kubenswrapper[7337]: I0312 18:25:02.492273 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:02.492430 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:02.492430 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:02.492430 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:02.492890 master-0 kubenswrapper[7337]: I0312 18:25:02.492438 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:03.494363 master-0 kubenswrapper[7337]: I0312 18:25:03.494271 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:03.494363 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:03.494363 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:03.494363 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:03.495438 master-0 kubenswrapper[7337]: I0312 18:25:03.494376 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:04.495018 master-0 kubenswrapper[7337]: I0312 18:25:04.494892 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:04.495018 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:04.495018 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:04.495018 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:04.496437 master-0 kubenswrapper[7337]: I0312 18:25:04.495077 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:05.493579 master-0 kubenswrapper[7337]: I0312 18:25:05.493461 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:05.493579 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:05.493579 master-0 kubenswrapper[7337]: [+]process-running ok 
Mar 12 18:25:05.493579 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:05.493579 master-0 kubenswrapper[7337]: I0312 18:25:05.493576 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:06.379489 master-0 kubenswrapper[7337]: E0312 18:25:06.379301 7337 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=< Mar 12 18:25:06.379489 master-0 kubenswrapper[7337]: &Event{ObjectMeta:{router-default-79f8cd6fdd-79bhf.189c2aecf55c8496 openshift-ingress 11066 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress,Name:router-default-79f8cd6fdd-79bhf,UID:518ffff8-8119-41be-8b76-ce49d5751254,APIVersion:v1,ResourceVersion:10568,FieldPath:spec.containers{router},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Mar 12 18:25:06.379489 master-0 kubenswrapper[7337]: body: [-]backend-http failed: reason withheld Mar 12 18:25:06.379489 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:06.379489 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:06.379489 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:06.379489 master-0 kubenswrapper[7337]: Mar 12 18:25:06.379489 master-0 kubenswrapper[7337]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:19:10 +0000 UTC,LastTimestamp:2026-03-12 18:23:37.491808051 +0000 UTC m=+617.960409028,Count:222,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Mar 12 18:25:06.379489 master-0 kubenswrapper[7337]: > Mar 12 18:25:06.492356 master-0 
kubenswrapper[7337]: I0312 18:25:06.492252 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:06.492356 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:06.492356 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:06.492356 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:06.492356 master-0 kubenswrapper[7337]: I0312 18:25:06.492343 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:06.617723 master-0 kubenswrapper[7337]: E0312 18:25:06.617609 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 18:25:07.497890 master-0 kubenswrapper[7337]: I0312 18:25:07.497819 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:07.497890 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:07.497890 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:07.497890 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:07.498509 master-0 kubenswrapper[7337]: I0312 18:25:07.497912 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" 
podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:07.722679 master-0 kubenswrapper[7337]: I0312 18:25:07.722626 7337 scope.go:117] "RemoveContainer" containerID="6c80cd8acac8db8cb197069003bd675e1b445a06ed49a0673a80b100538ac223" Mar 12 18:25:07.722932 master-0 kubenswrapper[7337]: E0312 18:25:07.722886 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4527l_openshift-ingress-operator(d94dc349-c5cb-4f12-8e48-867030af4981)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" podUID="d94dc349-c5cb-4f12-8e48-867030af4981" Mar 12 18:25:08.493501 master-0 kubenswrapper[7337]: I0312 18:25:08.493394 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:08.493501 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:08.493501 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:08.493501 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:08.493501 master-0 kubenswrapper[7337]: I0312 18:25:08.493500 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:09.016259 master-0 kubenswrapper[7337]: E0312 18:25:09.016167 7337 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 12 18:25:09.493481 
master-0 kubenswrapper[7337]: I0312 18:25:09.493365 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:09.493481 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:09.493481 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:09.493481 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:09.493850 master-0 kubenswrapper[7337]: I0312 18:25:09.493567 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:10.317179 master-0 kubenswrapper[7337]: I0312 18:25:10.310425 7337 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="a4e1a60fb2a5676e0c3a007005c7ba4c139f5bc8097de545710cc25465fe8dd1" exitCode=0 Mar 12 18:25:10.317179 master-0 kubenswrapper[7337]: I0312 18:25:10.310472 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"a4e1a60fb2a5676e0c3a007005c7ba4c139f5bc8097de545710cc25465fe8dd1"} Mar 12 18:25:10.317179 master-0 kubenswrapper[7337]: I0312 18:25:10.311288 7337 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="901fa277-f91f-4050-9bba-fc5a7d764d16" Mar 12 18:25:10.317179 master-0 kubenswrapper[7337]: I0312 18:25:10.311361 7337 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="901fa277-f91f-4050-9bba-fc5a7d764d16" Mar 12 18:25:10.522061 master-0 kubenswrapper[7337]: I0312 18:25:10.521865 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:10.522061 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:10.522061 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:10.522061 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:10.522061 master-0 kubenswrapper[7337]: I0312 18:25:10.521938 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:11.492004 master-0 kubenswrapper[7337]: I0312 18:25:11.491958 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:11.492004 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:11.492004 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:11.492004 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:11.492684 master-0 kubenswrapper[7337]: I0312 18:25:11.492015 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:12.328937 master-0 kubenswrapper[7337]: I0312 18:25:12.328881 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-mb6tc_d1b3859c-20a1-4a1c-8508-86ed843768f5/manager/1.log" Mar 12 18:25:12.330645 master-0 kubenswrapper[7337]: I0312 18:25:12.330578 7337 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-mb6tc_d1b3859c-20a1-4a1c-8508-86ed843768f5/manager/0.log" Mar 12 18:25:12.331397 master-0 kubenswrapper[7337]: I0312 18:25:12.331341 7337 generic.go:334] "Generic (PLEG): container finished" podID="d1b3859c-20a1-4a1c-8508-86ed843768f5" containerID="96aef0daff5b4b065b760a53564e1e05a4751150ad79dc2f9bc551c5dafe3e48" exitCode=1 Mar 12 18:25:12.331749 master-0 kubenswrapper[7337]: I0312 18:25:12.331396 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" event={"ID":"d1b3859c-20a1-4a1c-8508-86ed843768f5","Type":"ContainerDied","Data":"96aef0daff5b4b065b760a53564e1e05a4751150ad79dc2f9bc551c5dafe3e48"} Mar 12 18:25:12.332002 master-0 kubenswrapper[7337]: I0312 18:25:12.331972 7337 scope.go:117] "RemoveContainer" containerID="736a8404a1683d56f8dbc8f71de47cc325d858c0409febcb5d511b27a322ce13" Mar 12 18:25:12.332833 master-0 kubenswrapper[7337]: I0312 18:25:12.332789 7337 scope.go:117] "RemoveContainer" containerID="96aef0daff5b4b065b760a53564e1e05a4751150ad79dc2f9bc551c5dafe3e48" Mar 12 18:25:12.333229 master-0 kubenswrapper[7337]: E0312 18:25:12.333177 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-7f8b8b6f4c-mb6tc_openshift-catalogd(d1b3859c-20a1-4a1c-8508-86ed843768f5)\"" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" podUID="d1b3859c-20a1-4a1c-8508-86ed843768f5" Mar 12 18:25:12.336970 master-0 kubenswrapper[7337]: I0312 18:25:12.336916 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-z9srl_ee4c1949-96b4-4444-9675-9df1d46f681e/cluster-cloud-controller-manager/0.log" Mar 12 18:25:12.337126 master-0 kubenswrapper[7337]: I0312 
18:25:12.336989 7337 generic.go:334] "Generic (PLEG): container finished" podID="ee4c1949-96b4-4444-9675-9df1d46f681e" containerID="55f44f89a0ddfa17022efb42d5b69490ffb4f27463e27a43d9ad2629d1fed3e4" exitCode=1 Mar 12 18:25:12.337126 master-0 kubenswrapper[7337]: I0312 18:25:12.337069 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" event={"ID":"ee4c1949-96b4-4444-9675-9df1d46f681e","Type":"ContainerDied","Data":"55f44f89a0ddfa17022efb42d5b69490ffb4f27463e27a43d9ad2629d1fed3e4"} Mar 12 18:25:12.337918 master-0 kubenswrapper[7337]: I0312 18:25:12.337863 7337 scope.go:117] "RemoveContainer" containerID="55f44f89a0ddfa17022efb42d5b69490ffb4f27463e27a43d9ad2629d1fed3e4" Mar 12 18:25:12.492814 master-0 kubenswrapper[7337]: I0312 18:25:12.492719 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:12.492814 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:12.492814 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:12.492814 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:12.492814 master-0 kubenswrapper[7337]: I0312 18:25:12.492805 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:12.969564 master-0 kubenswrapper[7337]: E0312 18:25:12.969415 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" interval="3.2s" Mar 12 18:25:13.348559 master-0 kubenswrapper[7337]: I0312 18:25:13.348470 7337 generic.go:334] "Generic (PLEG): container finished" podID="4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64" containerID="7ae4d8d774c3ae2fa8787557fe823a911ba5793827a61120b252083df1bc5f38" exitCode=0 Mar 12 18:25:13.348854 master-0 kubenswrapper[7337]: I0312 18:25:13.348625 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" event={"ID":"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64","Type":"ContainerDied","Data":"7ae4d8d774c3ae2fa8787557fe823a911ba5793827a61120b252083df1bc5f38"} Mar 12 18:25:13.348854 master-0 kubenswrapper[7337]: I0312 18:25:13.348690 7337 scope.go:117] "RemoveContainer" containerID="91ac1142b73c6d3658240c3848ad3ec4d35a6a2c1e366a3eec630ba38825ae3c" Mar 12 18:25:13.349754 master-0 kubenswrapper[7337]: I0312 18:25:13.349673 7337 scope.go:117] "RemoveContainer" containerID="7ae4d8d774c3ae2fa8787557fe823a911ba5793827a61120b252083df1bc5f38" Mar 12 18:25:13.350118 master-0 kubenswrapper[7337]: E0312 18:25:13.350043 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-64bf9778cb-clkx5_openshift-marketplace(4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64)\"" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" podUID="4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64" Mar 12 18:25:13.353852 master-0 kubenswrapper[7337]: I0312 18:25:13.353784 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-mb6tc_d1b3859c-20a1-4a1c-8508-86ed843768f5/manager/1.log" Mar 12 18:25:13.359561 master-0 kubenswrapper[7337]: I0312 18:25:13.359481 7337 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-z9srl_ee4c1949-96b4-4444-9675-9df1d46f681e/cluster-cloud-controller-manager/0.log" Mar 12 18:25:13.359706 master-0 kubenswrapper[7337]: I0312 18:25:13.359652 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" event={"ID":"ee4c1949-96b4-4444-9675-9df1d46f681e","Type":"ContainerStarted","Data":"166d33c879a446829e13a69ad59fb1009d5c6a372d20c1ab8cea54aee948f469"} Mar 12 18:25:13.493122 master-0 kubenswrapper[7337]: I0312 18:25:13.493034 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:13.493122 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:13.493122 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:13.493122 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:13.494883 master-0 kubenswrapper[7337]: I0312 18:25:13.493125 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:14.375185 master-0 kubenswrapper[7337]: I0312 18:25:14.375140 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-z9srl_ee4c1949-96b4-4444-9675-9df1d46f681e/config-sync-controllers/0.log" Mar 12 18:25:14.376329 master-0 kubenswrapper[7337]: I0312 18:25:14.376263 7337 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-z9srl_ee4c1949-96b4-4444-9675-9df1d46f681e/cluster-cloud-controller-manager/0.log" Mar 12 18:25:14.376559 master-0 kubenswrapper[7337]: I0312 18:25:14.376377 7337 generic.go:334] "Generic (PLEG): container finished" podID="ee4c1949-96b4-4444-9675-9df1d46f681e" containerID="488293f6a0a5ffc939b73e8e291035b18dd6b6d9c6030cee524df83362585aa5" exitCode=1 Mar 12 18:25:14.376697 master-0 kubenswrapper[7337]: I0312 18:25:14.376504 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" event={"ID":"ee4c1949-96b4-4444-9675-9df1d46f681e","Type":"ContainerDied","Data":"488293f6a0a5ffc939b73e8e291035b18dd6b6d9c6030cee524df83362585aa5"} Mar 12 18:25:14.377531 master-0 kubenswrapper[7337]: I0312 18:25:14.377460 7337 scope.go:117] "RemoveContainer" containerID="488293f6a0a5ffc939b73e8e291035b18dd6b6d9c6030cee524df83362585aa5" Mar 12 18:25:14.492722 master-0 kubenswrapper[7337]: I0312 18:25:14.492671 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:14.492722 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:14.492722 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:14.492722 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:14.493159 master-0 kubenswrapper[7337]: I0312 18:25:14.493125 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:15.391851 master-0 kubenswrapper[7337]: I0312 
18:25:15.391763 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-z9srl_ee4c1949-96b4-4444-9675-9df1d46f681e/config-sync-controllers/0.log" Mar 12 18:25:15.392992 master-0 kubenswrapper[7337]: I0312 18:25:15.392932 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-z9srl_ee4c1949-96b4-4444-9675-9df1d46f681e/cluster-cloud-controller-manager/0.log" Mar 12 18:25:15.393060 master-0 kubenswrapper[7337]: I0312 18:25:15.393033 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" event={"ID":"ee4c1949-96b4-4444-9675-9df1d46f681e","Type":"ContainerStarted","Data":"30d038e9f503f9a5193512a13a277001e75dbbce7e9f307d783a04442b934b73"} Mar 12 18:25:15.492751 master-0 kubenswrapper[7337]: I0312 18:25:15.492650 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:15.492751 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:15.492751 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:15.492751 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:15.494108 master-0 kubenswrapper[7337]: I0312 18:25:15.492752 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:15.677830 master-0 kubenswrapper[7337]: I0312 18:25:15.677565 7337 patch_prober.go:28] interesting 
pod/authentication-operator-7c6989d6c4-ljw8b container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body= Mar 12 18:25:15.677830 master-0 kubenswrapper[7337]: I0312 18:25:15.677704 7337 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Mar 12 18:25:16.492949 master-0 kubenswrapper[7337]: I0312 18:25:16.492853 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:16.492949 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:16.492949 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:16.492949 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:16.493708 master-0 kubenswrapper[7337]: I0312 18:25:16.492950 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:16.619617 master-0 kubenswrapper[7337]: E0312 18:25:16.619469 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 18:25:17.492446 master-0 kubenswrapper[7337]: I0312 
18:25:17.492360 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:17.492446 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:17.492446 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:17.492446 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:17.493141 master-0 kubenswrapper[7337]: I0312 18:25:17.492477 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:18.493280 master-0 kubenswrapper[7337]: I0312 18:25:18.493168 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:18.493280 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:18.493280 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:18.493280 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:18.493280 master-0 kubenswrapper[7337]: I0312 18:25:18.493269 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:19.492673 master-0 kubenswrapper[7337]: I0312 18:25:19.492576 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:19.492673 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:19.492673 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:19.492673 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:19.492673 master-0 kubenswrapper[7337]: I0312 18:25:19.492638 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:20.493505 master-0 kubenswrapper[7337]: I0312 18:25:20.493366 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:20.493505 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:20.493505 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:20.493505 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:20.493505 master-0 kubenswrapper[7337]: I0312 18:25:20.493436 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:20.665288 master-0 kubenswrapper[7337]: I0312 18:25:20.665218 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:25:20.666217 master-0 kubenswrapper[7337]: I0312 18:25:20.666174 7337 scope.go:117] "RemoveContainer" containerID="96aef0daff5b4b065b760a53564e1e05a4751150ad79dc2f9bc551c5dafe3e48" Mar 12 18:25:20.666544 master-0 kubenswrapper[7337]: E0312 18:25:20.666477 7337 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-7f8b8b6f4c-mb6tc_openshift-catalogd(d1b3859c-20a1-4a1c-8508-86ed843768f5)\"" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" podUID="d1b3859c-20a1-4a1c-8508-86ed843768f5" Mar 12 18:25:21.492222 master-0 kubenswrapper[7337]: I0312 18:25:21.492186 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:21.492222 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:21.492222 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:21.492222 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:21.492537 master-0 kubenswrapper[7337]: I0312 18:25:21.492247 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:22.493185 master-0 kubenswrapper[7337]: I0312 18:25:22.492971 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:22.493185 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:22.493185 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:22.493185 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:22.493185 master-0 kubenswrapper[7337]: I0312 18:25:22.493027 7337 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:22.722879 master-0 kubenswrapper[7337]: I0312 18:25:22.722821 7337 scope.go:117] "RemoveContainer" containerID="6c80cd8acac8db8cb197069003bd675e1b445a06ed49a0673a80b100538ac223" Mar 12 18:25:22.723124 master-0 kubenswrapper[7337]: E0312 18:25:22.723031 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4527l_openshift-ingress-operator(d94dc349-c5cb-4f12-8e48-867030af4981)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" podUID="d94dc349-c5cb-4f12-8e48-867030af4981" Mar 12 18:25:22.768942 master-0 kubenswrapper[7337]: I0312 18:25:22.768760 7337 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:25:22.768942 master-0 kubenswrapper[7337]: I0312 18:25:22.768863 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:25:22.769277 master-0 kubenswrapper[7337]: I0312 18:25:22.769242 7337 scope.go:117] "RemoveContainer" containerID="7ae4d8d774c3ae2fa8787557fe823a911ba5793827a61120b252083df1bc5f38" Mar 12 18:25:22.769446 master-0 kubenswrapper[7337]: E0312 18:25:22.769396 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-64bf9778cb-clkx5_openshift-marketplace(4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64)\"" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" podUID="4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64" Mar 
12 18:25:23.446970 master-0 kubenswrapper[7337]: I0312 18:25:23.446925 7337 scope.go:117] "RemoveContainer" containerID="7ae4d8d774c3ae2fa8787557fe823a911ba5793827a61120b252083df1bc5f38" Mar 12 18:25:23.498695 master-0 kubenswrapper[7337]: I0312 18:25:23.498637 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:23.498695 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:23.498695 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:23.498695 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:23.499306 master-0 kubenswrapper[7337]: I0312 18:25:23.498722 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:24.458891 master-0 kubenswrapper[7337]: I0312 18:25:24.458756 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" event={"ID":"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64","Type":"ContainerStarted","Data":"b8cba7d41ad7783312a2b3d8c3bcfeff5e159977479443c4c61ad7cf5ef8a846"} Mar 12 18:25:24.459365 master-0 kubenswrapper[7337]: I0312 18:25:24.459316 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:25:24.462805 master-0 kubenswrapper[7337]: I0312 18:25:24.462722 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:25:24.493178 master-0 kubenswrapper[7337]: I0312 18:25:24.493076 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:24.493178 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:24.493178 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:24.493178 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:24.493502 master-0 kubenswrapper[7337]: I0312 18:25:24.493182 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:25.493968 master-0 kubenswrapper[7337]: I0312 18:25:25.493892 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:25.493968 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:25.493968 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:25.493968 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:25.495400 master-0 kubenswrapper[7337]: I0312 18:25:25.495329 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:25.676403 master-0 kubenswrapper[7337]: I0312 18:25:25.676302 7337 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ljw8b container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection 
refused" start-of-body= Mar 12 18:25:25.676403 master-0 kubenswrapper[7337]: I0312 18:25:25.676395 7337 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Mar 12 18:25:26.170368 master-0 kubenswrapper[7337]: E0312 18:25:26.170277 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Mar 12 18:25:26.494584 master-0 kubenswrapper[7337]: I0312 18:25:26.494308 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:26.494584 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:26.494584 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:26.494584 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:26.496052 master-0 kubenswrapper[7337]: I0312 18:25:26.495024 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:26.620852 master-0 kubenswrapper[7337]: E0312 18:25:26.620678 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 18:25:26.620852 master-0 kubenswrapper[7337]: E0312 18:25:26.620726 7337 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 18:25:27.497241 master-0 kubenswrapper[7337]: I0312 18:25:27.497165 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:27.497241 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:27.497241 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:27.497241 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:27.497241 master-0 kubenswrapper[7337]: I0312 18:25:27.497222 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:28.494272 master-0 kubenswrapper[7337]: I0312 18:25:28.494158 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:28.494272 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:28.494272 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:28.494272 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:28.494272 master-0 kubenswrapper[7337]: I0312 18:25:28.494252 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 12 18:25:28.495902 master-0 kubenswrapper[7337]: I0312 18:25:28.495836 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-9nzsn_b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652/manager/1.log" Mar 12 18:25:28.497771 master-0 kubenswrapper[7337]: I0312 18:25:28.497701 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-9nzsn_b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652/manager/0.log" Mar 12 18:25:28.498689 master-0 kubenswrapper[7337]: I0312 18:25:28.497798 7337 generic.go:334] "Generic (PLEG): container finished" podID="b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652" containerID="68e99d6ea6e8d10062ce5f4ba00f8d6b01cd9c70d46c0f8c6da206028e5ae034" exitCode=1 Mar 12 18:25:28.498689 master-0 kubenswrapper[7337]: I0312 18:25:28.497855 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" event={"ID":"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652","Type":"ContainerDied","Data":"68e99d6ea6e8d10062ce5f4ba00f8d6b01cd9c70d46c0f8c6da206028e5ae034"} Mar 12 18:25:28.498689 master-0 kubenswrapper[7337]: I0312 18:25:28.497915 7337 scope.go:117] "RemoveContainer" containerID="a15650ff0279cc1eb053cd0564e886ecaf1299636ec1285faa1562a29a442c43" Mar 12 18:25:28.499253 master-0 kubenswrapper[7337]: I0312 18:25:28.499031 7337 scope.go:117] "RemoveContainer" containerID="68e99d6ea6e8d10062ce5f4ba00f8d6b01cd9c70d46c0f8c6da206028e5ae034" Mar 12 18:25:28.499729 master-0 kubenswrapper[7337]: E0312 18:25:28.499673 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-6598bfb6c4-9nzsn_openshift-operator-controller(b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652)\"" 
pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" podUID="b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652" Mar 12 18:25:29.491949 master-0 kubenswrapper[7337]: I0312 18:25:29.491883 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:29.491949 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:29.491949 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:29.491949 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:29.491949 master-0 kubenswrapper[7337]: I0312 18:25:29.491944 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:29.506746 master-0 kubenswrapper[7337]: I0312 18:25:29.506698 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-9nzsn_b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652/manager/1.log" Mar 12 18:25:30.492903 master-0 kubenswrapper[7337]: I0312 18:25:30.492763 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:30.492903 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:30.492903 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:30.492903 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:30.492903 master-0 kubenswrapper[7337]: I0312 18:25:30.492866 7337 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:30.665450 master-0 kubenswrapper[7337]: I0312 18:25:30.665366 7337 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:25:30.666318 master-0 kubenswrapper[7337]: I0312 18:25:30.666235 7337 scope.go:117] "RemoveContainer" containerID="96aef0daff5b4b065b760a53564e1e05a4751150ad79dc2f9bc551c5dafe3e48" Mar 12 18:25:31.353810 master-0 kubenswrapper[7337]: I0312 18:25:31.353658 7337 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:25:31.353810 master-0 kubenswrapper[7337]: I0312 18:25:31.353813 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:25:31.354866 master-0 kubenswrapper[7337]: I0312 18:25:31.354812 7337 scope.go:117] "RemoveContainer" containerID="68e99d6ea6e8d10062ce5f4ba00f8d6b01cd9c70d46c0f8c6da206028e5ae034" Mar 12 18:25:31.355318 master-0 kubenswrapper[7337]: E0312 18:25:31.355257 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-6598bfb6c4-9nzsn_openshift-operator-controller(b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652)\"" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" podUID="b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652" Mar 12 18:25:31.492757 master-0 kubenswrapper[7337]: I0312 18:25:31.492687 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:31.492757 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:31.492757 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:31.492757 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:31.493102 master-0 kubenswrapper[7337]: I0312 18:25:31.492777 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:31.522848 master-0 kubenswrapper[7337]: I0312 18:25:31.522776 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-mb6tc_d1b3859c-20a1-4a1c-8508-86ed843768f5/manager/1.log" Mar 12 18:25:31.523370 master-0 kubenswrapper[7337]: I0312 18:25:31.523338 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" event={"ID":"d1b3859c-20a1-4a1c-8508-86ed843768f5","Type":"ContainerStarted","Data":"9d49361e23fb3f563d72b40f76bf3dee0993f9d5027718dcbf6e1cd11e5ec93d"} Mar 12 18:25:31.523609 master-0 kubenswrapper[7337]: I0312 18:25:31.523584 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:25:31.525718 master-0 kubenswrapper[7337]: I0312 18:25:31.525674 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-2ltx9_bce831df-c604-4608-a24e-b14d62c5287a/snapshot-controller/2.log" Mar 12 18:25:31.526300 master-0 kubenswrapper[7337]: I0312 18:25:31.526278 7337 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-2ltx9_bce831df-c604-4608-a24e-b14d62c5287a/snapshot-controller/1.log" Mar 12 18:25:31.526335 master-0 kubenswrapper[7337]: I0312 18:25:31.526319 7337 generic.go:334] "Generic (PLEG): container finished" podID="bce831df-c604-4608-a24e-b14d62c5287a" containerID="b960a5afb85862022f06f7d612fd0f8b4c4023ddaec1e0fe6a309ca8f51ad930" exitCode=1 Mar 12 18:25:31.526366 master-0 kubenswrapper[7337]: I0312 18:25:31.526344 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" event={"ID":"bce831df-c604-4608-a24e-b14d62c5287a","Type":"ContainerDied","Data":"b960a5afb85862022f06f7d612fd0f8b4c4023ddaec1e0fe6a309ca8f51ad930"} Mar 12 18:25:31.526398 master-0 kubenswrapper[7337]: I0312 18:25:31.526377 7337 scope.go:117] "RemoveContainer" containerID="d836c15d1f62aaf6703c6affbd63fc3695d34670b745bd3f6f244a838540e38e" Mar 12 18:25:31.527403 master-0 kubenswrapper[7337]: I0312 18:25:31.527351 7337 scope.go:117] "RemoveContainer" containerID="b960a5afb85862022f06f7d612fd0f8b4c4023ddaec1e0fe6a309ca8f51ad930" Mar 12 18:25:31.527907 master-0 kubenswrapper[7337]: E0312 18:25:31.527844 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-2ltx9_openshift-cluster-storage-operator(bce831df-c604-4608-a24e-b14d62c5287a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" podUID="bce831df-c604-4608-a24e-b14d62c5287a" Mar 12 18:25:32.493472 master-0 kubenswrapper[7337]: I0312 18:25:32.493397 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Mar 12 18:25:32.493472 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:32.493472 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:32.493472 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:32.494562 master-0 kubenswrapper[7337]: I0312 18:25:32.493488 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:32.541573 master-0 kubenswrapper[7337]: I0312 18:25:32.541458 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-2ltx9_bce831df-c604-4608-a24e-b14d62c5287a/snapshot-controller/2.log" Mar 12 18:25:33.492862 master-0 kubenswrapper[7337]: I0312 18:25:33.492784 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:33.492862 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:33.492862 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:33.492862 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:33.493289 master-0 kubenswrapper[7337]: I0312 18:25:33.492877 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:34.492636 master-0 kubenswrapper[7337]: I0312 18:25:34.492481 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:34.492636 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:34.492636 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:34.492636 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:34.492636 master-0 kubenswrapper[7337]: I0312 18:25:34.492579 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:35.493320 master-0 kubenswrapper[7337]: I0312 18:25:35.493267 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:35.493320 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:35.493320 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:35.493320 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:35.494283 master-0 kubenswrapper[7337]: I0312 18:25:35.494245 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:35.675821 master-0 kubenswrapper[7337]: I0312 18:25:35.675754 7337 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ljw8b container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body= Mar 12 18:25:35.676029 master-0 kubenswrapper[7337]: I0312 18:25:35.675820 
7337 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Mar 12 18:25:35.676029 master-0 kubenswrapper[7337]: I0312 18:25:35.675871 7337 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:25:35.676568 master-0 kubenswrapper[7337]: I0312 18:25:35.676528 7337 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"d487597f8c9bef7ba2d4e09c3b10fc797aeda33ac566bd3fb174bfbc58571eff"} pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Mar 12 18:25:35.676653 master-0 kubenswrapper[7337]: I0312 18:25:35.676579 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" containerName="authentication-operator" containerID="cri-o://d487597f8c9bef7ba2d4e09c3b10fc797aeda33ac566bd3fb174bfbc58571eff" gracePeriod=30 Mar 12 18:25:36.493160 master-0 kubenswrapper[7337]: I0312 18:25:36.493043 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:36.493160 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:36.493160 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:36.493160 master-0 
kubenswrapper[7337]: healthz check failed Mar 12 18:25:36.494022 master-0 kubenswrapper[7337]: I0312 18:25:36.493185 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:36.591567 master-0 kubenswrapper[7337]: I0312 18:25:36.591419 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ljw8b_062f1b21-2ffc-47da-8334-427c3b2a1a90/authentication-operator/2.log" Mar 12 18:25:36.592415 master-0 kubenswrapper[7337]: I0312 18:25:36.592360 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ljw8b_062f1b21-2ffc-47da-8334-427c3b2a1a90/authentication-operator/1.log" Mar 12 18:25:36.592563 master-0 kubenswrapper[7337]: I0312 18:25:36.592445 7337 generic.go:334] "Generic (PLEG): container finished" podID="062f1b21-2ffc-47da-8334-427c3b2a1a90" containerID="d487597f8c9bef7ba2d4e09c3b10fc797aeda33ac566bd3fb174bfbc58571eff" exitCode=255 Mar 12 18:25:36.592563 master-0 kubenswrapper[7337]: I0312 18:25:36.592501 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" event={"ID":"062f1b21-2ffc-47da-8334-427c3b2a1a90","Type":"ContainerDied","Data":"d487597f8c9bef7ba2d4e09c3b10fc797aeda33ac566bd3fb174bfbc58571eff"} Mar 12 18:25:36.592720 master-0 kubenswrapper[7337]: I0312 18:25:36.592602 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" event={"ID":"062f1b21-2ffc-47da-8334-427c3b2a1a90","Type":"ContainerStarted","Data":"fd76ce4ce8b52711507badb364936e780f2befa57cd25718299883f40359fa86"} Mar 12 18:25:36.592720 master-0 kubenswrapper[7337]: I0312 18:25:36.592656 
7337 scope.go:117] "RemoveContainer" containerID="d457f16cde4caae54a1dbf33594546ae3dcb4fddad7eadd6df336af05fa29aa3" Mar 12 18:25:37.261729 master-0 kubenswrapper[7337]: I0312 18:25:37.261647 7337 status_manager.go:851] "Failed to get status for pod" podUID="30102cc9-45f8-46f8-bb34-eec48fdb297d" pod="openshift-etcd/installer-2-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)" Mar 12 18:25:37.491893 master-0 kubenswrapper[7337]: I0312 18:25:37.491792 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:37.491893 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:37.491893 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:37.491893 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:37.492982 master-0 kubenswrapper[7337]: I0312 18:25:37.492897 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:37.600742 master-0 kubenswrapper[7337]: I0312 18:25:37.600644 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ljw8b_062f1b21-2ffc-47da-8334-427c3b2a1a90/authentication-operator/2.log" Mar 12 18:25:37.722688 master-0 kubenswrapper[7337]: I0312 18:25:37.722639 7337 scope.go:117] "RemoveContainer" containerID="6c80cd8acac8db8cb197069003bd675e1b445a06ed49a0673a80b100538ac223" Mar 12 18:25:38.494208 master-0 kubenswrapper[7337]: I0312 18:25:38.494119 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:38.494208 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:38.494208 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:38.494208 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:38.494208 master-0 kubenswrapper[7337]: I0312 18:25:38.494194 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:38.614193 master-0 kubenswrapper[7337]: I0312 18:25:38.614117 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4527l_d94dc349-c5cb-4f12-8e48-867030af4981/ingress-operator/3.log" Mar 12 18:25:38.615068 master-0 kubenswrapper[7337]: I0312 18:25:38.614800 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" event={"ID":"d94dc349-c5cb-4f12-8e48-867030af4981","Type":"ContainerStarted","Data":"37c9fcab8917043972cb8da48f9b3a66fa98e29cb384d4ab82bdb89b8dd2d452"} Mar 12 18:25:39.519598 master-0 kubenswrapper[7337]: I0312 18:25:39.519544 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:39.519598 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:39.519598 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:39.519598 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:39.519842 master-0 kubenswrapper[7337]: I0312 18:25:39.519611 7337 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:40.383016 master-0 kubenswrapper[7337]: E0312 18:25:40.382870 7337 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189c2adf0a14fadf openshift-kube-controller-manager 10019 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:39c441a05d91070efc538925475b0a44,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:18:10 +0000 UTC,LastTimestamp:2026-03-12 18:23:37.533911734 +0000 UTC m=+618.002512681,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:25:40.493108 master-0 kubenswrapper[7337]: I0312 18:25:40.492945 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:40.493108 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:40.493108 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:40.493108 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:40.493108 master-0 kubenswrapper[7337]: I0312 18:25:40.493002 7337 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:40.669151 master-0 kubenswrapper[7337]: I0312 18:25:40.669064 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:25:41.493145 master-0 kubenswrapper[7337]: I0312 18:25:41.493077 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:41.493145 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:41.493145 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:41.493145 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:41.494011 master-0 kubenswrapper[7337]: I0312 18:25:41.493148 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:42.494206 master-0 kubenswrapper[7337]: I0312 18:25:42.494102 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:42.494206 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:42.494206 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:42.494206 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:42.495181 master-0 kubenswrapper[7337]: I0312 18:25:42.494219 7337 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:42.572549 master-0 kubenswrapper[7337]: E0312 18:25:42.572387 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 18:25:43.493913 master-0 kubenswrapper[7337]: I0312 18:25:43.493811 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:43.493913 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:43.493913 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:43.493913 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:43.494928 master-0 kubenswrapper[7337]: I0312 18:25:43.493930 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:43.722819 master-0 kubenswrapper[7337]: I0312 18:25:43.722754 7337 scope.go:117] "RemoveContainer" containerID="68e99d6ea6e8d10062ce5f4ba00f8d6b01cd9c70d46c0f8c6da206028e5ae034" Mar 12 18:25:44.314005 master-0 kubenswrapper[7337]: E0312 18:25:44.313940 7337 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 12 18:25:44.493328 master-0 kubenswrapper[7337]: I0312 18:25:44.493248 7337 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:44.493328 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:44.493328 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:44.493328 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:44.493328 master-0 kubenswrapper[7337]: I0312 18:25:44.493323 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:44.661535 master-0 kubenswrapper[7337]: I0312 18:25:44.661476 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-9nzsn_b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652/manager/1.log" Mar 12 18:25:44.661950 master-0 kubenswrapper[7337]: I0312 18:25:44.661817 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" event={"ID":"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652","Type":"ContainerStarted","Data":"d29a721addd795e72d4888be27b34684e8179c6b20e149419f63cf5da722668b"} Mar 12 18:25:44.662403 master-0 kubenswrapper[7337]: I0312 18:25:44.662379 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:25:44.664324 master-0 kubenswrapper[7337]: I0312 18:25:44.664295 7337 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="7543b93babb8e5c9d5cf6e5b32750ff43fa63df2a49a76caac539aefeccb417e" exitCode=0 Mar 12 18:25:44.664324 master-0 
kubenswrapper[7337]: I0312 18:25:44.664320 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"7543b93babb8e5c9d5cf6e5b32750ff43fa63df2a49a76caac539aefeccb417e"} Mar 12 18:25:44.664528 master-0 kubenswrapper[7337]: I0312 18:25:44.664499 7337 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="901fa277-f91f-4050-9bba-fc5a7d764d16" Mar 12 18:25:44.664583 master-0 kubenswrapper[7337]: I0312 18:25:44.664533 7337 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="901fa277-f91f-4050-9bba-fc5a7d764d16" Mar 12 18:25:45.493442 master-0 kubenswrapper[7337]: I0312 18:25:45.493366 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:45.493442 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:45.493442 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:45.493442 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:45.493442 master-0 kubenswrapper[7337]: I0312 18:25:45.493421 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:46.492621 master-0 kubenswrapper[7337]: I0312 18:25:46.492564 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:46.492621 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 
18:25:46.492621 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:46.492621 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:46.493194 master-0 kubenswrapper[7337]: I0312 18:25:46.492644 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:46.679312 master-0 kubenswrapper[7337]: I0312 18:25:46.679271 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-2psgb_e5fb0152-3efd-4000-bce3-fa90b75316ae/cluster-baremetal-operator/0.log" Mar 12 18:25:46.679312 master-0 kubenswrapper[7337]: I0312 18:25:46.679315 7337 generic.go:334] "Generic (PLEG): container finished" podID="e5fb0152-3efd-4000-bce3-fa90b75316ae" containerID="15d14268c6ae0aa2ad20f2093d09d878fa7d62076388ad39f3dddf0c18d45f03" exitCode=1 Mar 12 18:25:46.679555 master-0 kubenswrapper[7337]: I0312 18:25:46.679361 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" event={"ID":"e5fb0152-3efd-4000-bce3-fa90b75316ae","Type":"ContainerDied","Data":"15d14268c6ae0aa2ad20f2093d09d878fa7d62076388ad39f3dddf0c18d45f03"} Mar 12 18:25:46.679827 master-0 kubenswrapper[7337]: I0312 18:25:46.679804 7337 scope.go:117] "RemoveContainer" containerID="15d14268c6ae0aa2ad20f2093d09d878fa7d62076388ad39f3dddf0c18d45f03" Mar 12 18:25:46.683628 master-0 kubenswrapper[7337]: I0312 18:25:46.683612 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager/0.log" Mar 12 18:25:46.683733 master-0 kubenswrapper[7337]: I0312 18:25:46.683717 7337 generic.go:334] "Generic (PLEG): container finished" podID="39c441a05d91070efc538925475b0a44" 
containerID="7fb23a2c8c1ff62e8501ccd63993df169d80f53ec586abd8df0866b032126fb5" exitCode=0 Mar 12 18:25:46.683810 master-0 kubenswrapper[7337]: I0312 18:25:46.683795 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"39c441a05d91070efc538925475b0a44","Type":"ContainerDied","Data":"7fb23a2c8c1ff62e8501ccd63993df169d80f53ec586abd8df0866b032126fb5"} Mar 12 18:25:46.684170 master-0 kubenswrapper[7337]: I0312 18:25:46.684156 7337 scope.go:117] "RemoveContainer" containerID="7fb23a2c8c1ff62e8501ccd63993df169d80f53ec586abd8df0866b032126fb5" Mar 12 18:25:46.721988 master-0 kubenswrapper[7337]: I0312 18:25:46.721939 7337 scope.go:117] "RemoveContainer" containerID="b960a5afb85862022f06f7d612fd0f8b4c4023ddaec1e0fe6a309ca8f51ad930" Mar 12 18:25:46.722225 master-0 kubenswrapper[7337]: E0312 18:25:46.722130 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-2ltx9_openshift-cluster-storage-operator(bce831df-c604-4608-a24e-b14d62c5287a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" podUID="bce831df-c604-4608-a24e-b14d62c5287a" Mar 12 18:25:47.489134 master-0 kubenswrapper[7337]: I0312 18:25:47.489069 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:25:47.489134 master-0 kubenswrapper[7337]: I0312 18:25:47.489126 7337 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:25:47.489134 master-0 kubenswrapper[7337]: I0312 18:25:47.489177 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:25:47.493677 master-0 kubenswrapper[7337]: I0312 18:25:47.493580 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:47.493677 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:47.493677 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:47.493677 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:47.494730 master-0 kubenswrapper[7337]: I0312 18:25:47.493696 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:47.694109 master-0 kubenswrapper[7337]: I0312 18:25:47.693986 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-2psgb_e5fb0152-3efd-4000-bce3-fa90b75316ae/cluster-baremetal-operator/0.log" Mar 12 18:25:47.694269 master-0 kubenswrapper[7337]: I0312 18:25:47.694088 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" event={"ID":"e5fb0152-3efd-4000-bce3-fa90b75316ae","Type":"ContainerStarted","Data":"84549a6a4b2b7f4a99c232b78b53c64bb94831244f578e91d0ebfcc22961117f"} Mar 12 18:25:47.696007 master-0 kubenswrapper[7337]: I0312 18:25:47.695956 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-4w5z7_030160af-c915-4f00-903a-1c4b5c2b719a/machine-approver-controller/0.log" Mar 12 18:25:47.696494 master-0 kubenswrapper[7337]: I0312 18:25:47.696460 7337 generic.go:334] "Generic (PLEG): container 
finished" podID="030160af-c915-4f00-903a-1c4b5c2b719a" containerID="896aa2273ca1ba7df7cc5c10fd0e284e882d24c2714f3848133288e9eccfa795" exitCode=255 Mar 12 18:25:47.696581 master-0 kubenswrapper[7337]: I0312 18:25:47.696490 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7" event={"ID":"030160af-c915-4f00-903a-1c4b5c2b719a","Type":"ContainerDied","Data":"896aa2273ca1ba7df7cc5c10fd0e284e882d24c2714f3848133288e9eccfa795"} Mar 12 18:25:47.697343 master-0 kubenswrapper[7337]: I0312 18:25:47.697317 7337 scope.go:117] "RemoveContainer" containerID="896aa2273ca1ba7df7cc5c10fd0e284e882d24c2714f3848133288e9eccfa795" Mar 12 18:25:47.701482 master-0 kubenswrapper[7337]: I0312 18:25:47.701459 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager/0.log" Mar 12 18:25:47.701542 master-0 kubenswrapper[7337]: I0312 18:25:47.701504 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"39c441a05d91070efc538925475b0a44","Type":"ContainerStarted","Data":"98178e315548a34bcbbf14893436a04d14e894141a4d4ad668fb7f628d400331"} Mar 12 18:25:48.492446 master-0 kubenswrapper[7337]: I0312 18:25:48.492363 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:48.492446 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:48.492446 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:48.492446 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:48.492446 master-0 kubenswrapper[7337]: I0312 18:25:48.492443 7337 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:48.711038 master-0 kubenswrapper[7337]: I0312 18:25:48.710963 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-4w5z7_030160af-c915-4f00-903a-1c4b5c2b719a/machine-approver-controller/0.log" Mar 12 18:25:48.711700 master-0 kubenswrapper[7337]: I0312 18:25:48.711606 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7" event={"ID":"030160af-c915-4f00-903a-1c4b5c2b719a","Type":"ContainerStarted","Data":"a56ac36314aa1b7472cf55eacb426f6e2d860291fca93f69a76b61dc8960ec4d"} Mar 12 18:25:49.492219 master-0 kubenswrapper[7337]: I0312 18:25:49.492162 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:49.492219 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:49.492219 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:49.492219 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:49.492530 master-0 kubenswrapper[7337]: I0312 18:25:49.492234 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:50.492390 master-0 kubenswrapper[7337]: I0312 18:25:50.492255 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:50.492390 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:50.492390 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:50.492390 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:50.492390 master-0 kubenswrapper[7337]: I0312 18:25:50.492333 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:51.355273 master-0 kubenswrapper[7337]: I0312 18:25:51.355198 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:25:51.492009 master-0 kubenswrapper[7337]: I0312 18:25:51.491964 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:51.492009 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:51.492009 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:51.492009 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:51.492418 master-0 kubenswrapper[7337]: I0312 18:25:51.492380 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:52.492785 master-0 kubenswrapper[7337]: I0312 18:25:52.492747 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:52.492785 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:52.492785 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:52.492785 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:52.493361 master-0 kubenswrapper[7337]: I0312 18:25:52.493334 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:53.493665 master-0 kubenswrapper[7337]: I0312 18:25:53.493570 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:53.493665 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:53.493665 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:53.493665 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:53.494697 master-0 kubenswrapper[7337]: I0312 18:25:53.493685 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:54.493888 master-0 kubenswrapper[7337]: I0312 18:25:54.493799 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:54.493888 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:54.493888 
master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:54.493888 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:54.494833 master-0 kubenswrapper[7337]: I0312 18:25:54.493900 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:55.492668 master-0 kubenswrapper[7337]: I0312 18:25:55.492545 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:55.492668 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:55.492668 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:55.492668 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:55.492668 master-0 kubenswrapper[7337]: I0312 18:25:55.492645 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:56.492830 master-0 kubenswrapper[7337]: I0312 18:25:56.492756 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:56.492830 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:56.492830 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:56.492830 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:56.492830 master-0 kubenswrapper[7337]: I0312 18:25:56.492809 7337 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:57.489773 master-0 kubenswrapper[7337]: I0312 18:25:57.489708 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:25:57.489773 master-0 kubenswrapper[7337]: I0312 18:25:57.489766 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:25:57.493408 master-0 kubenswrapper[7337]: I0312 18:25:57.493348 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:57.493408 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:25:57.493408 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:25:57.493408 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:25:57.493930 master-0 kubenswrapper[7337]: I0312 18:25:57.493439 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:25:58.492810 master-0 kubenswrapper[7337]: I0312 18:25:58.492699 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:25:58.492810 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 
18:25:58.492810 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:25:58.492810 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:25:58.494461 master-0 kubenswrapper[7337]: I0312 18:25:58.492838 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:25:58.722766 master-0 kubenswrapper[7337]: I0312 18:25:58.722679 7337 scope.go:117] "RemoveContainer" containerID="b960a5afb85862022f06f7d612fd0f8b4c4023ddaec1e0fe6a309ca8f51ad930"
Mar 12 18:25:59.493320 master-0 kubenswrapper[7337]: I0312 18:25:59.493225 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:25:59.493320 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:25:59.493320 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:25:59.493320 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:25:59.493320 master-0 kubenswrapper[7337]: I0312 18:25:59.493315 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:25:59.573814 master-0 kubenswrapper[7337]: E0312 18:25:59.573714 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 12 18:25:59.793118 master-0 kubenswrapper[7337]: I0312 18:25:59.793056 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-2ltx9_bce831df-c604-4608-a24e-b14d62c5287a/snapshot-controller/2.log"
Mar 12 18:25:59.793365 master-0 kubenswrapper[7337]: I0312 18:25:59.793142 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" event={"ID":"bce831df-c604-4608-a24e-b14d62c5287a","Type":"ContainerStarted","Data":"3ee6889d81d43029dd6623714d782675e77b0ac4d47d44b5e698b3218f31c69c"}
Mar 12 18:26:00.490337 master-0 kubenswrapper[7337]: I0312 18:26:00.490249 7337 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 12 18:26:00.490598 master-0 kubenswrapper[7337]: I0312 18:26:00.490337 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:26:00.493186 master-0 kubenswrapper[7337]: I0312 18:26:00.493090 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:00.493186 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:00.493186 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:00.493186 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:00.493186 master-0 kubenswrapper[7337]: I0312 18:26:00.493159 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:01.492014 master-0 kubenswrapper[7337]: I0312 18:26:01.491933 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:01.492014 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:01.492014 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:01.492014 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:01.492707 master-0 kubenswrapper[7337]: I0312 18:26:01.492018 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:02.493182 master-0 kubenswrapper[7337]: I0312 18:26:02.492995 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:02.493182 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:02.493182 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:02.493182 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:02.493182 master-0 kubenswrapper[7337]: I0312 18:26:02.493070 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:03.492923 master-0 kubenswrapper[7337]: I0312 18:26:03.492844 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:03.492923 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:03.492923 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:03.492923 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:03.493822 master-0 kubenswrapper[7337]: I0312 18:26:03.492934 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:04.492122 master-0 kubenswrapper[7337]: I0312 18:26:04.492059 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:04.492122 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:04.492122 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:04.492122 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:04.492460 master-0 kubenswrapper[7337]: I0312 18:26:04.492151 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:04.832389 master-0 kubenswrapper[7337]: I0312 18:26:04.832331 7337 generic.go:334] "Generic (PLEG): container finished" podID="30c5dc4b-f1c8-4773-b961-985740fcc503" containerID="4c8710948dd5b2bc4491d4be8e860204494bc7d0e3f8f5ee9d0d17c5e1a20b86" exitCode=0
Mar 12 18:26:04.832389 master-0 kubenswrapper[7337]: I0312 18:26:04.832401 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" event={"ID":"30c5dc4b-f1c8-4773-b961-985740fcc503","Type":"ContainerDied","Data":"4c8710948dd5b2bc4491d4be8e860204494bc7d0e3f8f5ee9d0d17c5e1a20b86"}
Mar 12 18:26:04.833480 master-0 kubenswrapper[7337]: I0312 18:26:04.832433 7337 scope.go:117] "RemoveContainer" containerID="f0410fcdb7f021e073b091992c982ea0c6dd9257aa500e76a08b26054e3f730d"
Mar 12 18:26:04.833672 master-0 kubenswrapper[7337]: I0312 18:26:04.833597 7337 scope.go:117] "RemoveContainer" containerID="4c8710948dd5b2bc4491d4be8e860204494bc7d0e3f8f5ee9d0d17c5e1a20b86"
Mar 12 18:26:04.834417 master-0 kubenswrapper[7337]: E0312 18:26:04.834240 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager pod=controller-manager-5b55d98459-sr4hk_openshift-controller-manager(30c5dc4b-f1c8-4773-b961-985740fcc503)\"" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" podUID="30c5dc4b-f1c8-4773-b961-985740fcc503"
Mar 12 18:26:04.835150 master-0 kubenswrapper[7337]: I0312 18:26:04.835093 7337 generic.go:334] "Generic (PLEG): container finished" podID="74eb1407-de29-42e5-9e6c-ce1bec3a9d80" containerID="cf7ec04355ab534ccfb643cab9c3d22d23f3ccdda0dc0dcaa6f049053cf3267f" exitCode=0
Mar 12 18:26:04.835150 master-0 kubenswrapper[7337]: I0312 18:26:04.835146 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" event={"ID":"74eb1407-de29-42e5-9e6c-ce1bec3a9d80","Type":"ContainerDied","Data":"cf7ec04355ab534ccfb643cab9c3d22d23f3ccdda0dc0dcaa6f049053cf3267f"}
Mar 12 18:26:04.835751 master-0 kubenswrapper[7337]: I0312 18:26:04.835707 7337 scope.go:117] "RemoveContainer" containerID="cf7ec04355ab534ccfb643cab9c3d22d23f3ccdda0dc0dcaa6f049053cf3267f"
Mar 12 18:26:04.835940 master-0 kubenswrapper[7337]: E0312 18:26:04.835898 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-cluster-manager pod=ovnkube-control-plane-66b55d57d-w7wj9_openshift-ovn-kubernetes(74eb1407-de29-42e5-9e6c-ce1bec3a9d80)\"" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" podUID="74eb1407-de29-42e5-9e6c-ce1bec3a9d80"
Mar 12 18:26:04.838282 master-0 kubenswrapper[7337]: I0312 18:26:04.838231 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-zd9gm_34cbf061-4c76-476e-bed9-0a133c744862/control-plane-machine-set-operator/0.log"
Mar 12 18:26:04.838282 master-0 kubenswrapper[7337]: I0312 18:26:04.838264 7337 generic.go:334] "Generic (PLEG): container finished" podID="34cbf061-4c76-476e-bed9-0a133c744862" containerID="d72afaed4f952dfc1603764d86ef509711bb42af6ee8dbbfe68a46a833266739" exitCode=1
Mar 12 18:26:04.838282 master-0 kubenswrapper[7337]: I0312 18:26:04.838281 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm" event={"ID":"34cbf061-4c76-476e-bed9-0a133c744862","Type":"ContainerDied","Data":"d72afaed4f952dfc1603764d86ef509711bb42af6ee8dbbfe68a46a833266739"}
Mar 12 18:26:04.838662 master-0 kubenswrapper[7337]: I0312 18:26:04.838505 7337 scope.go:117] "RemoveContainer" containerID="d72afaed4f952dfc1603764d86ef509711bb42af6ee8dbbfe68a46a833266739"
Mar 12 18:26:04.934590 master-0 kubenswrapper[7337]: I0312 18:26:04.934334 7337 scope.go:117] "RemoveContainer" containerID="b68bb8a45412c32b722e21748839c3672ba272871c5c90f6c3a4e4de1a85ff86"
Mar 12 18:26:05.492908 master-0 kubenswrapper[7337]: I0312 18:26:05.492844 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:05.492908 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:05.492908 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:05.492908 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:05.493240 master-0 kubenswrapper[7337]: I0312 18:26:05.492922 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:05.849534 master-0 kubenswrapper[7337]: I0312 18:26:05.849482 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-zd9gm_34cbf061-4c76-476e-bed9-0a133c744862/control-plane-machine-set-operator/0.log"
Mar 12 18:26:05.850063 master-0 kubenswrapper[7337]: I0312 18:26:05.849590 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm" event={"ID":"34cbf061-4c76-476e-bed9-0a133c744862","Type":"ContainerStarted","Data":"998f92e08d42c81ea604a72e04b1be13ee01a148705cfddebdc2deebe964624e"}
Mar 12 18:26:06.492169 master-0 kubenswrapper[7337]: I0312 18:26:06.492102 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:06.492169 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:06.492169 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:06.492169 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:06.492634 master-0 kubenswrapper[7337]: I0312 18:26:06.492603 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:07.227022 master-0 kubenswrapper[7337]: E0312 18:26:07.226905 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:25:57Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:25:57Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:25:57Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T18:25:57Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:26:07.491862 master-0 kubenswrapper[7337]: I0312 18:26:07.491687 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:07.491862 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:07.491862 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:07.491862 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:07.491862 master-0 kubenswrapper[7337]: I0312 18:26:07.491786 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:08.492862 master-0 kubenswrapper[7337]: I0312 18:26:08.492790 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:08.492862 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:08.492862 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:08.492862 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:08.493875 master-0 kubenswrapper[7337]: I0312 18:26:08.492879 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:09.492847 master-0 kubenswrapper[7337]: I0312 18:26:09.492777 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:09.492847 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:09.492847 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:09.492847 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:09.493482 master-0 kubenswrapper[7337]: I0312 18:26:09.492846 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:10.489407 master-0 kubenswrapper[7337]: I0312 18:26:10.489346 7337 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 12 18:26:10.489407 master-0 kubenswrapper[7337]: I0312 18:26:10.489404 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:26:10.492377 master-0 kubenswrapper[7337]: I0312 18:26:10.492324 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:10.492377 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:10.492377 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:10.492377 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:10.492559 master-0 kubenswrapper[7337]: I0312 18:26:10.492390 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:11.492361 master-0 kubenswrapper[7337]: I0312 18:26:11.492260 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:11.492361 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:11.492361 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:11.492361 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:11.492361 master-0 kubenswrapper[7337]: I0312 18:26:11.492319 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:12.493369 master-0 kubenswrapper[7337]: I0312 18:26:12.493244 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:12.493369 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:12.493369 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:12.493369 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:12.494410 master-0 kubenswrapper[7337]: I0312 18:26:12.493367 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:13.026704 master-0 kubenswrapper[7337]: I0312 18:26:13.026592 7337 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk"
Mar 12 18:26:13.026704 master-0 kubenswrapper[7337]: I0312 18:26:13.026665 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk"
Mar 12 18:26:13.027456 master-0 kubenswrapper[7337]: I0312 18:26:13.027401 7337 scope.go:117] "RemoveContainer" containerID="4c8710948dd5b2bc4491d4be8e860204494bc7d0e3f8f5ee9d0d17c5e1a20b86"
Mar 12 18:26:13.027851 master-0 kubenswrapper[7337]: E0312 18:26:13.027795 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager pod=controller-manager-5b55d98459-sr4hk_openshift-controller-manager(30c5dc4b-f1c8-4773-b961-985740fcc503)\"" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" podUID="30c5dc4b-f1c8-4773-b961-985740fcc503"
Mar 12 18:26:13.493108 master-0 kubenswrapper[7337]: I0312 18:26:13.493006 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:13.493108 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:13.493108 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:13.493108 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:13.493108 master-0 kubenswrapper[7337]: I0312 18:26:13.493095 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:14.385937 master-0 kubenswrapper[7337]: E0312 18:26:14.385761 7337 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189c2adf1d779cb1 openshift-kube-controller-manager 10058 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:39c441a05d91070efc538925475b0a44,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:18:11 +0000 UTC,LastTimestamp:2026-03-12 18:23:37.544357997 +0000 UTC m=+618.012958944,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:26:14.493648 master-0 kubenswrapper[7337]: I0312 18:26:14.493541 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:14.493648 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:14.493648 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:14.493648 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:14.493648 master-0 kubenswrapper[7337]: I0312 18:26:14.493625 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:15.492000 master-0 kubenswrapper[7337]: I0312 18:26:15.491932 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:15.492000 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:15.492000 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:15.492000 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:15.492000 master-0 kubenswrapper[7337]: I0312 18:26:15.491987 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:15.676211 master-0 kubenswrapper[7337]: I0312 18:26:15.676066 7337 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ljw8b container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body=
Mar 12 18:26:15.676211 master-0 kubenswrapper[7337]: I0312 18:26:15.676209 7337 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused"
Mar 12 18:26:16.492075 master-0 kubenswrapper[7337]: I0312 18:26:16.492010 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:16.492075 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:16.492075 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:16.492075 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:16.492075 master-0 kubenswrapper[7337]: I0312 18:26:16.492069 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:16.575204 master-0 kubenswrapper[7337]: E0312 18:26:16.575117 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 12 18:26:17.227830 master-0 kubenswrapper[7337]: E0312 18:26:17.227682 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:26:17.490430 master-0 kubenswrapper[7337]: I0312 18:26:17.490247 7337 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" start-of-body=
Mar 12 18:26:17.490430 master-0 kubenswrapper[7337]: I0312 18:26:17.490334 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused"
Mar 12 18:26:17.490430 master-0 kubenswrapper[7337]: I0312 18:26:17.490410 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 18:26:17.491431 master-0 kubenswrapper[7337]: I0312 18:26:17.491290 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:17.491431 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:17.491431 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:17.491431 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:17.491431 master-0 kubenswrapper[7337]: I0312 18:26:17.491366 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:17.491835 master-0 kubenswrapper[7337]: I0312 18:26:17.491473 7337 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"98178e315548a34bcbbf14893436a04d14e894141a4d4ad668fb7f628d400331"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 12 18:26:17.491835 master-0 kubenswrapper[7337]: I0312 18:26:17.491657 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" containerID="cri-o://98178e315548a34bcbbf14893436a04d14e894141a4d4ad668fb7f628d400331" gracePeriod=30
Mar 12 18:26:17.937728 master-0 kubenswrapper[7337]: I0312 18:26:17.937682 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/cluster-policy-controller/1.log"
Mar 12 18:26:17.941844 master-0 kubenswrapper[7337]: I0312 18:26:17.941818 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager/0.log"
Mar 12 18:26:17.941985 master-0 kubenswrapper[7337]: I0312 18:26:17.941862 7337 generic.go:334] "Generic (PLEG): container finished" podID="39c441a05d91070efc538925475b0a44" containerID="98178e315548a34bcbbf14893436a04d14e894141a4d4ad668fb7f628d400331" exitCode=255
Mar 12 18:26:17.941985 master-0 kubenswrapper[7337]: I0312 18:26:17.941893 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"39c441a05d91070efc538925475b0a44","Type":"ContainerDied","Data":"98178e315548a34bcbbf14893436a04d14e894141a4d4ad668fb7f628d400331"}
Mar 12 18:26:17.941985 master-0 kubenswrapper[7337]: I0312 18:26:17.941920 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"39c441a05d91070efc538925475b0a44","Type":"ContainerStarted","Data":"67b90f2dc0521383a52970ef838becfa7a4fd4ded7c3c75b11d7b851c57c011b"}
Mar 12 18:26:17.941985 master-0 kubenswrapper[7337]: I0312 18:26:17.941936 7337 scope.go:117] "RemoveContainer" containerID="7fb23a2c8c1ff62e8501ccd63993df169d80f53ec586abd8df0866b032126fb5"
Mar 12 18:26:18.494074 master-0 kubenswrapper[7337]: I0312 18:26:18.493972 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:18.494074 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:18.494074 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:18.494074 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:18.494656 master-0 kubenswrapper[7337]: I0312 18:26:18.494113 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:18.667895 master-0 kubenswrapper[7337]: E0312 18:26:18.667796 7337 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 12 18:26:18.951819 master-0 kubenswrapper[7337]: I0312 18:26:18.951755 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/cluster-policy-controller/1.log"
Mar 12 18:26:18.953699 master-0 kubenswrapper[7337]: I0312 18:26:18.953647 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager/0.log"
Mar 12 18:26:19.492628 master-0 kubenswrapper[7337]: I0312 18:26:19.492557 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:19.492628 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:19.492628 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:19.492628 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:19.492896 master-0 kubenswrapper[7337]: I0312 18:26:19.492666 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:19.722579 master-0 kubenswrapper[7337]: I0312 18:26:19.722409 7337 scope.go:117] "RemoveContainer" containerID="cf7ec04355ab534ccfb643cab9c3d22d23f3ccdda0dc0dcaa6f049053cf3267f"
Mar 12 18:26:19.983758 master-0 kubenswrapper[7337]: I0312 18:26:19.983609 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"79b096245a2c0849e1eb752ac1cc93728bb4af45cc62d990fa23de0f9691630c"}
Mar 12 18:26:19.983758 master-0 kubenswrapper[7337]: I0312 18:26:19.983684 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"57a2e66698dd7b2085962d87cba4600292cd8ca3813c497d51b91333d17177c0"}
Mar 12 18:26:19.983758 master-0 kubenswrapper[7337]: I0312 18:26:19.983710 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"b639416601ad7bd99978e18cd7f92b4a72dd1d06449a777a4af899e4c60f21ba"}
Mar 12 18:26:19.983758 master-0 kubenswrapper[7337]: I0312 18:26:19.983728 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"b7750a2045b8f5019e76849e41c480529ce009b9060d700116c60f3a22cfe61b"}
Mar 12 18:26:19.986635 master-0 kubenswrapper[7337]: I0312 18:26:19.986553 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" event={"ID":"74eb1407-de29-42e5-9e6c-ce1bec3a9d80","Type":"ContainerStarted","Data":"628a568f6a9a99d41ac6d21215324f6bc1af98bb874015be59525410b6663648"}
Mar 12 18:26:20.493673 master-0 kubenswrapper[7337]: I0312 18:26:20.493555 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:20.493673 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:20.493673 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:20.493673 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:20.494183 master-0 kubenswrapper[7337]: I0312 18:26:20.493693 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:21.003406 master-0 kubenswrapper[7337]: I0312 18:26:21.003325 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"ce272961794705fb82908bdcd8d6c34bb765939cf5756a00b858a75975bfb3ec"}
Mar 12 18:26:21.004925 master-0 kubenswrapper[7337]: I0312 18:26:21.003915 7337 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="901fa277-f91f-4050-9bba-fc5a7d764d16"
Mar 12 18:26:21.004925 master-0 kubenswrapper[7337]: I0312 18:26:21.003962 7337 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="901fa277-f91f-4050-9bba-fc5a7d764d16"
Mar 12 18:26:21.494254 master-0 kubenswrapper[7337]: I0312 18:26:21.494123 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:21.494254 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:21.494254 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:21.494254 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:21.494254 master-0 kubenswrapper[7337]: I0312 18:26:21.494246 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:22.493040 master-0 kubenswrapper[7337]: I0312 18:26:22.492936 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:26:22.493040 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:26:22.493040 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:26:22.493040 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:26:22.493967 master-0 kubenswrapper[7337]: I0312 18:26:22.493039 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:26:23.493256 master-0 kubenswrapper[7337]: I0312 18:26:23.493139 7337
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:26:23.493256 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:26:23.493256 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:26:23.493256 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:26:23.494210 master-0 kubenswrapper[7337]: I0312 18:26:23.493280 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:26:23.761784 master-0 kubenswrapper[7337]: I0312 18:26:23.761556 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 12 18:26:23.761784 master-0 kubenswrapper[7337]: I0312 18:26:23.761686 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 12 18:26:24.492463 master-0 kubenswrapper[7337]: I0312 18:26:24.492346 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:26:24.492463 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:26:24.492463 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:26:24.492463 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:26:24.492463 master-0 kubenswrapper[7337]: I0312 18:26:24.492449 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:26:25.493642 master-0 kubenswrapper[7337]: I0312 18:26:25.493500 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:26:25.493642 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:26:25.493642 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:26:25.493642 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:26:25.494683 master-0 kubenswrapper[7337]: I0312 18:26:25.493655 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:26:25.676137 master-0 kubenswrapper[7337]: I0312 18:26:25.676012 7337 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ljw8b container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body= Mar 12 18:26:25.676420 master-0 kubenswrapper[7337]: I0312 18:26:25.676124 7337 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Mar 12 18:26:26.492902 master-0 kubenswrapper[7337]: I0312 18:26:26.492815 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:26:26.492902 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:26:26.492902 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:26:26.492902 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:26:26.493360 master-0 kubenswrapper[7337]: I0312 18:26:26.492915 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:26:26.722437 master-0 kubenswrapper[7337]: I0312 18:26:26.722355 7337 scope.go:117] "RemoveContainer" containerID="4c8710948dd5b2bc4491d4be8e860204494bc7d0e3f8f5ee9d0d17c5e1a20b86" Mar 12 18:26:27.056721 master-0 kubenswrapper[7337]: I0312 18:26:27.056654 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" event={"ID":"30c5dc4b-f1c8-4773-b961-985740fcc503","Type":"ContainerStarted","Data":"c5312d3b71b96a4fb9ea85e4c97ffeef667430d9cd85a43bb5055d9732684c96"} Mar 12 18:26:27.057047 master-0 kubenswrapper[7337]: I0312 18:26:27.056994 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:26:27.062019 master-0 kubenswrapper[7337]: I0312 18:26:27.061970 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" Mar 12 18:26:27.228721 master-0 kubenswrapper[7337]: E0312 18:26:27.228619 7337 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 
18:26:27.489882 master-0 kubenswrapper[7337]: I0312 18:26:27.489716 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:26:27.490563 master-0 kubenswrapper[7337]: I0312 18:26:27.490193 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:26:27.495212 master-0 kubenswrapper[7337]: I0312 18:26:27.493984 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:26:27.495212 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:26:27.495212 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:26:27.495212 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:26:27.495212 master-0 kubenswrapper[7337]: I0312 18:26:27.494052 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:26:28.492972 master-0 kubenswrapper[7337]: I0312 18:26:28.492860 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:26:28.492972 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:26:28.492972 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:26:28.492972 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:26:28.492972 master-0 kubenswrapper[7337]: I0312 18:26:28.492964 7337 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:26:29.074847 master-0 kubenswrapper[7337]: I0312 18:26:29.074769 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-2ltx9_bce831df-c604-4608-a24e-b14d62c5287a/snapshot-controller/3.log" Mar 12 18:26:29.075440 master-0 kubenswrapper[7337]: I0312 18:26:29.075395 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-2ltx9_bce831df-c604-4608-a24e-b14d62c5287a/snapshot-controller/2.log" Mar 12 18:26:29.075499 master-0 kubenswrapper[7337]: I0312 18:26:29.075462 7337 generic.go:334] "Generic (PLEG): container finished" podID="bce831df-c604-4608-a24e-b14d62c5287a" containerID="3ee6889d81d43029dd6623714d782675e77b0ac4d47d44b5e698b3218f31c69c" exitCode=1 Mar 12 18:26:29.075627 master-0 kubenswrapper[7337]: I0312 18:26:29.075576 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" event={"ID":"bce831df-c604-4608-a24e-b14d62c5287a","Type":"ContainerDied","Data":"3ee6889d81d43029dd6623714d782675e77b0ac4d47d44b5e698b3218f31c69c"} Mar 12 18:26:29.075678 master-0 kubenswrapper[7337]: I0312 18:26:29.075654 7337 scope.go:117] "RemoveContainer" containerID="b960a5afb85862022f06f7d612fd0f8b4c4023ddaec1e0fe6a309ca8f51ad930" Mar 12 18:26:29.076409 master-0 kubenswrapper[7337]: I0312 18:26:29.076358 7337 scope.go:117] "RemoveContainer" containerID="3ee6889d81d43029dd6623714d782675e77b0ac4d47d44b5e698b3218f31c69c" Mar 12 18:26:29.076902 master-0 kubenswrapper[7337]: E0312 18:26:29.076852 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: 
\"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-2ltx9_openshift-cluster-storage-operator(bce831df-c604-4608-a24e-b14d62c5287a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" podUID="bce831df-c604-4608-a24e-b14d62c5287a" Mar 12 18:26:29.493051 master-0 kubenswrapper[7337]: I0312 18:26:29.492958 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:26:29.493051 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:26:29.493051 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:26:29.493051 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:26:29.493051 master-0 kubenswrapper[7337]: I0312 18:26:29.493040 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:26:30.087689 master-0 kubenswrapper[7337]: I0312 18:26:30.087603 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-2ltx9_bce831df-c604-4608-a24e-b14d62c5287a/snapshot-controller/3.log" Mar 12 18:26:30.490534 master-0 kubenswrapper[7337]: I0312 18:26:30.490354 7337 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 18:26:30.490534 master-0 kubenswrapper[7337]: I0312 18:26:30.490440 7337 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 18:26:30.493710 master-0 kubenswrapper[7337]: I0312 18:26:30.493638 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:26:30.493710 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:26:30.493710 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:26:30.493710 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:26:30.494162 master-0 kubenswrapper[7337]: I0312 18:26:30.493754 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:26:31.495349 master-0 kubenswrapper[7337]: I0312 18:26:31.495252 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:26:31.495349 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:26:31.495349 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:26:31.495349 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:26:31.496505 master-0 kubenswrapper[7337]: I0312 18:26:31.495369 7337 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:26:32.494148 master-0 kubenswrapper[7337]: I0312 18:26:32.494020 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:26:32.494148 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:26:32.494148 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:26:32.494148 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:26:32.494861 master-0 kubenswrapper[7337]: I0312 18:26:32.494160 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:26:33.492648 master-0 kubenswrapper[7337]: I0312 18:26:33.492551 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:26:33.492648 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:26:33.492648 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:26:33.492648 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:26:33.492648 master-0 kubenswrapper[7337]: I0312 18:26:33.492620 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:26:33.576940 
master-0 kubenswrapper[7337]: E0312 18:26:33.576814 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 18:26:33.800833 master-0 kubenswrapper[7337]: I0312 18:26:33.800749 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 12 18:26:34.493373 master-0 kubenswrapper[7337]: I0312 18:26:34.493247 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:26:34.493373 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:26:34.493373 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:26:34.493373 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:26:34.494443 master-0 kubenswrapper[7337]: I0312 18:26:34.493378 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:26:35.492985 master-0 kubenswrapper[7337]: I0312 18:26:35.492902 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:26:35.492985 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:26:35.492985 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:26:35.492985 master-0 kubenswrapper[7337]: healthz check failed Mar 12 
18:26:35.493346 master-0 kubenswrapper[7337]: I0312 18:26:35.493007 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:26:35.676088 master-0 kubenswrapper[7337]: I0312 18:26:35.676016 7337 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ljw8b container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body= Mar 12 18:26:35.676088 master-0 kubenswrapper[7337]: I0312 18:26:35.676070 7337 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Mar 12 18:26:35.676779 master-0 kubenswrapper[7337]: I0312 18:26:35.676110 7337 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:26:35.676779 master-0 kubenswrapper[7337]: I0312 18:26:35.676633 7337 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"fd76ce4ce8b52711507badb364936e780f2befa57cd25718299883f40359fa86"} pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Mar 12 18:26:35.676779 master-0 kubenswrapper[7337]: I0312 18:26:35.676676 7337 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" containerName="authentication-operator" containerID="cri-o://fd76ce4ce8b52711507badb364936e780f2befa57cd25718299883f40359fa86" gracePeriod=30 Mar 12 18:26:36.493025 master-0 kubenswrapper[7337]: I0312 18:26:36.492951 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:26:36.493025 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:26:36.493025 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:26:36.493025 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:26:36.493434 master-0 kubenswrapper[7337]: I0312 18:26:36.493052 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:26:37.148433 master-0 kubenswrapper[7337]: I0312 18:26:37.148327 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ljw8b_062f1b21-2ffc-47da-8334-427c3b2a1a90/authentication-operator/3.log" Mar 12 18:26:37.149325 master-0 kubenswrapper[7337]: I0312 18:26:37.149054 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ljw8b_062f1b21-2ffc-47da-8334-427c3b2a1a90/authentication-operator/2.log" Mar 12 18:26:37.149325 master-0 kubenswrapper[7337]: I0312 18:26:37.149122 7337 generic.go:334] "Generic (PLEG): container finished" podID="062f1b21-2ffc-47da-8334-427c3b2a1a90" containerID="fd76ce4ce8b52711507badb364936e780f2befa57cd25718299883f40359fa86" exitCode=255 Mar 12 
18:26:37.149325 master-0 kubenswrapper[7337]: I0312 18:26:37.149157 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" event={"ID":"062f1b21-2ffc-47da-8334-427c3b2a1a90","Type":"ContainerDied","Data":"fd76ce4ce8b52711507badb364936e780f2befa57cd25718299883f40359fa86"} Mar 12 18:26:37.149325 master-0 kubenswrapper[7337]: I0312 18:26:37.149187 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" event={"ID":"062f1b21-2ffc-47da-8334-427c3b2a1a90","Type":"ContainerStarted","Data":"0947e2aa600f02809bc0e80e74513829af30ca5ce847f92836ff0aa87ebf24ca"} Mar 12 18:26:37.149325 master-0 kubenswrapper[7337]: I0312 18:26:37.149204 7337 scope.go:117] "RemoveContainer" containerID="d487597f8c9bef7ba2d4e09c3b10fc797aeda33ac566bd3fb174bfbc58571eff" Mar 12 18:26:37.263716 master-0 kubenswrapper[7337]: I0312 18:26:37.263637 7337 status_manager.go:851] "Failed to get status for pod" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods authentication-operator-7c6989d6c4-ljw8b)" Mar 12 18:26:37.493111 master-0 kubenswrapper[7337]: I0312 18:26:37.492147 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:26:37.493111 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:26:37.493111 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:26:37.493111 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:26:37.493111 master-0 kubenswrapper[7337]: I0312 18:26:37.493036 7337 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:26:38.160397 master-0 kubenswrapper[7337]: I0312 18:26:38.160315 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ljw8b_062f1b21-2ffc-47da-8334-427c3b2a1a90/authentication-operator/3.log" Mar 12 18:26:38.492392 master-0 kubenswrapper[7337]: I0312 18:26:38.492245 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:26:38.492392 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:26:38.492392 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:26:38.492392 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:26:38.492392 master-0 kubenswrapper[7337]: I0312 18:26:38.492341 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:26:38.792654 master-0 kubenswrapper[7337]: I0312 18:26:38.792564 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 12 18:26:39.492432 master-0 kubenswrapper[7337]: I0312 18:26:39.492352 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:26:39.492432 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 
12 18:26:39.492432 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:26:39.492432 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:26:39.492432 master-0 kubenswrapper[7337]: I0312 18:26:39.492424 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:26:39.722490 master-0 kubenswrapper[7337]: I0312 18:26:39.722425 7337 scope.go:117] "RemoveContainer" containerID="3ee6889d81d43029dd6623714d782675e77b0ac4d47d44b5e698b3218f31c69c" Mar 12 18:26:39.722794 master-0 kubenswrapper[7337]: E0312 18:26:39.722748 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-2ltx9_openshift-cluster-storage-operator(bce831df-c604-4608-a24e-b14d62c5287a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" podUID="bce831df-c604-4608-a24e-b14d62c5287a" Mar 12 18:26:40.490008 master-0 kubenswrapper[7337]: I0312 18:26:40.489899 7337 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 18:26:40.490008 master-0 kubenswrapper[7337]: I0312 18:26:40.490006 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 18:26:40.493259 master-0 kubenswrapper[7337]: I0312 18:26:40.493198 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:26:40.493259 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:26:40.493259 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:26:40.493259 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:26:40.494176 master-0 kubenswrapper[7337]: I0312 18:26:40.493265 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:26:41.492251 master-0 kubenswrapper[7337]: I0312 18:26:41.492186 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:26:41.492251 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:26:41.492251 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:26:41.492251 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:26:41.492590 master-0 kubenswrapper[7337]: I0312 18:26:41.492270 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:26:42.492419 master-0 kubenswrapper[7337]: I0312 18:26:42.492369 7337 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:26:42.492419 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:26:42.492419 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:26:42.492419 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:26:42.493383 master-0 kubenswrapper[7337]: I0312 18:26:42.492444 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:26:43.493486 master-0 kubenswrapper[7337]: I0312 18:26:43.493360 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:26:43.493486 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:26:43.493486 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:26:43.493486 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:26:43.494665 master-0 kubenswrapper[7337]: I0312 18:26:43.493475 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:26:43.494665 master-0 kubenswrapper[7337]: I0312 18:26:43.493603 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:26:43.494665 master-0 kubenswrapper[7337]: I0312 18:26:43.494590 7337 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"769cd1e8b5824a316a70fa02fbd72a61b282feb96440b00bc150d9a2b430b6d3"} pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" containerMessage="Container router failed startup probe, will be restarted" Mar 12 18:26:43.494851 master-0 kubenswrapper[7337]: I0312 18:26:43.494686 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" containerID="cri-o://769cd1e8b5824a316a70fa02fbd72a61b282feb96440b00bc150d9a2b430b6d3" gracePeriod=3600 Mar 12 18:26:47.241255 master-0 kubenswrapper[7337]: I0312 18:26:47.241067 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-2psgb_e5fb0152-3efd-4000-bce3-fa90b75316ae/cluster-baremetal-operator/1.log" Mar 12 18:26:47.242377 master-0 kubenswrapper[7337]: I0312 18:26:47.242321 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-2psgb_e5fb0152-3efd-4000-bce3-fa90b75316ae/cluster-baremetal-operator/0.log" Mar 12 18:26:47.242508 master-0 kubenswrapper[7337]: I0312 18:26:47.242441 7337 generic.go:334] "Generic (PLEG): container finished" podID="e5fb0152-3efd-4000-bce3-fa90b75316ae" containerID="84549a6a4b2b7f4a99c232b78b53c64bb94831244f578e91d0ebfcc22961117f" exitCode=1 Mar 12 18:26:47.242670 master-0 kubenswrapper[7337]: I0312 18:26:47.242501 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" event={"ID":"e5fb0152-3efd-4000-bce3-fa90b75316ae","Type":"ContainerDied","Data":"84549a6a4b2b7f4a99c232b78b53c64bb94831244f578e91d0ebfcc22961117f"} Mar 12 18:26:47.242670 master-0 kubenswrapper[7337]: I0312 18:26:47.242634 7337 scope.go:117] "RemoveContainer" 
containerID="15d14268c6ae0aa2ad20f2093d09d878fa7d62076388ad39f3dddf0c18d45f03" Mar 12 18:26:47.243778 master-0 kubenswrapper[7337]: I0312 18:26:47.243700 7337 scope.go:117] "RemoveContainer" containerID="84549a6a4b2b7f4a99c232b78b53c64bb94831244f578e91d0ebfcc22961117f" Mar 12 18:26:47.244834 master-0 kubenswrapper[7337]: E0312 18:26:47.244302 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5cdb4c5598-2psgb_openshift-machine-api(e5fb0152-3efd-4000-bce3-fa90b75316ae)\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" podUID="e5fb0152-3efd-4000-bce3-fa90b75316ae" Mar 12 18:26:48.254711 master-0 kubenswrapper[7337]: I0312 18:26:48.254616 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-2psgb_e5fb0152-3efd-4000-bce3-fa90b75316ae/cluster-baremetal-operator/1.log" Mar 12 18:26:48.380502 master-0 kubenswrapper[7337]: I0312 18:26:48.380064 7337 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:34578->127.0.0.1:10357: read: connection reset by peer" start-of-body= Mar 12 18:26:48.380502 master-0 kubenswrapper[7337]: I0312 18:26:48.380128 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:34578->127.0.0.1:10357: read: connection reset by peer" Mar 12 18:26:48.380502 master-0 kubenswrapper[7337]: I0312 18:26:48.380185 7337 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:26:48.381007 master-0 kubenswrapper[7337]: I0312 18:26:48.380966 7337 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"67b90f2dc0521383a52970ef838becfa7a4fd4ded7c3c75b11d7b851c57c011b"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 12 18:26:48.381070 master-0 kubenswrapper[7337]: I0312 18:26:48.381055 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" containerID="cri-o://67b90f2dc0521383a52970ef838becfa7a4fd4ded7c3c75b11d7b851c57c011b" gracePeriod=30 Mar 12 18:26:48.389793 master-0 kubenswrapper[7337]: E0312 18:26:48.389623 7337 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c2b2c021e067d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:BackOff,Message:Back-off restarting failed container kube-scheduler in pod bootstrap-kube-scheduler-master-0_kube-system(a1a56802af72ce1aac6b5077f1695ac0),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:23:41.291210365 +0000 UTC m=+621.759811312,LastTimestamp:2026-03-12 18:23:41.291210365 +0000 UTC m=+621.759811312,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:26:49.263711 master-0 kubenswrapper[7337]: I0312 18:26:49.263623 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/cluster-policy-controller/2.log" Mar 12 18:26:49.264969 master-0 kubenswrapper[7337]: I0312 18:26:49.264039 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/cluster-policy-controller/1.log" Mar 12 18:26:49.265613 master-0 kubenswrapper[7337]: I0312 18:26:49.265577 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager/0.log" Mar 12 18:26:49.265758 master-0 kubenswrapper[7337]: I0312 18:26:49.265614 7337 generic.go:334] "Generic (PLEG): container finished" podID="39c441a05d91070efc538925475b0a44" containerID="67b90f2dc0521383a52970ef838becfa7a4fd4ded7c3c75b11d7b851c57c011b" exitCode=255 Mar 12 18:26:49.265758 master-0 kubenswrapper[7337]: I0312 18:26:49.265640 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"39c441a05d91070efc538925475b0a44","Type":"ContainerDied","Data":"67b90f2dc0521383a52970ef838becfa7a4fd4ded7c3c75b11d7b851c57c011b"} Mar 12 18:26:49.265758 master-0 kubenswrapper[7337]: I0312 18:26:49.265664 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"39c441a05d91070efc538925475b0a44","Type":"ContainerStarted","Data":"70a56fe639b2e29fe225557a46043eef2856d3d81e45b284d14769be0d5eb9f5"} Mar 12 18:26:49.265758 master-0 kubenswrapper[7337]: I0312 18:26:49.265682 7337 scope.go:117] "RemoveContainer" 
containerID="98178e315548a34bcbbf14893436a04d14e894141a4d4ad668fb7f628d400331" Mar 12 18:26:50.275673 master-0 kubenswrapper[7337]: I0312 18:26:50.275627 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/cluster-policy-controller/2.log" Mar 12 18:26:50.277717 master-0 kubenswrapper[7337]: I0312 18:26:50.277676 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager/0.log" Mar 12 18:26:50.578531 master-0 kubenswrapper[7337]: E0312 18:26:50.578451 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 18:26:51.722647 master-0 kubenswrapper[7337]: I0312 18:26:51.722577 7337 scope.go:117] "RemoveContainer" containerID="3ee6889d81d43029dd6623714d782675e77b0ac4d47d44b5e698b3218f31c69c" Mar 12 18:26:51.723187 master-0 kubenswrapper[7337]: E0312 18:26:51.722947 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-2ltx9_openshift-cluster-storage-operator(bce831df-c604-4608-a24e-b14d62c5287a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" podUID="bce831df-c604-4608-a24e-b14d62c5287a" Mar 12 18:26:55.007378 master-0 kubenswrapper[7337]: E0312 18:26:55.007279 7337 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 12 18:26:55.310917 
master-0 kubenswrapper[7337]: I0312 18:26:55.310848 7337 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="901fa277-f91f-4050-9bba-fc5a7d764d16" Mar 12 18:26:55.310917 master-0 kubenswrapper[7337]: I0312 18:26:55.310892 7337 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="901fa277-f91f-4050-9bba-fc5a7d764d16" Mar 12 18:26:57.489365 master-0 kubenswrapper[7337]: I0312 18:26:57.489211 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:26:57.490293 master-0 kubenswrapper[7337]: I0312 18:26:57.489883 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:27:00.490081 master-0 kubenswrapper[7337]: I0312 18:27:00.489941 7337 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 18:27:00.490081 master-0 kubenswrapper[7337]: I0312 18:27:00.490050 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 18:27:01.722467 master-0 kubenswrapper[7337]: I0312 18:27:01.722398 7337 scope.go:117] "RemoveContainer" containerID="84549a6a4b2b7f4a99c232b78b53c64bb94831244f578e91d0ebfcc22961117f" Mar 12 18:27:02.367134 master-0 kubenswrapper[7337]: I0312 18:27:02.367048 7337 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-2psgb_e5fb0152-3efd-4000-bce3-fa90b75316ae/cluster-baremetal-operator/1.log" Mar 12 18:27:02.367503 master-0 kubenswrapper[7337]: I0312 18:27:02.367445 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" event={"ID":"e5fb0152-3efd-4000-bce3-fa90b75316ae","Type":"ContainerStarted","Data":"a2b26c62b9f6c98c92beecd149d44e6763377e53417bf7236ed48cc7741bf7a7"} Mar 12 18:27:02.722596 master-0 kubenswrapper[7337]: I0312 18:27:02.722337 7337 scope.go:117] "RemoveContainer" containerID="3ee6889d81d43029dd6623714d782675e77b0ac4d47d44b5e698b3218f31c69c" Mar 12 18:27:02.723760 master-0 kubenswrapper[7337]: E0312 18:27:02.722747 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-2ltx9_openshift-cluster-storage-operator(bce831df-c604-4608-a24e-b14d62c5287a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" podUID="bce831df-c604-4608-a24e-b14d62c5287a" Mar 12 18:27:07.580336 master-0 kubenswrapper[7337]: E0312 18:27:07.580216 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 18:27:10.489895 master-0 kubenswrapper[7337]: I0312 18:27:10.489773 7337 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 18:27:10.489895 master-0 kubenswrapper[7337]: I0312 18:27:10.489887 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 18:27:14.723103 master-0 kubenswrapper[7337]: I0312 18:27:14.723036 7337 scope.go:117] "RemoveContainer" containerID="3ee6889d81d43029dd6623714d782675e77b0ac4d47d44b5e698b3218f31c69c" Mar 12 18:27:15.466897 master-0 kubenswrapper[7337]: I0312 18:27:15.466833 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-2ltx9_bce831df-c604-4608-a24e-b14d62c5287a/snapshot-controller/3.log" Mar 12 18:27:15.466897 master-0 kubenswrapper[7337]: I0312 18:27:15.466901 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" event={"ID":"bce831df-c604-4608-a24e-b14d62c5287a","Type":"ContainerStarted","Data":"4e8d7c61e395835ed934137e23d288387573ec27b30bf825dec0dc340c0e6b4b"} Mar 12 18:27:15.677371 master-0 kubenswrapper[7337]: I0312 18:27:15.677285 7337 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ljw8b container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body= Mar 12 18:27:15.677371 master-0 kubenswrapper[7337]: I0312 18:27:15.677343 7337 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" 
podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Mar 12 18:27:19.073656 master-0 kubenswrapper[7337]: I0312 18:27:19.073571 7337 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:35270->127.0.0.1:10357: read: connection reset by peer" start-of-body= Mar 12 18:27:19.073656 master-0 kubenswrapper[7337]: I0312 18:27:19.073626 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:35270->127.0.0.1:10357: read: connection reset by peer" Mar 12 18:27:19.073656 master-0 kubenswrapper[7337]: I0312 18:27:19.073673 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:27:19.075185 master-0 kubenswrapper[7337]: I0312 18:27:19.074205 7337 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"70a56fe639b2e29fe225557a46043eef2856d3d81e45b284d14769be0d5eb9f5"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 12 18:27:19.075185 master-0 kubenswrapper[7337]: I0312 18:27:19.074287 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" 
containerName="cluster-policy-controller" containerID="cri-o://70a56fe639b2e29fe225557a46043eef2856d3d81e45b284d14769be0d5eb9f5" gracePeriod=30 Mar 12 18:27:19.102799 master-0 kubenswrapper[7337]: E0312 18:27:19.102756 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(39c441a05d91070efc538925475b0a44)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" Mar 12 18:27:19.371106 master-0 kubenswrapper[7337]: I0312 18:27:19.370960 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/cluster-policy-controller/3.log" Mar 12 18:27:19.371807 master-0 kubenswrapper[7337]: I0312 18:27:19.371764 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/cluster-policy-controller/2.log" Mar 12 18:27:19.374067 master-0 kubenswrapper[7337]: I0312 18:27:19.374014 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager/0.log" Mar 12 18:27:19.374223 master-0 kubenswrapper[7337]: I0312 18:27:19.374097 7337 generic.go:334] "Generic (PLEG): container finished" podID="39c441a05d91070efc538925475b0a44" containerID="70a56fe639b2e29fe225557a46043eef2856d3d81e45b284d14769be0d5eb9f5" exitCode=255 Mar 12 18:27:19.374223 master-0 kubenswrapper[7337]: I0312 18:27:19.374161 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"39c441a05d91070efc538925475b0a44","Type":"ContainerDied","Data":"70a56fe639b2e29fe225557a46043eef2856d3d81e45b284d14769be0d5eb9f5"} Mar 12 18:27:19.374473 master-0 kubenswrapper[7337]: I0312 18:27:19.374261 7337 scope.go:117] "RemoveContainer" containerID="67b90f2dc0521383a52970ef838becfa7a4fd4ded7c3c75b11d7b851c57c011b" Mar 12 18:27:19.375405 master-0 kubenswrapper[7337]: I0312 18:27:19.375359 7337 scope.go:117] "RemoveContainer" containerID="70a56fe639b2e29fe225557a46043eef2856d3d81e45b284d14769be0d5eb9f5" Mar 12 18:27:19.375998 master-0 kubenswrapper[7337]: E0312 18:27:19.375891 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(39c441a05d91070efc538925475b0a44)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" Mar 12 18:27:20.384717 master-0 kubenswrapper[7337]: I0312 18:27:20.384643 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/cluster-policy-controller/3.log" Mar 12 18:27:20.387941 master-0 kubenswrapper[7337]: I0312 18:27:20.387896 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager/0.log" Mar 12 18:27:22.391882 master-0 kubenswrapper[7337]: E0312 18:27:22.391737 7337 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{authentication-operator-7c6989d6c4-ljw8b.189c2b2c0262d7c4 openshift-authentication-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication-operator,Name:authentication-operator-7c6989d6c4-ljw8b,UID:062f1b21-2ffc-47da-8334-427c3b2a1a90,APIVersion:v1,ResourceVersion:3753,FieldPath:spec.containers{authentication-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:23:41.295720388 +0000 UTC m=+621.764321345,LastTimestamp:2026-03-12 18:23:41.295720388 +0000 UTC m=+621.764321345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:27:24.588558 master-0 kubenswrapper[7337]: E0312 18:27:24.585462 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io master-0)" interval="7s" Mar 12 18:27:25.676575 master-0 kubenswrapper[7337]: I0312 18:27:25.676449 7337 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ljw8b container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body= Mar 12 18:27:25.677565 master-0 kubenswrapper[7337]: I0312 18:27:25.676619 7337 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Mar 12 18:27:27.489484 master-0 kubenswrapper[7337]: I0312 18:27:27.489392 7337 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:27:27.490434 master-0 kubenswrapper[7337]: I0312 18:27:27.490376 7337 scope.go:117] "RemoveContainer" containerID="70a56fe639b2e29fe225557a46043eef2856d3d81e45b284d14769be0d5eb9f5" Mar 12 18:27:27.490897 master-0 kubenswrapper[7337]: E0312 18:27:27.490791 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(39c441a05d91070efc538925475b0a44)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" Mar 12 18:27:29.313096 master-0 kubenswrapper[7337]: E0312 18:27:29.312996 7337 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 12 18:27:30.471462 master-0 kubenswrapper[7337]: I0312 18:27:30.471287 7337 generic.go:334] "Generic (PLEG): container finished" podID="518ffff8-8119-41be-8b76-ce49d5751254" containerID="769cd1e8b5824a316a70fa02fbd72a61b282feb96440b00bc150d9a2b430b6d3" exitCode=0 Mar 12 18:27:30.471462 master-0 kubenswrapper[7337]: I0312 18:27:30.471435 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" event={"ID":"518ffff8-8119-41be-8b76-ce49d5751254","Type":"ContainerDied","Data":"769cd1e8b5824a316a70fa02fbd72a61b282feb96440b00bc150d9a2b430b6d3"} Mar 12 18:27:30.471462 master-0 kubenswrapper[7337]: I0312 18:27:30.471478 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" 
event={"ID":"518ffff8-8119-41be-8b76-ce49d5751254","Type":"ContainerStarted","Data":"2789b77883f53d7f30aaec4bb48caff5d717174d7bb20c3d97c3b9573ac6987f"} Mar 12 18:27:30.472550 master-0 kubenswrapper[7337]: I0312 18:27:30.471504 7337 scope.go:117] "RemoveContainer" containerID="2c667ded264eac2ef91dbced4e1d0c451c7fcbbd73ca41017073728ca29d6478" Mar 12 18:27:30.490111 master-0 kubenswrapper[7337]: I0312 18:27:30.490045 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:27:30.493203 master-0 kubenswrapper[7337]: I0312 18:27:30.493144 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:30.493203 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:30.493203 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:30.493203 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:30.493586 master-0 kubenswrapper[7337]: I0312 18:27:30.493199 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:31.493348 master-0 kubenswrapper[7337]: I0312 18:27:31.493268 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:31.493348 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:31.493348 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:31.493348 master-0 kubenswrapper[7337]: healthz check 
failed Mar 12 18:27:31.493348 master-0 kubenswrapper[7337]: I0312 18:27:31.493329 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:32.493386 master-0 kubenswrapper[7337]: I0312 18:27:32.493287 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:32.493386 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:32.493386 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:32.493386 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:32.493386 master-0 kubenswrapper[7337]: I0312 18:27:32.493370 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:33.492990 master-0 kubenswrapper[7337]: I0312 18:27:33.492905 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:33.492990 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:33.492990 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:33.492990 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:33.494157 master-0 kubenswrapper[7337]: I0312 18:27:33.493003 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" 
podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:34.492872 master-0 kubenswrapper[7337]: I0312 18:27:34.492793 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:34.492872 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:34.492872 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:34.492872 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:34.493360 master-0 kubenswrapper[7337]: I0312 18:27:34.492875 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:35.493412 master-0 kubenswrapper[7337]: I0312 18:27:35.493344 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:35.493412 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:35.493412 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:35.493412 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:35.494334 master-0 kubenswrapper[7337]: I0312 18:27:35.493437 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:35.676357 master-0 kubenswrapper[7337]: I0312 18:27:35.676254 7337 
patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ljw8b container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body= Mar 12 18:27:35.676357 master-0 kubenswrapper[7337]: I0312 18:27:35.676343 7337 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Mar 12 18:27:35.676842 master-0 kubenswrapper[7337]: I0312 18:27:35.676422 7337 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:27:35.677443 master-0 kubenswrapper[7337]: I0312 18:27:35.677373 7337 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"0947e2aa600f02809bc0e80e74513829af30ca5ce847f92836ff0aa87ebf24ca"} pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Mar 12 18:27:35.677658 master-0 kubenswrapper[7337]: I0312 18:27:35.677457 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" containerName="authentication-operator" containerID="cri-o://0947e2aa600f02809bc0e80e74513829af30ca5ce847f92836ff0aa87ebf24ca" gracePeriod=30 Mar 12 18:27:36.094960 master-0 kubenswrapper[7337]: E0312 18:27:36.094902 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-ljw8b_openshift-authentication-operator(062f1b21-2ffc-47da-8334-427c3b2a1a90)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" Mar 12 18:27:36.493472 master-0 kubenswrapper[7337]: I0312 18:27:36.493402 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:36.493472 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:36.493472 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:36.493472 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:36.493472 master-0 kubenswrapper[7337]: I0312 18:27:36.493470 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:36.526046 master-0 kubenswrapper[7337]: I0312 18:27:36.525952 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ljw8b_062f1b21-2ffc-47da-8334-427c3b2a1a90/authentication-operator/4.log" Mar 12 18:27:36.526848 master-0 kubenswrapper[7337]: I0312 18:27:36.526788 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ljw8b_062f1b21-2ffc-47da-8334-427c3b2a1a90/authentication-operator/3.log" Mar 12 18:27:36.526975 master-0 kubenswrapper[7337]: I0312 18:27:36.526880 7337 generic.go:334] "Generic (PLEG): container finished" 
podID="062f1b21-2ffc-47da-8334-427c3b2a1a90" containerID="0947e2aa600f02809bc0e80e74513829af30ca5ce847f92836ff0aa87ebf24ca" exitCode=255 Mar 12 18:27:36.526975 master-0 kubenswrapper[7337]: I0312 18:27:36.526930 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" event={"ID":"062f1b21-2ffc-47da-8334-427c3b2a1a90","Type":"ContainerDied","Data":"0947e2aa600f02809bc0e80e74513829af30ca5ce847f92836ff0aa87ebf24ca"} Mar 12 18:27:36.527213 master-0 kubenswrapper[7337]: I0312 18:27:36.526981 7337 scope.go:117] "RemoveContainer" containerID="fd76ce4ce8b52711507badb364936e780f2befa57cd25718299883f40359fa86" Mar 12 18:27:36.527797 master-0 kubenswrapper[7337]: I0312 18:27:36.527726 7337 scope.go:117] "RemoveContainer" containerID="0947e2aa600f02809bc0e80e74513829af30ca5ce847f92836ff0aa87ebf24ca" Mar 12 18:27:36.528250 master-0 kubenswrapper[7337]: E0312 18:27:36.528130 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-ljw8b_openshift-authentication-operator(062f1b21-2ffc-47da-8334-427c3b2a1a90)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" Mar 12 18:27:37.265390 master-0 kubenswrapper[7337]: I0312 18:27:37.265296 7337 status_manager.go:851] "Failed to get status for pod" podUID="8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe" pod="openshift-network-node-identity/network-node-identity-hqrqt" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods network-node-identity-hqrqt)" Mar 12 18:27:37.490062 master-0 kubenswrapper[7337]: I0312 18:27:37.489816 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:27:37.492974 master-0 kubenswrapper[7337]: I0312 18:27:37.492881 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:37.492974 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:37.492974 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:37.492974 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:37.493428 master-0 kubenswrapper[7337]: I0312 18:27:37.492972 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:37.542356 master-0 kubenswrapper[7337]: I0312 18:27:37.542298 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ljw8b_062f1b21-2ffc-47da-8334-427c3b2a1a90/authentication-operator/4.log" Mar 12 18:27:38.493341 master-0 kubenswrapper[7337]: I0312 18:27:38.493177 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:38.493341 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:38.493341 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:38.493341 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:38.493341 master-0 kubenswrapper[7337]: I0312 18:27:38.493261 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" 
podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:38.551714 master-0 kubenswrapper[7337]: I0312 18:27:38.551659 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4527l_d94dc349-c5cb-4f12-8e48-867030af4981/ingress-operator/4.log" Mar 12 18:27:38.552483 master-0 kubenswrapper[7337]: I0312 18:27:38.552446 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4527l_d94dc349-c5cb-4f12-8e48-867030af4981/ingress-operator/3.log" Mar 12 18:27:38.553166 master-0 kubenswrapper[7337]: I0312 18:27:38.553122 7337 generic.go:334] "Generic (PLEG): container finished" podID="d94dc349-c5cb-4f12-8e48-867030af4981" containerID="37c9fcab8917043972cb8da48f9b3a66fa98e29cb384d4ab82bdb89b8dd2d452" exitCode=1 Mar 12 18:27:38.553219 master-0 kubenswrapper[7337]: I0312 18:27:38.553195 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" event={"ID":"d94dc349-c5cb-4f12-8e48-867030af4981","Type":"ContainerDied","Data":"37c9fcab8917043972cb8da48f9b3a66fa98e29cb384d4ab82bdb89b8dd2d452"} Mar 12 18:27:38.553271 master-0 kubenswrapper[7337]: I0312 18:27:38.553248 7337 scope.go:117] "RemoveContainer" containerID="6c80cd8acac8db8cb197069003bd675e1b445a06ed49a0673a80b100538ac223" Mar 12 18:27:38.553985 master-0 kubenswrapper[7337]: I0312 18:27:38.553955 7337 scope.go:117] "RemoveContainer" containerID="37c9fcab8917043972cb8da48f9b3a66fa98e29cb384d4ab82bdb89b8dd2d452" Mar 12 18:27:38.554435 master-0 kubenswrapper[7337]: E0312 18:27:38.554400 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator 
pod=ingress-operator-677db989d6-4527l_openshift-ingress-operator(d94dc349-c5cb-4f12-8e48-867030af4981)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" podUID="d94dc349-c5cb-4f12-8e48-867030af4981" Mar 12 18:27:39.493132 master-0 kubenswrapper[7337]: I0312 18:27:39.493051 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:39.493132 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:39.493132 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:39.493132 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:39.493460 master-0 kubenswrapper[7337]: I0312 18:27:39.493220 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:39.563201 master-0 kubenswrapper[7337]: I0312 18:27:39.563147 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4527l_d94dc349-c5cb-4f12-8e48-867030af4981/ingress-operator/4.log" Mar 12 18:27:40.493333 master-0 kubenswrapper[7337]: I0312 18:27:40.493211 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:40.493333 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:40.493333 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:40.493333 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:40.493767 master-0 kubenswrapper[7337]: I0312 
18:27:40.493390 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:41.492283 master-0 kubenswrapper[7337]: I0312 18:27:41.492198 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:41.492283 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:41.492283 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:41.492283 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:41.492283 master-0 kubenswrapper[7337]: I0312 18:27:41.492261 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:41.587374 master-0 kubenswrapper[7337]: E0312 18:27:41.587278 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 18:27:42.492614 master-0 kubenswrapper[7337]: I0312 18:27:42.492556 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:42.492614 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:42.492614 master-0 kubenswrapper[7337]: 
[+]process-running ok Mar 12 18:27:42.492614 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:42.493151 master-0 kubenswrapper[7337]: I0312 18:27:42.492622 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:42.722825 master-0 kubenswrapper[7337]: I0312 18:27:42.722769 7337 scope.go:117] "RemoveContainer" containerID="70a56fe639b2e29fe225557a46043eef2856d3d81e45b284d14769be0d5eb9f5" Mar 12 18:27:42.723038 master-0 kubenswrapper[7337]: E0312 18:27:42.722986 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(39c441a05d91070efc538925475b0a44)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" Mar 12 18:27:43.493469 master-0 kubenswrapper[7337]: I0312 18:27:43.493373 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:43.493469 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:43.493469 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:43.493469 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:43.493469 master-0 kubenswrapper[7337]: I0312 18:27:43.493454 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 12 18:27:44.493120 master-0 kubenswrapper[7337]: I0312 18:27:44.493031 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:44.493120 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:44.493120 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:44.493120 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:44.494235 master-0 kubenswrapper[7337]: I0312 18:27:44.493143 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:45.492473 master-0 kubenswrapper[7337]: I0312 18:27:45.492386 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:45.492473 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:45.492473 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:45.492473 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:45.493071 master-0 kubenswrapper[7337]: I0312 18:27:45.492486 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:46.492970 master-0 kubenswrapper[7337]: I0312 18:27:46.492888 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:46.492970 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:46.492970 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:46.492970 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:46.494007 master-0 kubenswrapper[7337]: I0312 18:27:46.492979 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:47.491566 master-0 kubenswrapper[7337]: I0312 18:27:47.491484 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:47.491566 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:47.491566 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:47.491566 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:47.491900 master-0 kubenswrapper[7337]: I0312 18:27:47.491595 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:47.722859 master-0 kubenswrapper[7337]: I0312 18:27:47.722766 7337 scope.go:117] "RemoveContainer" containerID="0947e2aa600f02809bc0e80e74513829af30ca5ce847f92836ff0aa87ebf24ca" Mar 12 18:27:47.723881 master-0 kubenswrapper[7337]: E0312 18:27:47.723036 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 
1m20s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-ljw8b_openshift-authentication-operator(062f1b21-2ffc-47da-8334-427c3b2a1a90)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" Mar 12 18:27:48.493676 master-0 kubenswrapper[7337]: I0312 18:27:48.493589 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:48.493676 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:48.493676 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:48.493676 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:48.494269 master-0 kubenswrapper[7337]: I0312 18:27:48.493713 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:49.492484 master-0 kubenswrapper[7337]: I0312 18:27:49.492376 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:49.492484 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:49.492484 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:49.492484 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:49.493558 master-0 kubenswrapper[7337]: I0312 18:27:49.492501 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:50.492386 master-0 kubenswrapper[7337]: I0312 18:27:50.492300 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:50.492386 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:50.492386 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:50.492386 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:50.493463 master-0 kubenswrapper[7337]: I0312 18:27:50.492413 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:51.492699 master-0 kubenswrapper[7337]: I0312 18:27:51.492618 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:51.492699 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:51.492699 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:51.492699 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:51.493536 master-0 kubenswrapper[7337]: I0312 18:27:51.492732 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:52.493158 master-0 kubenswrapper[7337]: I0312 18:27:52.493084 7337 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:52.493158 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:52.493158 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:52.493158 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:52.493158 master-0 kubenswrapper[7337]: I0312 18:27:52.493151 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:53.492650 master-0 kubenswrapper[7337]: I0312 18:27:53.492582 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:53.492650 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:53.492650 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:53.492650 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:53.493109 master-0 kubenswrapper[7337]: I0312 18:27:53.492683 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:53.723316 master-0 kubenswrapper[7337]: I0312 18:27:53.723237 7337 scope.go:117] "RemoveContainer" containerID="37c9fcab8917043972cb8da48f9b3a66fa98e29cb384d4ab82bdb89b8dd2d452" Mar 12 18:27:53.724212 master-0 kubenswrapper[7337]: E0312 18:27:53.723694 7337 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4527l_openshift-ingress-operator(d94dc349-c5cb-4f12-8e48-867030af4981)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" podUID="d94dc349-c5cb-4f12-8e48-867030af4981" Mar 12 18:27:54.494482 master-0 kubenswrapper[7337]: I0312 18:27:54.494370 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:54.494482 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:54.494482 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:54.494482 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:54.494835 master-0 kubenswrapper[7337]: I0312 18:27:54.494499 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:55.493118 master-0 kubenswrapper[7337]: I0312 18:27:55.493009 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:55.493118 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:55.493118 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:55.493118 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:55.494727 master-0 kubenswrapper[7337]: I0312 18:27:55.493125 7337 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:56.394350 master-0 kubenswrapper[7337]: E0312 18:27:56.394119 7337 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{authentication-operator-7c6989d6c4-ljw8b.189c2aa014a783ed openshift-authentication-operator 5611 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication-operator,Name:authentication-operator-7c6989d6c4-ljw8b,UID:062f1b21-2ffc-47da-8334-427c3b2a1a90,APIVersion:v1,ResourceVersion:3753,FieldPath:spec.containers{authentication-operator},},Reason:Created,Message:Created container: authentication-operator,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:13:40 +0000 UTC,LastTimestamp:2026-03-12 18:23:41.437874334 +0000 UTC m=+621.906475301,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:27:56.493477 master-0 kubenswrapper[7337]: I0312 18:27:56.493386 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:56.493477 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:56.493477 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:56.493477 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:56.494428 master-0 kubenswrapper[7337]: I0312 18:27:56.493498 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" 
podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:57.492648 master-0 kubenswrapper[7337]: I0312 18:27:57.492587 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:57.492648 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:57.492648 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:57.492648 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:57.493225 master-0 kubenswrapper[7337]: I0312 18:27:57.493179 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:57.722982 master-0 kubenswrapper[7337]: I0312 18:27:57.722905 7337 scope.go:117] "RemoveContainer" containerID="70a56fe639b2e29fe225557a46043eef2856d3d81e45b284d14769be0d5eb9f5" Mar 12 18:27:57.723797 master-0 kubenswrapper[7337]: E0312 18:27:57.723386 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(39c441a05d91070efc538925475b0a44)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" Mar 12 18:27:58.492891 master-0 kubenswrapper[7337]: I0312 18:27:58.492825 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 
500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:58.492891 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:58.492891 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:58.492891 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:58.493302 master-0 kubenswrapper[7337]: I0312 18:27:58.492916 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:27:58.589018 master-0 kubenswrapper[7337]: E0312 18:27:58.588943 7337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 18:27:58.722325 master-0 kubenswrapper[7337]: I0312 18:27:58.722255 7337 scope.go:117] "RemoveContainer" containerID="0947e2aa600f02809bc0e80e74513829af30ca5ce847f92836ff0aa87ebf24ca" Mar 12 18:27:58.722752 master-0 kubenswrapper[7337]: E0312 18:27:58.722700 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-ljw8b_openshift-authentication-operator(062f1b21-2ffc-47da-8334-427c3b2a1a90)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" Mar 12 18:27:59.493277 master-0 kubenswrapper[7337]: I0312 18:27:59.493190 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 12 18:27:59.493277 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:27:59.493277 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:27:59.493277 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:27:59.494370 master-0 kubenswrapper[7337]: I0312 18:27:59.493285 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:00.494925 master-0 kubenswrapper[7337]: I0312 18:28:00.494787 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:00.494925 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:00.494925 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:00.494925 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:00.494925 master-0 kubenswrapper[7337]: I0312 18:28:00.494881 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:00.757136 master-0 kubenswrapper[7337]: I0312 18:28:00.756682 7337 generic.go:334] "Generic (PLEG): container finished" podID="45aa4887-c913-4ece-ae34-fcde33832621" containerID="713977d47dfecb905c7cc3c14de2a72254744fe363e6f7198ff24aaf349daf7b" exitCode=0 Mar 12 18:28:00.771666 master-0 kubenswrapper[7337]: I0312 18:28:00.756764 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-649db" 
event={"ID":"45aa4887-c913-4ece-ae34-fcde33832621","Type":"ContainerDied","Data":"713977d47dfecb905c7cc3c14de2a72254744fe363e6f7198ff24aaf349daf7b"} Mar 12 18:28:00.771666 master-0 kubenswrapper[7337]: I0312 18:28:00.762272 7337 generic.go:334] "Generic (PLEG): container finished" podID="604044f4-9b0b-4747-827d-843f3cfa7077" containerID="6b224901428e2ddbe12d7888c29aa663990f99e54eaab842f708f9d3489fa570" exitCode=0 Mar 12 18:28:00.771666 master-0 kubenswrapper[7337]: I0312 18:28:00.762311 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" event={"ID":"604044f4-9b0b-4747-827d-843f3cfa7077","Type":"ContainerDied","Data":"6b224901428e2ddbe12d7888c29aa663990f99e54eaab842f708f9d3489fa570"} Mar 12 18:28:00.771666 master-0 kubenswrapper[7337]: I0312 18:28:00.764163 7337 scope.go:117] "RemoveContainer" containerID="713977d47dfecb905c7cc3c14de2a72254744fe363e6f7198ff24aaf349daf7b" Mar 12 18:28:00.771666 master-0 kubenswrapper[7337]: I0312 18:28:00.765021 7337 scope.go:117] "RemoveContainer" containerID="6b224901428e2ddbe12d7888c29aa663990f99e54eaab842f708f9d3489fa570" Mar 12 18:28:01.493024 master-0 kubenswrapper[7337]: I0312 18:28:01.492924 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:01.493024 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:01.493024 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:01.493024 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:01.493502 master-0 kubenswrapper[7337]: I0312 18:28:01.493014 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 12 18:28:01.535783 master-0 kubenswrapper[7337]: I0312 18:28:01.535703 7337 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tjp2j container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 12 18:28:01.536462 master-0 kubenswrapper[7337]: I0312 18:28:01.535796 7337 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" podUID="37cd9c0a-697e-4e67-932b-b331ff77c8c0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 12 18:28:01.775364 master-0 kubenswrapper[7337]: I0312 18:28:01.775188 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-649db" event={"ID":"45aa4887-c913-4ece-ae34-fcde33832621","Type":"ContainerStarted","Data":"b9047eb501f6a3ec111a7d2216c87adaede6d274ffec24372f3f5d4900526090"} Mar 12 18:28:01.778164 master-0 kubenswrapper[7337]: I0312 18:28:01.778061 7337 generic.go:334] "Generic (PLEG): container finished" podID="e27d2693-1a06-473e-a126-614b939bae33" containerID="a3dc2b7095b6c8ddb045125ce99fe767799ec6ab7d744f0b94e5707b821e152b" exitCode=0 Mar 12 18:28:01.778274 master-0 kubenswrapper[7337]: I0312 18:28:01.778213 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" event={"ID":"e27d2693-1a06-473e-a126-614b939bae33","Type":"ContainerDied","Data":"a3dc2b7095b6c8ddb045125ce99fe767799ec6ab7d744f0b94e5707b821e152b"} Mar 12 18:28:01.779124 master-0 kubenswrapper[7337]: I0312 18:28:01.779068 7337 scope.go:117] "RemoveContainer" 
containerID="a3dc2b7095b6c8ddb045125ce99fe767799ec6ab7d744f0b94e5707b821e152b" Mar 12 18:28:01.781164 master-0 kubenswrapper[7337]: I0312 18:28:01.781098 7337 generic.go:334] "Generic (PLEG): container finished" podID="8fe3d699-023e-4de7-8d42-6c9d8a5e68f3" containerID="7007abd6bd87f278095a5c5bea805876ca0e2532537842c0b1266ddd70ce3cd3" exitCode=0 Mar 12 18:28:01.781411 master-0 kubenswrapper[7337]: I0312 18:28:01.781185 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-769nb" event={"ID":"8fe3d699-023e-4de7-8d42-6c9d8a5e68f3","Type":"ContainerDied","Data":"7007abd6bd87f278095a5c5bea805876ca0e2532537842c0b1266ddd70ce3cd3"} Mar 12 18:28:01.781789 master-0 kubenswrapper[7337]: I0312 18:28:01.781690 7337 scope.go:117] "RemoveContainer" containerID="7007abd6bd87f278095a5c5bea805876ca0e2532537842c0b1266ddd70ce3cd3" Mar 12 18:28:01.784505 master-0 kubenswrapper[7337]: I0312 18:28:01.784263 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-vksss_b6d288e3-8e73-44d2-874d-64c6c98dd991/network-operator/1.log" Mar 12 18:28:01.784505 master-0 kubenswrapper[7337]: I0312 18:28:01.784332 7337 generic.go:334] "Generic (PLEG): container finished" podID="b6d288e3-8e73-44d2-874d-64c6c98dd991" containerID="61391e64ce8e20710a16e47ab514517643e782dc9c713a84a5cefd62cff8c6ad" exitCode=0 Mar 12 18:28:01.784505 master-0 kubenswrapper[7337]: I0312 18:28:01.784367 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" event={"ID":"b6d288e3-8e73-44d2-874d-64c6c98dd991","Type":"ContainerDied","Data":"61391e64ce8e20710a16e47ab514517643e782dc9c713a84a5cefd62cff8c6ad"} Mar 12 18:28:01.784505 master-0 kubenswrapper[7337]: I0312 18:28:01.784416 7337 scope.go:117] "RemoveContainer" containerID="6c849a8a5d45f345d2ba87802bb0e5ab5b7b8e621fde5662c4d921f30c496726" Mar 12 18:28:01.785434 master-0 kubenswrapper[7337]: I0312 
18:28:01.785307 7337 scope.go:117] "RemoveContainer" containerID="61391e64ce8e20710a16e47ab514517643e782dc9c713a84a5cefd62cff8c6ad" Mar 12 18:28:01.787360 master-0 kubenswrapper[7337]: I0312 18:28:01.787305 7337 generic.go:334] "Generic (PLEG): container finished" podID="e22c7035-4b7a-48cb-9abb-db277b387842" containerID="a987d23905b82090084aa8d4e8ab172632e1e1833011544d548639c8ff18c467" exitCode=0 Mar 12 18:28:01.787360 master-0 kubenswrapper[7337]: I0312 18:28:01.787340 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" event={"ID":"e22c7035-4b7a-48cb-9abb-db277b387842","Type":"ContainerDied","Data":"a987d23905b82090084aa8d4e8ab172632e1e1833011544d548639c8ff18c467"} Mar 12 18:28:01.788037 master-0 kubenswrapper[7337]: I0312 18:28:01.787984 7337 scope.go:117] "RemoveContainer" containerID="a987d23905b82090084aa8d4e8ab172632e1e1833011544d548639c8ff18c467" Mar 12 18:28:01.791426 master-0 kubenswrapper[7337]: I0312 18:28:01.791134 7337 generic.go:334] "Generic (PLEG): container finished" podID="37cd9c0a-697e-4e67-932b-b331ff77c8c0" containerID="37af6c94f0de0a2163a4ac4e6ab6085ad4d71da179fad764b86f087db1506c46" exitCode=0 Mar 12 18:28:01.791426 master-0 kubenswrapper[7337]: I0312 18:28:01.791208 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" event={"ID":"37cd9c0a-697e-4e67-932b-b331ff77c8c0","Type":"ContainerDied","Data":"37af6c94f0de0a2163a4ac4e6ab6085ad4d71da179fad764b86f087db1506c46"} Mar 12 18:28:01.791830 master-0 kubenswrapper[7337]: I0312 18:28:01.791649 7337 scope.go:117] "RemoveContainer" containerID="37af6c94f0de0a2163a4ac4e6ab6085ad4d71da179fad764b86f087db1506c46" Mar 12 18:28:01.793701 master-0 kubenswrapper[7337]: I0312 18:28:01.793632 7337 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77899cf6d-6nvn4_f3a2cda2-b70f-4128-a1be-48503f5aad6d/cluster-olm-operator/0.log" Mar 12 18:28:01.794827 master-0 kubenswrapper[7337]: I0312 18:28:01.794787 7337 generic.go:334] "Generic (PLEG): container finished" podID="f3a2cda2-b70f-4128-a1be-48503f5aad6d" containerID="14be846643126f0f684988fbee828e3ae28a2a3ed42495436ab25923fcd90c1e" exitCode=0 Mar 12 18:28:01.795018 master-0 kubenswrapper[7337]: I0312 18:28:01.794860 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" event={"ID":"f3a2cda2-b70f-4128-a1be-48503f5aad6d","Type":"ContainerDied","Data":"14be846643126f0f684988fbee828e3ae28a2a3ed42495436ab25923fcd90c1e"} Mar 12 18:28:01.795777 master-0 kubenswrapper[7337]: I0312 18:28:01.795345 7337 scope.go:117] "RemoveContainer" containerID="14be846643126f0f684988fbee828e3ae28a2a3ed42495436ab25923fcd90c1e" Mar 12 18:28:01.799757 master-0 kubenswrapper[7337]: I0312 18:28:01.799665 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" event={"ID":"604044f4-9b0b-4747-827d-843f3cfa7077","Type":"ContainerStarted","Data":"bf93ee9a8ea6130e16b71f217363eff1a7368e844872d3c2a0a6b626e96e64ea"} Mar 12 18:28:01.804568 master-0 kubenswrapper[7337]: I0312 18:28:01.804542 7337 generic.go:334] "Generic (PLEG): container finished" podID="d4ae1240-e04e-48e9-88df-9f1a53508da7" containerID="3bfdf8caec49323e35f87883171b05e3d1f44df1c027fc9a9977c37c9de794d7" exitCode=0 Mar 12 18:28:01.804653 master-0 kubenswrapper[7337]: I0312 18:28:01.804609 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k" event={"ID":"d4ae1240-e04e-48e9-88df-9f1a53508da7","Type":"ContainerDied","Data":"3bfdf8caec49323e35f87883171b05e3d1f44df1c027fc9a9977c37c9de794d7"} Mar 12 18:28:01.805019 master-0 
kubenswrapper[7337]: I0312 18:28:01.804951 7337 scope.go:117] "RemoveContainer" containerID="3bfdf8caec49323e35f87883171b05e3d1f44df1c027fc9a9977c37c9de794d7" Mar 12 18:28:01.807957 master-0 kubenswrapper[7337]: I0312 18:28:01.807900 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-lqpbp_306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed/cluster-node-tuning-operator/0.log" Mar 12 18:28:01.808091 master-0 kubenswrapper[7337]: I0312 18:28:01.807971 7337 generic.go:334] "Generic (PLEG): container finished" podID="306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed" containerID="bf6736acde3a261d2e1c8eec8be75f38ab871967029a8a7ea9d5bc1635fc75f5" exitCode=1 Mar 12 18:28:01.808091 master-0 kubenswrapper[7337]: I0312 18:28:01.808048 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" event={"ID":"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed","Type":"ContainerDied","Data":"bf6736acde3a261d2e1c8eec8be75f38ab871967029a8a7ea9d5bc1635fc75f5"} Mar 12 18:28:01.808870 master-0 kubenswrapper[7337]: I0312 18:28:01.808821 7337 scope.go:117] "RemoveContainer" containerID="bf6736acde3a261d2e1c8eec8be75f38ab871967029a8a7ea9d5bc1635fc75f5" Mar 12 18:28:01.812151 master-0 kubenswrapper[7337]: I0312 18:28:01.812106 7337 generic.go:334] "Generic (PLEG): container finished" podID="a1e2340b-ebca-40de-b1e0-8133999cd860" containerID="9de5a3b93eb3f1136dd34751bc0d652341fdfc646209d52ecaff219c3bdfc30b" exitCode=0 Mar 12 18:28:01.812440 master-0 kubenswrapper[7337]: I0312 18:28:01.812221 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" event={"ID":"a1e2340b-ebca-40de-b1e0-8133999cd860","Type":"ContainerDied","Data":"9de5a3b93eb3f1136dd34751bc0d652341fdfc646209d52ecaff219c3bdfc30b"} Mar 12 18:28:01.813310 master-0 kubenswrapper[7337]: 
I0312 18:28:01.813067 7337 scope.go:117] "RemoveContainer" containerID="9de5a3b93eb3f1136dd34751bc0d652341fdfc646209d52ecaff219c3bdfc30b" Mar 12 18:28:01.814563 master-0 kubenswrapper[7337]: I0312 18:28:01.814487 7337 generic.go:334] "Generic (PLEG): container finished" podID="e697746f-fb9e-4d10-ab61-33c68e62cc0d" containerID="4322cdc97f321d2418571282b2d0a02572a0fe1f4c6c9ffe9fbcda76c46d48dc" exitCode=0 Mar 12 18:28:01.814643 master-0 kubenswrapper[7337]: I0312 18:28:01.814564 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" event={"ID":"e697746f-fb9e-4d10-ab61-33c68e62cc0d","Type":"ContainerDied","Data":"4322cdc97f321d2418571282b2d0a02572a0fe1f4c6c9ffe9fbcda76c46d48dc"} Mar 12 18:28:01.815097 master-0 kubenswrapper[7337]: I0312 18:28:01.815066 7337 scope.go:117] "RemoveContainer" containerID="4322cdc97f321d2418571282b2d0a02572a0fe1f4c6c9ffe9fbcda76c46d48dc" Mar 12 18:28:01.816968 master-0 kubenswrapper[7337]: I0312 18:28:01.816862 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-kwv7s_51eb717b-d11f-4bc3-8df6-deb51d5889f3/package-server-manager/0.log" Mar 12 18:28:01.817846 master-0 kubenswrapper[7337]: I0312 18:28:01.817422 7337 generic.go:334] "Generic (PLEG): container finished" podID="51eb717b-d11f-4bc3-8df6-deb51d5889f3" containerID="33470f162304f6a1c732da622d08f9a2cb10dfebe7eb3e1cc79d0a55f3c66c95" exitCode=1 Mar 12 18:28:01.817846 master-0 kubenswrapper[7337]: I0312 18:28:01.817482 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" event={"ID":"51eb717b-d11f-4bc3-8df6-deb51d5889f3","Type":"ContainerDied","Data":"33470f162304f6a1c732da622d08f9a2cb10dfebe7eb3e1cc79d0a55f3c66c95"} Mar 12 18:28:01.819906 master-0 kubenswrapper[7337]: I0312 18:28:01.819773 7337 scope.go:117] "RemoveContainer" 
containerID="33470f162304f6a1c732da622d08f9a2cb10dfebe7eb3e1cc79d0a55f3c66c95" Mar 12 18:28:01.821037 master-0 kubenswrapper[7337]: I0312 18:28:01.820978 7337 generic.go:334] "Generic (PLEG): container finished" podID="4048e453-a983-4708-89b6-a81af0067e29" containerID="570936a0a36edb0fda6b55c99e7f566dfd145b7b28da0dcae1b91148af7c1a36" exitCode=0 Mar 12 18:28:01.821037 master-0 kubenswrapper[7337]: I0312 18:28:01.821023 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" event={"ID":"4048e453-a983-4708-89b6-a81af0067e29","Type":"ContainerDied","Data":"570936a0a36edb0fda6b55c99e7f566dfd145b7b28da0dcae1b91148af7c1a36"} Mar 12 18:28:01.833223 master-0 kubenswrapper[7337]: I0312 18:28:01.826468 7337 scope.go:117] "RemoveContainer" containerID="570936a0a36edb0fda6b55c99e7f566dfd145b7b28da0dcae1b91148af7c1a36" Mar 12 18:28:01.840698 master-0 kubenswrapper[7337]: I0312 18:28:01.840654 7337 scope.go:117] "RemoveContainer" containerID="3100d6853d6653605e1a09e2cf985a9ecb63a1450916f3d98d5854fad367310a" Mar 12 18:28:01.955498 master-0 kubenswrapper[7337]: I0312 18:28:01.955375 7337 scope.go:117] "RemoveContainer" containerID="3a34f5e0d15dd4e7c330e2c8919e65deba96f9d77b56fa794a4877221990e20a" Mar 12 18:28:02.040690 master-0 kubenswrapper[7337]: I0312 18:28:02.040561 7337 scope.go:117] "RemoveContainer" containerID="336f9bff957643e2b1614f5b9ab58d3286fac81af162d3e42ef2ab143bd1a53e" Mar 12 18:28:02.101591 master-0 kubenswrapper[7337]: I0312 18:28:02.101548 7337 scope.go:117] "RemoveContainer" containerID="fff98590531dfb71359f592b09852a158d9cf8cc7fff20e92644173e6e6819dc" Mar 12 18:28:02.150588 master-0 kubenswrapper[7337]: I0312 18:28:02.150531 7337 scope.go:117] "RemoveContainer" containerID="2a197e2fe83ed2e384dda0d8770ef6e8d98b56d89ae78066b100f526847a5d4c" Mar 12 18:28:02.491500 master-0 kubenswrapper[7337]: I0312 18:28:02.491416 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:02.491500 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:02.491500 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:02.491500 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:02.491500 master-0 kubenswrapper[7337]: I0312 18:28:02.491495 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:02.775120 master-0 kubenswrapper[7337]: I0312 18:28:02.775007 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:28:02.775120 master-0 kubenswrapper[7337]: I0312 18:28:02.775097 7337 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:28:02.832860 master-0 kubenswrapper[7337]: I0312 18:28:02.832771 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-2psgb_e5fb0152-3efd-4000-bce3-fa90b75316ae/cluster-baremetal-operator/2.log" Mar 12 18:28:02.833652 master-0 kubenswrapper[7337]: I0312 18:28:02.833600 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-2psgb_e5fb0152-3efd-4000-bce3-fa90b75316ae/cluster-baremetal-operator/1.log" Mar 12 18:28:02.834352 master-0 kubenswrapper[7337]: I0312 18:28:02.834302 7337 generic.go:334] "Generic (PLEG): container finished" podID="e5fb0152-3efd-4000-bce3-fa90b75316ae" 
containerID="a2b26c62b9f6c98c92beecd149d44e6763377e53417bf7236ed48cc7741bf7a7" exitCode=1 Mar 12 18:28:02.835158 master-0 kubenswrapper[7337]: I0312 18:28:02.834366 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" event={"ID":"e5fb0152-3efd-4000-bce3-fa90b75316ae","Type":"ContainerDied","Data":"a2b26c62b9f6c98c92beecd149d44e6763377e53417bf7236ed48cc7741bf7a7"} Mar 12 18:28:02.835274 master-0 kubenswrapper[7337]: I0312 18:28:02.835196 7337 scope.go:117] "RemoveContainer" containerID="84549a6a4b2b7f4a99c232b78b53c64bb94831244f578e91d0ebfcc22961117f" Mar 12 18:28:02.835724 master-0 kubenswrapper[7337]: I0312 18:28:02.835685 7337 scope.go:117] "RemoveContainer" containerID="a2b26c62b9f6c98c92beecd149d44e6763377e53417bf7236ed48cc7741bf7a7" Mar 12 18:28:02.836148 master-0 kubenswrapper[7337]: E0312 18:28:02.836108 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5cdb4c5598-2psgb_openshift-machine-api(e5fb0152-3efd-4000-bce3-fa90b75316ae)\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" podUID="e5fb0152-3efd-4000-bce3-fa90b75316ae" Mar 12 18:28:02.838770 master-0 kubenswrapper[7337]: I0312 18:28:02.838704 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" event={"ID":"4048e453-a983-4708-89b6-a81af0067e29","Type":"ContainerStarted","Data":"8d4adbf9a9521c707194f12fbc5510570431ed08dce83d2d430e491e53e24697"} Mar 12 18:28:02.841672 master-0 kubenswrapper[7337]: I0312 18:28:02.841611 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" 
event={"ID":"e22c7035-4b7a-48cb-9abb-db277b387842","Type":"ContainerStarted","Data":"30daf02b748c51c62b9db99c2683ac7f65ebf75ff1bebfddf3e03d89a7e3d6a3"} Mar 12 18:28:02.845650 master-0 kubenswrapper[7337]: I0312 18:28:02.845584 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k" event={"ID":"d4ae1240-e04e-48e9-88df-9f1a53508da7","Type":"ContainerStarted","Data":"e48ff673bb00e46f898821f3f18799490b796ace924dc7ba5ea42ba6dd7546b4"} Mar 12 18:28:02.849667 master-0 kubenswrapper[7337]: I0312 18:28:02.849594 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" event={"ID":"e27d2693-1a06-473e-a126-614b939bae33","Type":"ContainerStarted","Data":"715de161614ec1ade5ad098ae303653912e79fce994a488539e724d05ccc0c72"} Mar 12 18:28:02.850561 master-0 kubenswrapper[7337]: I0312 18:28:02.850478 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" Mar 12 18:28:02.857867 master-0 kubenswrapper[7337]: I0312 18:28:02.857800 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" event={"ID":"f3a2cda2-b70f-4128-a1be-48503f5aad6d","Type":"ContainerStarted","Data":"9a3db52f16e072309a34a4bec539e9366a1992a8f5fc437f0586e9504fff082e"} Mar 12 18:28:02.860565 master-0 kubenswrapper[7337]: I0312 18:28:02.860457 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-769nb" event={"ID":"8fe3d699-023e-4de7-8d42-6c9d8a5e68f3","Type":"ContainerStarted","Data":"23c630aadc78bb71de4a4670e264cf95732881da9cc5c28f5a83e1183338a6de"} Mar 12 18:28:02.861992 master-0 kubenswrapper[7337]: I0312 18:28:02.861947 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" Mar 12 18:28:02.864047 master-0 kubenswrapper[7337]: I0312 18:28:02.863995 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-kwv7s_51eb717b-d11f-4bc3-8df6-deb51d5889f3/package-server-manager/0.log" Mar 12 18:28:02.864699 master-0 kubenswrapper[7337]: I0312 18:28:02.864635 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" event={"ID":"51eb717b-d11f-4bc3-8df6-deb51d5889f3","Type":"ContainerStarted","Data":"7f3683672c78e4047eff3cfbcbbd67b61971dfcf8e2050fbce3a0d7a22f5c09c"} Mar 12 18:28:02.864951 master-0 kubenswrapper[7337]: I0312 18:28:02.864906 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:28:02.868073 master-0 kubenswrapper[7337]: I0312 18:28:02.868032 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-lqpbp_306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed/cluster-node-tuning-operator/0.log" Mar 12 18:28:02.868186 master-0 kubenswrapper[7337]: I0312 18:28:02.868140 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" event={"ID":"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed","Type":"ContainerStarted","Data":"7229de6d620faf108a1c1b6c2ee69c9d4eaeeeb1fbca31bf87409e274ac030b8"} Mar 12 18:28:02.871128 master-0 kubenswrapper[7337]: I0312 18:28:02.871052 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" event={"ID":"37cd9c0a-697e-4e67-932b-b331ff77c8c0","Type":"ContainerStarted","Data":"efd0f8bb0ebca8e4985d1f13c9ba9d356c0dcc91d47381baf756c886f3430aea"} Mar 12 
18:28:02.871582 master-0 kubenswrapper[7337]: I0312 18:28:02.871486 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:28:02.873866 master-0 kubenswrapper[7337]: I0312 18:28:02.873783 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" event={"ID":"e697746f-fb9e-4d10-ab61-33c68e62cc0d","Type":"ContainerStarted","Data":"ce39385a61ee21776f250eaaf4f1307080da9b7e22687b1e02ba98ccd5f687f5"} Mar 12 18:28:02.876624 master-0 kubenswrapper[7337]: I0312 18:28:02.876566 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" event={"ID":"a1e2340b-ebca-40de-b1e0-8133999cd860","Type":"ContainerStarted","Data":"2326e49d6e2a7cd05b9e7b3893eb352d54395b1fee44af620c4f86d19d972521"} Mar 12 18:28:02.880913 master-0 kubenswrapper[7337]: I0312 18:28:02.880864 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" event={"ID":"b6d288e3-8e73-44d2-874d-64c6c98dd991","Type":"ContainerStarted","Data":"3bf7fc62517fc347635184f5d3f5c2b6b66183268213602c39eb96a5f8b9654a"} Mar 12 18:28:03.496825 master-0 kubenswrapper[7337]: I0312 18:28:03.496749 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:03.496825 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:03.496825 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:03.496825 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:03.496825 master-0 kubenswrapper[7337]: I0312 18:28:03.496812 7337 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:03.888097 master-0 kubenswrapper[7337]: I0312 18:28:03.888048 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-2psgb_e5fb0152-3efd-4000-bce3-fa90b75316ae/cluster-baremetal-operator/2.log" Mar 12 18:28:04.492264 master-0 kubenswrapper[7337]: I0312 18:28:04.492206 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:04.492264 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:04.492264 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:04.492264 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:04.492590 master-0 kubenswrapper[7337]: I0312 18:28:04.492274 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:04.722463 master-0 kubenswrapper[7337]: I0312 18:28:04.722383 7337 scope.go:117] "RemoveContainer" containerID="37c9fcab8917043972cb8da48f9b3a66fa98e29cb384d4ab82bdb89b8dd2d452" Mar 12 18:28:04.722953 master-0 kubenswrapper[7337]: E0312 18:28:04.722907 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4527l_openshift-ingress-operator(d94dc349-c5cb-4f12-8e48-867030af4981)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" 
podUID="d94dc349-c5cb-4f12-8e48-867030af4981" Mar 12 18:28:05.492032 master-0 kubenswrapper[7337]: I0312 18:28:05.491949 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:05.492032 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:05.492032 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:05.492032 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:05.492927 master-0 kubenswrapper[7337]: I0312 18:28:05.492050 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:06.351506 master-0 kubenswrapper[7337]: I0312 18:28:06.351424 7337 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tjp2j container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 12 18:28:06.351778 master-0 kubenswrapper[7337]: I0312 18:28:06.351510 7337 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" podUID="37cd9c0a-697e-4e67-932b-b331ff77c8c0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 12 18:28:06.492181 master-0 kubenswrapper[7337]: I0312 18:28:06.492110 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:06.492181 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:06.492181 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:06.492181 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:06.492181 master-0 kubenswrapper[7337]: I0312 18:28:06.492176 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:07.492135 master-0 kubenswrapper[7337]: I0312 18:28:07.492057 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:07.492135 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:07.492135 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:07.492135 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:07.492135 master-0 kubenswrapper[7337]: I0312 18:28:07.492127 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:07.535953 master-0 kubenswrapper[7337]: I0312 18:28:07.535863 7337 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tjp2j container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 12 18:28:07.536195 master-0 kubenswrapper[7337]: I0312 18:28:07.535959 7337 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" podUID="37cd9c0a-697e-4e67-932b-b331ff77c8c0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 12 18:28:08.492389 master-0 kubenswrapper[7337]: I0312 18:28:08.492302 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:08.492389 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:08.492389 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:08.492389 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:08.493330 master-0 kubenswrapper[7337]: I0312 18:28:08.492409 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:09.352283 master-0 kubenswrapper[7337]: I0312 18:28:09.352179 7337 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tjp2j container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 12 18:28:09.352283 master-0 kubenswrapper[7337]: I0312 18:28:09.352260 7337 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" podUID="37cd9c0a-697e-4e67-932b-b331ff77c8c0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 
10.128.0.26:8443: connect: connection refused" Mar 12 18:28:09.493296 master-0 kubenswrapper[7337]: I0312 18:28:09.493205 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:09.493296 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:09.493296 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:09.493296 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:09.494278 master-0 kubenswrapper[7337]: I0312 18:28:09.493313 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:09.723097 master-0 kubenswrapper[7337]: I0312 18:28:09.722948 7337 scope.go:117] "RemoveContainer" containerID="70a56fe639b2e29fe225557a46043eef2856d3d81e45b284d14769be0d5eb9f5" Mar 12 18:28:10.492858 master-0 kubenswrapper[7337]: I0312 18:28:10.492789 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:10.492858 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:10.492858 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:10.492858 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:10.493115 master-0 kubenswrapper[7337]: I0312 18:28:10.492888 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Mar 12 18:28:10.535454 master-0 kubenswrapper[7337]: I0312 18:28:10.535403 7337 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tjp2j container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 12 18:28:10.535878 master-0 kubenswrapper[7337]: I0312 18:28:10.535461 7337 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" podUID="37cd9c0a-697e-4e67-932b-b331ff77c8c0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 12 18:28:10.535878 master-0 kubenswrapper[7337]: I0312 18:28:10.535546 7337 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:28:10.536335 master-0 kubenswrapper[7337]: I0312 18:28:10.536289 7337 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"efd0f8bb0ebca8e4985d1f13c9ba9d356c0dcc91d47381baf756c886f3430aea"} pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 12 18:28:10.536395 master-0 kubenswrapper[7337]: I0312 18:28:10.536358 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" podUID="37cd9c0a-697e-4e67-932b-b331ff77c8c0" containerName="openshift-config-operator" containerID="cri-o://efd0f8bb0ebca8e4985d1f13c9ba9d356c0dcc91d47381baf756c886f3430aea" gracePeriod=30 Mar 12 18:28:10.536484 master-0 kubenswrapper[7337]: I0312 
18:28:10.536412 7337 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tjp2j container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 12 18:28:10.536628 master-0 kubenswrapper[7337]: I0312 18:28:10.536565 7337 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" podUID="37cd9c0a-697e-4e67-932b-b331ff77c8c0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 12 18:28:10.722953 master-0 kubenswrapper[7337]: I0312 18:28:10.722786 7337 scope.go:117] "RemoveContainer" containerID="0947e2aa600f02809bc0e80e74513829af30ca5ce847f92836ff0aa87ebf24ca" Mar 12 18:28:10.723204 master-0 kubenswrapper[7337]: E0312 18:28:10.723153 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-ljw8b_openshift-authentication-operator(062f1b21-2ffc-47da-8334-427c3b2a1a90)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" Mar 12 18:28:10.944009 master-0 kubenswrapper[7337]: I0312 18:28:10.943954 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/cluster-policy-controller/3.log" Mar 12 18:28:10.946072 master-0 kubenswrapper[7337]: I0312 18:28:10.946032 7337 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager/0.log" Mar 12 18:28:10.946219 master-0 kubenswrapper[7337]: I0312 18:28:10.946109 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"39c441a05d91070efc538925475b0a44","Type":"ContainerStarted","Data":"6013ae8778b6f3db082ecdee07bf998643391f13699e5ddf7a85c9b9ddf833c3"} Mar 12 18:28:11.492382 master-0 kubenswrapper[7337]: I0312 18:28:11.492271 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:11.492382 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:11.492382 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:11.492382 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:11.492382 master-0 kubenswrapper[7337]: I0312 18:28:11.492365 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:11.956551 master-0 kubenswrapper[7337]: I0312 18:28:11.956445 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-tjp2j_37cd9c0a-697e-4e67-932b-b331ff77c8c0/openshift-config-operator/2.log" Mar 12 18:28:11.957864 master-0 kubenswrapper[7337]: I0312 18:28:11.957807 7337 generic.go:334] "Generic (PLEG): container finished" podID="37cd9c0a-697e-4e67-932b-b331ff77c8c0" containerID="efd0f8bb0ebca8e4985d1f13c9ba9d356c0dcc91d47381baf756c886f3430aea" exitCode=255 Mar 12 18:28:11.957981 master-0 kubenswrapper[7337]: I0312 
18:28:11.957861 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" event={"ID":"37cd9c0a-697e-4e67-932b-b331ff77c8c0","Type":"ContainerDied","Data":"efd0f8bb0ebca8e4985d1f13c9ba9d356c0dcc91d47381baf756c886f3430aea"} Mar 12 18:28:11.957981 master-0 kubenswrapper[7337]: I0312 18:28:11.957902 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" event={"ID":"37cd9c0a-697e-4e67-932b-b331ff77c8c0","Type":"ContainerStarted","Data":"5c2909604b564565c130e11a26734fb24530df302ef3e8d19b07ce431f8e00a1"} Mar 12 18:28:11.957981 master-0 kubenswrapper[7337]: I0312 18:28:11.957927 7337 scope.go:117] "RemoveContainer" containerID="37af6c94f0de0a2163a4ac4e6ab6085ad4d71da179fad764b86f087db1506c46" Mar 12 18:28:11.958436 master-0 kubenswrapper[7337]: I0312 18:28:11.958387 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:28:12.493392 master-0 kubenswrapper[7337]: I0312 18:28:12.493293 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:12.493392 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:12.493392 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:12.493392 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:12.493872 master-0 kubenswrapper[7337]: I0312 18:28:12.493393 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:12.969203 master-0 
kubenswrapper[7337]: I0312 18:28:12.969119 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-tjp2j_37cd9c0a-697e-4e67-932b-b331ff77c8c0/openshift-config-operator/2.log" Mar 12 18:28:13.493569 master-0 kubenswrapper[7337]: I0312 18:28:13.493450 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:13.493569 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:13.493569 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:13.493569 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:13.494049 master-0 kubenswrapper[7337]: I0312 18:28:13.493588 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:13.722358 master-0 kubenswrapper[7337]: I0312 18:28:13.722272 7337 scope.go:117] "RemoveContainer" containerID="a2b26c62b9f6c98c92beecd149d44e6763377e53417bf7236ed48cc7741bf7a7" Mar 12 18:28:13.722861 master-0 kubenswrapper[7337]: E0312 18:28:13.722807 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5cdb4c5598-2psgb_openshift-machine-api(e5fb0152-3efd-4000-bce3-fa90b75316ae)\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" podUID="e5fb0152-3efd-4000-bce3-fa90b75316ae" Mar 12 18:28:14.492505 master-0 kubenswrapper[7337]: I0312 18:28:14.492425 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:14.492505 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:14.492505 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:14.492505 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:14.493650 master-0 kubenswrapper[7337]: I0312 18:28:14.492592 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:14.659549 master-0 kubenswrapper[7337]: I0312 18:28:14.659429 7337 patch_prober.go:28] interesting pod/etcd-operator-5884b9cd56-bfq7b container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body= Mar 12 18:28:14.659549 master-0 kubenswrapper[7337]: I0312 18:28:14.659508 7337 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" podUID="e697746f-fb9e-4d10-ab61-33c68e62cc0d" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" Mar 12 18:28:15.352036 master-0 kubenswrapper[7337]: I0312 18:28:15.351914 7337 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tjp2j container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 12 18:28:15.352347 master-0 kubenswrapper[7337]: I0312 18:28:15.352071 7337 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" podUID="37cd9c0a-697e-4e67-932b-b331ff77c8c0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 12 18:28:15.492990 master-0 kubenswrapper[7337]: I0312 18:28:15.492886 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:15.492990 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:15.492990 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:15.492990 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:15.493856 master-0 kubenswrapper[7337]: I0312 18:28:15.493026 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:15.927786 master-0 kubenswrapper[7337]: I0312 18:28:15.927754 7337 scope.go:117] "RemoveContainer" containerID="37c9fcab8917043972cb8da48f9b3a66fa98e29cb384d4ab82bdb89b8dd2d452" Mar 12 18:28:15.928385 master-0 kubenswrapper[7337]: E0312 18:28:15.928365 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4527l_openshift-ingress-operator(d94dc349-c5cb-4f12-8e48-867030af4981)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" podUID="d94dc349-c5cb-4f12-8e48-867030af4981" Mar 12 18:28:16.499553 master-0 kubenswrapper[7337]: I0312 18:28:16.497543 7337 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:16.499553 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:16.499553 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:16.499553 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:16.499553 master-0 kubenswrapper[7337]: I0312 18:28:16.497615 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:17.489567 master-0 kubenswrapper[7337]: I0312 18:28:17.489429 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:28:17.489567 master-0 kubenswrapper[7337]: I0312 18:28:17.489537 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:28:17.496783 master-0 kubenswrapper[7337]: I0312 18:28:17.496685 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:17.496783 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:17.496783 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:17.496783 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:17.497012 master-0 kubenswrapper[7337]: I0312 18:28:17.496820 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" 
podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:17.497312 master-0 kubenswrapper[7337]: I0312 18:28:17.497260 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:28:18.354450 master-0 kubenswrapper[7337]: I0312 18:28:18.354340 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:28:18.493960 master-0 kubenswrapper[7337]: I0312 18:28:18.493884 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:18.493960 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:18.493960 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:18.493960 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:18.493960 master-0 kubenswrapper[7337]: I0312 18:28:18.493948 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:19.192922 master-0 kubenswrapper[7337]: I0312 18:28:19.192849 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 12 18:28:19.193243 master-0 kubenswrapper[7337]: E0312 18:28:19.193222 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2bbd04e-d147-4343-9e5d-300e42de9dbb" containerName="installer" Mar 12 18:28:19.193364 master-0 kubenswrapper[7337]: I0312 18:28:19.193245 7337 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e2bbd04e-d147-4343-9e5d-300e42de9dbb" containerName="installer" Mar 12 18:28:19.193364 master-0 kubenswrapper[7337]: E0312 18:28:19.193297 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f63924-b198-4954-ba14-5c48e8830ec0" containerName="installer" Mar 12 18:28:19.193364 master-0 kubenswrapper[7337]: I0312 18:28:19.193311 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f63924-b198-4954-ba14-5c48e8830ec0" containerName="installer" Mar 12 18:28:19.193364 master-0 kubenswrapper[7337]: E0312 18:28:19.193340 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30102cc9-45f8-46f8-bb34-eec48fdb297d" containerName="installer" Mar 12 18:28:19.193364 master-0 kubenswrapper[7337]: I0312 18:28:19.193352 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="30102cc9-45f8-46f8-bb34-eec48fdb297d" containerName="installer" Mar 12 18:28:19.193798 master-0 kubenswrapper[7337]: I0312 18:28:19.193578 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f63924-b198-4954-ba14-5c48e8830ec0" containerName="installer" Mar 12 18:28:19.193798 master-0 kubenswrapper[7337]: I0312 18:28:19.193609 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="30102cc9-45f8-46f8-bb34-eec48fdb297d" containerName="installer" Mar 12 18:28:19.193798 master-0 kubenswrapper[7337]: I0312 18:28:19.193643 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2bbd04e-d147-4343-9e5d-300e42de9dbb" containerName="installer" Mar 12 18:28:19.194321 master-0 kubenswrapper[7337]: I0312 18:28:19.194263 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 18:28:19.196237 master-0 kubenswrapper[7337]: I0312 18:28:19.196107 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 12 18:28:19.197320 master-0 kubenswrapper[7337]: I0312 18:28:19.197272 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-tr8hr" Mar 12 18:28:19.201767 master-0 kubenswrapper[7337]: I0312 18:28:19.201703 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 12 18:28:19.281261 master-0 kubenswrapper[7337]: I0312 18:28:19.281196 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50322fdb-6d3f-4237-92d2-a170e2071de5-kube-api-access\") pod \"installer-3-master-0\" (UID: \"50322fdb-6d3f-4237-92d2-a170e2071de5\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 18:28:19.281261 master-0 kubenswrapper[7337]: I0312 18:28:19.281259 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50322fdb-6d3f-4237-92d2-a170e2071de5-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"50322fdb-6d3f-4237-92d2-a170e2071de5\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 18:28:19.281572 master-0 kubenswrapper[7337]: I0312 18:28:19.281312 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50322fdb-6d3f-4237-92d2-a170e2071de5-var-lock\") pod \"installer-3-master-0\" (UID: \"50322fdb-6d3f-4237-92d2-a170e2071de5\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 18:28:19.382499 master-0 
kubenswrapper[7337]: I0312 18:28:19.382441 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50322fdb-6d3f-4237-92d2-a170e2071de5-kube-api-access\") pod \"installer-3-master-0\" (UID: \"50322fdb-6d3f-4237-92d2-a170e2071de5\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 18:28:19.383131 master-0 kubenswrapper[7337]: I0312 18:28:19.382501 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50322fdb-6d3f-4237-92d2-a170e2071de5-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"50322fdb-6d3f-4237-92d2-a170e2071de5\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 18:28:19.383131 master-0 kubenswrapper[7337]: I0312 18:28:19.382598 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50322fdb-6d3f-4237-92d2-a170e2071de5-var-lock\") pod \"installer-3-master-0\" (UID: \"50322fdb-6d3f-4237-92d2-a170e2071de5\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 18:28:19.383131 master-0 kubenswrapper[7337]: I0312 18:28:19.382666 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50322fdb-6d3f-4237-92d2-a170e2071de5-var-lock\") pod \"installer-3-master-0\" (UID: \"50322fdb-6d3f-4237-92d2-a170e2071de5\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 18:28:19.383131 master-0 kubenswrapper[7337]: I0312 18:28:19.382665 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50322fdb-6d3f-4237-92d2-a170e2071de5-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"50322fdb-6d3f-4237-92d2-a170e2071de5\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 18:28:19.410220 master-0 
kubenswrapper[7337]: I0312 18:28:19.410167 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50322fdb-6d3f-4237-92d2-a170e2071de5-kube-api-access\") pod \"installer-3-master-0\" (UID: \"50322fdb-6d3f-4237-92d2-a170e2071de5\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 18:28:19.495616 master-0 kubenswrapper[7337]: I0312 18:28:19.491587 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:19.495616 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:19.495616 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:19.495616 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:19.495616 master-0 kubenswrapper[7337]: I0312 18:28:19.491656 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:19.517623 master-0 kubenswrapper[7337]: I0312 18:28:19.517414 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 18:28:19.529543 master-0 kubenswrapper[7337]: I0312 18:28:19.526136 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b55d98459-sr4hk"] Mar 12 18:28:19.529543 master-0 kubenswrapper[7337]: I0312 18:28:19.526384 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" podUID="30c5dc4b-f1c8-4773-b961-985740fcc503" containerName="controller-manager" containerID="cri-o://c5312d3b71b96a4fb9ea85e4c97ffeef667430d9cd85a43bb5055d9732684c96" gracePeriod=30 Mar 12 18:28:19.558252 master-0 kubenswrapper[7337]: I0312 18:28:19.558202 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd"] Mar 12 18:28:19.558503 master-0 kubenswrapper[7337]: I0312 18:28:19.558469 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" podUID="e27d2693-1a06-473e-a126-614b939bae33" containerName="route-controller-manager" containerID="cri-o://715de161614ec1ade5ad098ae303653912e79fce994a488539e724d05ccc0c72" gracePeriod=30 Mar 12 18:28:19.954575 master-0 kubenswrapper[7337]: I0312 18:28:19.953613 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 12 18:28:19.967767 master-0 kubenswrapper[7337]: W0312 18:28:19.967722 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod50322fdb_6d3f_4237_92d2_a170e2071de5.slice/crio-9b35a0992276a5a57f41cc10a07b53753228078f2cd4dbc9d5d05061bb670327 WatchSource:0}: Error finding container 9b35a0992276a5a57f41cc10a07b53753228078f2cd4dbc9d5d05061bb670327: Status 404 returned error can't find the container with id 
9b35a0992276a5a57f41cc10a07b53753228078f2cd4dbc9d5d05061bb670327
Mar 12 18:28:19.993240 master-0 kubenswrapper[7337]: I0312 18:28:19.993208 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk"
Mar 12 18:28:20.041538 master-0 kubenswrapper[7337]: I0312 18:28:20.038276 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/cluster-policy-controller/3.log"
Mar 12 18:28:20.041538 master-0 kubenswrapper[7337]: I0312 18:28:20.038493 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd"
Mar 12 18:28:20.041538 master-0 kubenswrapper[7337]: I0312 18:28:20.039794 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager-cert-syncer/0.log"
Mar 12 18:28:20.043999 master-0 kubenswrapper[7337]: I0312 18:28:20.043966 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager/0.log"
Mar 12 18:28:20.044089 master-0 kubenswrapper[7337]: I0312 18:28:20.044018 7337 generic.go:334] "Generic (PLEG): container finished" podID="39c441a05d91070efc538925475b0a44" containerID="40bb332af0befdec702043170fe44c9cb61f64fd323636de64adc0352f5c7576" exitCode=1
Mar 12 18:28:20.044138 master-0 kubenswrapper[7337]: I0312 18:28:20.044119 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"39c441a05d91070efc538925475b0a44","Type":"ContainerDied","Data":"40bb332af0befdec702043170fe44c9cb61f64fd323636de64adc0352f5c7576"}
Mar 12 18:28:20.045002 master-0 kubenswrapper[7337]: I0312 18:28:20.044778 7337 scope.go:117] "RemoveContainer" containerID="40bb332af0befdec702043170fe44c9cb61f64fd323636de64adc0352f5c7576"
Mar 12 18:28:20.059413 master-0 kubenswrapper[7337]: I0312 18:28:20.059219 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 18:28:20.063558 master-0 kubenswrapper[7337]: I0312 18:28:20.063256 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk"
Mar 12 18:28:20.063558 master-0 kubenswrapper[7337]: I0312 18:28:20.063310 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" event={"ID":"30c5dc4b-f1c8-4773-b961-985740fcc503","Type":"ContainerDied","Data":"c5312d3b71b96a4fb9ea85e4c97ffeef667430d9cd85a43bb5055d9732684c96"}
Mar 12 18:28:20.063558 master-0 kubenswrapper[7337]: I0312 18:28:20.063378 7337 scope.go:117] "RemoveContainer" containerID="c5312d3b71b96a4fb9ea85e4c97ffeef667430d9cd85a43bb5055d9732684c96"
Mar 12 18:28:20.063558 master-0 kubenswrapper[7337]: I0312 18:28:20.063166 7337 generic.go:334] "Generic (PLEG): container finished" podID="30c5dc4b-f1c8-4773-b961-985740fcc503" containerID="c5312d3b71b96a4fb9ea85e4c97ffeef667430d9cd85a43bb5055d9732684c96" exitCode=0
Mar 12 18:28:20.063739 master-0 kubenswrapper[7337]: I0312 18:28:20.063589 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b55d98459-sr4hk" event={"ID":"30c5dc4b-f1c8-4773-b961-985740fcc503","Type":"ContainerDied","Data":"25f85e056d06f9c263ff08ea4e6565ed3acaba0d2d09deb206fc0df16bc25d83"}
Mar 12 18:28:20.084192 master-0 kubenswrapper[7337]: I0312 18:28:20.084168 7337 scope.go:117] "RemoveContainer" containerID="4c8710948dd5b2bc4491d4be8e860204494bc7d0e3f8f5ee9d0d17c5e1a20b86"
Mar 12 18:28:20.085671 master-0 kubenswrapper[7337]: I0312 18:28:20.085649 7337 generic.go:334] "Generic (PLEG): container finished" podID="e27d2693-1a06-473e-a126-614b939bae33" containerID="715de161614ec1ade5ad098ae303653912e79fce994a488539e724d05ccc0c72" exitCode=0
Mar 12 18:28:20.085795 master-0 kubenswrapper[7337]: I0312 18:28:20.085779 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" event={"ID":"e27d2693-1a06-473e-a126-614b939bae33","Type":"ContainerDied","Data":"715de161614ec1ade5ad098ae303653912e79fce994a488539e724d05ccc0c72"}
Mar 12 18:28:20.085877 master-0 kubenswrapper[7337]: I0312 18:28:20.085864 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd" event={"ID":"e27d2693-1a06-473e-a126-614b939bae33","Type":"ContainerDied","Data":"ecd8c1c23b06ddd0380989b80e906934916e1b0c7ecc136590e1c93fd774ae5b"}
Mar 12 18:28:20.085937 master-0 kubenswrapper[7337]: I0312 18:28:20.085866 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd"
Mar 12 18:28:20.095826 master-0 kubenswrapper[7337]: I0312 18:28:20.095799 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"50322fdb-6d3f-4237-92d2-a170e2071de5","Type":"ContainerStarted","Data":"9b35a0992276a5a57f41cc10a07b53753228078f2cd4dbc9d5d05061bb670327"}
Mar 12 18:28:20.113814 master-0 kubenswrapper[7337]: I0312 18:28:20.113724 7337 scope.go:117] "RemoveContainer" containerID="4c8710948dd5b2bc4491d4be8e860204494bc7d0e3f8f5ee9d0d17c5e1a20b86"
Mar 12 18:28:20.130012 master-0 kubenswrapper[7337]: E0312 18:28:20.129970 7337 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_controller-manager_controller-manager-5b55d98459-sr4hk_openshift-controller-manager_30c5dc4b-f1c8-4773-b961-985740fcc503_1 in pod sandbox 25f85e056d06f9c263ff08ea4e6565ed3acaba0d2d09deb206fc0df16bc25d83 from index: no such id: '4c8710948dd5b2bc4491d4be8e860204494bc7d0e3f8f5ee9d0d17c5e1a20b86'" containerID="4c8710948dd5b2bc4491d4be8e860204494bc7d0e3f8f5ee9d0d17c5e1a20b86"
Mar 12 18:28:20.130313 master-0 kubenswrapper[7337]: I0312 18:28:20.130245 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c8710948dd5b2bc4491d4be8e860204494bc7d0e3f8f5ee9d0d17c5e1a20b86"} err="rpc error: code = Unknown desc = failed to delete container k8s_controller-manager_controller-manager-5b55d98459-sr4hk_openshift-controller-manager_30c5dc4b-f1c8-4773-b961-985740fcc503_1 in pod sandbox 25f85e056d06f9c263ff08ea4e6565ed3acaba0d2d09deb206fc0df16bc25d83 from index: no such id: '4c8710948dd5b2bc4491d4be8e860204494bc7d0e3f8f5ee9d0d17c5e1a20b86'"
Mar 12 18:28:20.130422 master-0 kubenswrapper[7337]: I0312 18:28:20.130410 7337 scope.go:117] "RemoveContainer" containerID="c5312d3b71b96a4fb9ea85e4c97ffeef667430d9cd85a43bb5055d9732684c96"
Mar 12 18:28:20.130642 master-0 kubenswrapper[7337]: I0312 18:28:20.130630 7337 scope.go:117] "RemoveContainer" containerID="a3dc2b7095b6c8ddb045125ce99fe767799ec6ab7d744f0b94e5707b821e152b"
Mar 12 18:28:20.131386 master-0 kubenswrapper[7337]: E0312 18:28:20.131346 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5312d3b71b96a4fb9ea85e4c97ffeef667430d9cd85a43bb5055d9732684c96\": container with ID starting with c5312d3b71b96a4fb9ea85e4c97ffeef667430d9cd85a43bb5055d9732684c96 not found: ID does not exist" containerID="c5312d3b71b96a4fb9ea85e4c97ffeef667430d9cd85a43bb5055d9732684c96"
Mar 12 18:28:20.131577 master-0 kubenswrapper[7337]: I0312 18:28:20.131549 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5312d3b71b96a4fb9ea85e4c97ffeef667430d9cd85a43bb5055d9732684c96"} err="failed to get container status \"c5312d3b71b96a4fb9ea85e4c97ffeef667430d9cd85a43bb5055d9732684c96\": rpc error: code = NotFound desc = could not find container \"c5312d3b71b96a4fb9ea85e4c97ffeef667430d9cd85a43bb5055d9732684c96\": container with ID starting with c5312d3b71b96a4fb9ea85e4c97ffeef667430d9cd85a43bb5055d9732684c96 not found: ID does not exist"
Mar 12 18:28:20.131658 master-0 kubenswrapper[7337]: I0312 18:28:20.131647 7337 scope.go:117] "RemoveContainer" containerID="4c8710948dd5b2bc4491d4be8e860204494bc7d0e3f8f5ee9d0d17c5e1a20b86"
Mar 12 18:28:20.132876 master-0 kubenswrapper[7337]: E0312 18:28:20.132851 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c8710948dd5b2bc4491d4be8e860204494bc7d0e3f8f5ee9d0d17c5e1a20b86\": container with ID starting with 4c8710948dd5b2bc4491d4be8e860204494bc7d0e3f8f5ee9d0d17c5e1a20b86 not found: ID does not exist" containerID="4c8710948dd5b2bc4491d4be8e860204494bc7d0e3f8f5ee9d0d17c5e1a20b86"
Mar 12 18:28:20.132982 master-0 kubenswrapper[7337]: I0312 18:28:20.132964 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c8710948dd5b2bc4491d4be8e860204494bc7d0e3f8f5ee9d0d17c5e1a20b86"} err="failed to get container status \"4c8710948dd5b2bc4491d4be8e860204494bc7d0e3f8f5ee9d0d17c5e1a20b86\": rpc error: code = NotFound desc = could not find container \"4c8710948dd5b2bc4491d4be8e860204494bc7d0e3f8f5ee9d0d17c5e1a20b86\": container with ID starting with 4c8710948dd5b2bc4491d4be8e860204494bc7d0e3f8f5ee9d0d17c5e1a20b86 not found: ID does not exist"
Mar 12 18:28:20.133079 master-0 kubenswrapper[7337]: I0312 18:28:20.133067 7337 scope.go:117] "RemoveContainer" containerID="715de161614ec1ade5ad098ae303653912e79fce994a488539e724d05ccc0c72"
Mar 12 18:28:20.147548 master-0 kubenswrapper[7337]: I0312 18:28:20.147334 7337 scope.go:117] "RemoveContainer" containerID="a3dc2b7095b6c8ddb045125ce99fe767799ec6ab7d744f0b94e5707b821e152b"
Mar 12 18:28:20.156057 master-0 kubenswrapper[7337]: E0312 18:28:20.156017 7337 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_route-controller-manager_route-controller-manager-79884f6cc-tpdsd_openshift-route-controller-manager_e27d2693-1a06-473e-a126-614b939bae33_0 in pod sandbox ecd8c1c23b06ddd0380989b80e906934916e1b0c7ecc136590e1c93fd774ae5b from index: no such id: 'a3dc2b7095b6c8ddb045125ce99fe767799ec6ab7d744f0b94e5707b821e152b'" containerID="a3dc2b7095b6c8ddb045125ce99fe767799ec6ab7d744f0b94e5707b821e152b"
Mar 12 18:28:20.156131 master-0 kubenswrapper[7337]: I0312 18:28:20.156065 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3dc2b7095b6c8ddb045125ce99fe767799ec6ab7d744f0b94e5707b821e152b"} err="rpc error: code = Unknown desc = failed to delete container k8s_route-controller-manager_route-controller-manager-79884f6cc-tpdsd_openshift-route-controller-manager_e27d2693-1a06-473e-a126-614b939bae33_0 in pod sandbox ecd8c1c23b06ddd0380989b80e906934916e1b0c7ecc136590e1c93fd774ae5b from index: no such id: 'a3dc2b7095b6c8ddb045125ce99fe767799ec6ab7d744f0b94e5707b821e152b'"
Mar 12 18:28:20.156131 master-0 kubenswrapper[7337]: I0312 18:28:20.156090 7337 scope.go:117] "RemoveContainer" containerID="715de161614ec1ade5ad098ae303653912e79fce994a488539e724d05ccc0c72"
Mar 12 18:28:20.156552 master-0 kubenswrapper[7337]: E0312 18:28:20.156422 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"715de161614ec1ade5ad098ae303653912e79fce994a488539e724d05ccc0c72\": container with ID starting with 715de161614ec1ade5ad098ae303653912e79fce994a488539e724d05ccc0c72 not found: ID does not exist" containerID="715de161614ec1ade5ad098ae303653912e79fce994a488539e724d05ccc0c72"
Mar 12 18:28:20.156552 master-0 kubenswrapper[7337]: I0312 18:28:20.156466 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"715de161614ec1ade5ad098ae303653912e79fce994a488539e724d05ccc0c72"} err="failed to get container status \"715de161614ec1ade5ad098ae303653912e79fce994a488539e724d05ccc0c72\": rpc error: code = NotFound desc = could not find container \"715de161614ec1ade5ad098ae303653912e79fce994a488539e724d05ccc0c72\": container with ID starting with 715de161614ec1ade5ad098ae303653912e79fce994a488539e724d05ccc0c72 not found: ID does not exist"
Mar 12 18:28:20.156552 master-0 kubenswrapper[7337]: I0312 18:28:20.156493 7337 scope.go:117] "RemoveContainer" containerID="a3dc2b7095b6c8ddb045125ce99fe767799ec6ab7d744f0b94e5707b821e152b"
Mar 12 18:28:20.157229 master-0 kubenswrapper[7337]: E0312 18:28:20.157188 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3dc2b7095b6c8ddb045125ce99fe767799ec6ab7d744f0b94e5707b821e152b\": container with ID starting with a3dc2b7095b6c8ddb045125ce99fe767799ec6ab7d744f0b94e5707b821e152b not found: ID does not exist" containerID="a3dc2b7095b6c8ddb045125ce99fe767799ec6ab7d744f0b94e5707b821e152b"
Mar 12 18:28:20.157285 master-0 kubenswrapper[7337]: I0312 18:28:20.157234 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3dc2b7095b6c8ddb045125ce99fe767799ec6ab7d744f0b94e5707b821e152b"} err="failed to get container status \"a3dc2b7095b6c8ddb045125ce99fe767799ec6ab7d744f0b94e5707b821e152b\": rpc error: code = NotFound desc = could not find container \"a3dc2b7095b6c8ddb045125ce99fe767799ec6ab7d744f0b94e5707b821e152b\": container with ID starting with a3dc2b7095b6c8ddb045125ce99fe767799ec6ab7d744f0b94e5707b821e152b not found: ID does not exist"
Mar 12 18:28:20.199588 master-0 kubenswrapper[7337]: I0312 18:28:20.198987 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lm82\" (UniqueName: \"kubernetes.io/projected/e27d2693-1a06-473e-a126-614b939bae33-kube-api-access-6lm82\") pod \"e27d2693-1a06-473e-a126-614b939bae33\" (UID: \"e27d2693-1a06-473e-a126-614b939bae33\") "
Mar 12 18:28:20.199588 master-0 kubenswrapper[7337]: I0312 18:28:20.199074 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e27d2693-1a06-473e-a126-614b939bae33-client-ca\") pod \"e27d2693-1a06-473e-a126-614b939bae33\" (UID: \"e27d2693-1a06-473e-a126-614b939bae33\") "
Mar 12 18:28:20.199588 master-0 kubenswrapper[7337]: I0312 18:28:20.199125 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30c5dc4b-f1c8-4773-b961-985740fcc503-serving-cert\") pod \"30c5dc4b-f1c8-4773-b961-985740fcc503\" (UID: \"30c5dc4b-f1c8-4773-b961-985740fcc503\") "
Mar 12 18:28:20.199588 master-0 kubenswrapper[7337]: I0312 18:28:20.199162 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30c5dc4b-f1c8-4773-b961-985740fcc503-proxy-ca-bundles\") pod \"30c5dc4b-f1c8-4773-b961-985740fcc503\" (UID: \"30c5dc4b-f1c8-4773-b961-985740fcc503\") "
Mar 12 18:28:20.199588 master-0 kubenswrapper[7337]: I0312 18:28:20.199202 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e27d2693-1a06-473e-a126-614b939bae33-serving-cert\") pod \"e27d2693-1a06-473e-a126-614b939bae33\" (UID: \"e27d2693-1a06-473e-a126-614b939bae33\") "
Mar 12 18:28:20.199588 master-0 kubenswrapper[7337]: I0312 18:28:20.199234 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flnvn\" (UniqueName: \"kubernetes.io/projected/30c5dc4b-f1c8-4773-b961-985740fcc503-kube-api-access-flnvn\") pod \"30c5dc4b-f1c8-4773-b961-985740fcc503\" (UID: \"30c5dc4b-f1c8-4773-b961-985740fcc503\") "
Mar 12 18:28:20.199588 master-0 kubenswrapper[7337]: I0312 18:28:20.199280 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30c5dc4b-f1c8-4773-b961-985740fcc503-config\") pod \"30c5dc4b-f1c8-4773-b961-985740fcc503\" (UID: \"30c5dc4b-f1c8-4773-b961-985740fcc503\") "
Mar 12 18:28:20.199588 master-0 kubenswrapper[7337]: I0312 18:28:20.199330 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30c5dc4b-f1c8-4773-b961-985740fcc503-client-ca\") pod \"30c5dc4b-f1c8-4773-b961-985740fcc503\" (UID: \"30c5dc4b-f1c8-4773-b961-985740fcc503\") "
Mar 12 18:28:20.199588 master-0 kubenswrapper[7337]: I0312 18:28:20.199383 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27d2693-1a06-473e-a126-614b939bae33-config\") pod \"e27d2693-1a06-473e-a126-614b939bae33\" (UID: \"e27d2693-1a06-473e-a126-614b939bae33\") "
Mar 12 18:28:20.200574 master-0 kubenswrapper[7337]: I0312 18:28:20.200547 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30c5dc4b-f1c8-4773-b961-985740fcc503-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "30c5dc4b-f1c8-4773-b961-985740fcc503" (UID: "30c5dc4b-f1c8-4773-b961-985740fcc503"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:28:20.201220 master-0 kubenswrapper[7337]: I0312 18:28:20.201164 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30c5dc4b-f1c8-4773-b961-985740fcc503-client-ca" (OuterVolumeSpecName: "client-ca") pod "30c5dc4b-f1c8-4773-b961-985740fcc503" (UID: "30c5dc4b-f1c8-4773-b961-985740fcc503"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:28:20.201272 master-0 kubenswrapper[7337]: I0312 18:28:20.200598 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27d2693-1a06-473e-a126-614b939bae33-config" (OuterVolumeSpecName: "config") pod "e27d2693-1a06-473e-a126-614b939bae33" (UID: "e27d2693-1a06-473e-a126-614b939bae33"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:28:20.201562 master-0 kubenswrapper[7337]: I0312 18:28:20.201499 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27d2693-1a06-473e-a126-614b939bae33-client-ca" (OuterVolumeSpecName: "client-ca") pod "e27d2693-1a06-473e-a126-614b939bae33" (UID: "e27d2693-1a06-473e-a126-614b939bae33"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:28:20.201753 master-0 kubenswrapper[7337]: I0312 18:28:20.201726 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30c5dc4b-f1c8-4773-b961-985740fcc503-config" (OuterVolumeSpecName: "config") pod "30c5dc4b-f1c8-4773-b961-985740fcc503" (UID: "30c5dc4b-f1c8-4773-b961-985740fcc503"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:28:20.205951 master-0 kubenswrapper[7337]: I0312 18:28:20.205864 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30c5dc4b-f1c8-4773-b961-985740fcc503-kube-api-access-flnvn" (OuterVolumeSpecName: "kube-api-access-flnvn") pod "30c5dc4b-f1c8-4773-b961-985740fcc503" (UID: "30c5dc4b-f1c8-4773-b961-985740fcc503"). InnerVolumeSpecName "kube-api-access-flnvn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:28:20.205951 master-0 kubenswrapper[7337]: I0312 18:28:20.205916 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e27d2693-1a06-473e-a126-614b939bae33-kube-api-access-6lm82" (OuterVolumeSpecName: "kube-api-access-6lm82") pod "e27d2693-1a06-473e-a126-614b939bae33" (UID: "e27d2693-1a06-473e-a126-614b939bae33"). InnerVolumeSpecName "kube-api-access-6lm82". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:28:20.208162 master-0 kubenswrapper[7337]: I0312 18:28:20.208123 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27d2693-1a06-473e-a126-614b939bae33-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e27d2693-1a06-473e-a126-614b939bae33" (UID: "e27d2693-1a06-473e-a126-614b939bae33"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:28:20.210706 master-0 kubenswrapper[7337]: I0312 18:28:20.210684 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30c5dc4b-f1c8-4773-b961-985740fcc503-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "30c5dc4b-f1c8-4773-b961-985740fcc503" (UID: "30c5dc4b-f1c8-4773-b961-985740fcc503"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:28:20.301780 master-0 kubenswrapper[7337]: I0312 18:28:20.301684 7337 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30c5dc4b-f1c8-4773-b961-985740fcc503-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 12 18:28:20.301879 master-0 kubenswrapper[7337]: I0312 18:28:20.301818 7337 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27d2693-1a06-473e-a126-614b939bae33-config\") on node \"master-0\" DevicePath \"\""
Mar 12 18:28:20.301916 master-0 kubenswrapper[7337]: I0312 18:28:20.301900 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lm82\" (UniqueName: \"kubernetes.io/projected/e27d2693-1a06-473e-a126-614b939bae33-kube-api-access-6lm82\") on node \"master-0\" DevicePath \"\""
Mar 12 18:28:20.301953 master-0 kubenswrapper[7337]: I0312 18:28:20.301928 7337 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e27d2693-1a06-473e-a126-614b939bae33-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 12 18:28:20.301953 master-0 kubenswrapper[7337]: I0312 18:28:20.301948 7337 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30c5dc4b-f1c8-4773-b961-985740fcc503-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 18:28:20.302112 master-0 kubenswrapper[7337]: I0312 18:28:20.301966 7337 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30c5dc4b-f1c8-4773-b961-985740fcc503-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 12 18:28:20.302112 master-0 kubenswrapper[7337]: I0312 18:28:20.301983 7337 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e27d2693-1a06-473e-a126-614b939bae33-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 18:28:20.302112 master-0 kubenswrapper[7337]: I0312 18:28:20.302000 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flnvn\" (UniqueName: \"kubernetes.io/projected/30c5dc4b-f1c8-4773-b961-985740fcc503-kube-api-access-flnvn\") on node \"master-0\" DevicePath \"\""
Mar 12 18:28:20.302208 master-0 kubenswrapper[7337]: I0312 18:28:20.302088 7337 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30c5dc4b-f1c8-4773-b961-985740fcc503-config\") on node \"master-0\" DevicePath \"\""
Mar 12 18:28:20.405393 master-0 kubenswrapper[7337]: I0312 18:28:20.405348 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b55d98459-sr4hk"]
Mar 12 18:28:20.411183 master-0 kubenswrapper[7337]: I0312 18:28:20.411143 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b55d98459-sr4hk"]
Mar 12 18:28:20.426920 master-0 kubenswrapper[7337]: I0312 18:28:20.426825 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd"]
Mar 12 18:28:20.434453 master-0 kubenswrapper[7337]: I0312 18:28:20.434398 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79884f6cc-tpdsd"]
Mar 12 18:28:20.492308 master-0 kubenswrapper[7337]: I0312 18:28:20.492242 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:28:20.492308 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:28:20.492308 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:28:20.492308 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:28:20.492308 master-0 kubenswrapper[7337]: I0312 18:28:20.492302 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:28:20.787214 master-0 kubenswrapper[7337]: I0312 18:28:20.787169 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"]
Mar 12 18:28:20.787431 master-0 kubenswrapper[7337]: E0312 18:28:20.787384 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27d2693-1a06-473e-a126-614b939bae33" containerName="route-controller-manager"
Mar 12 18:28:20.787431 master-0 kubenswrapper[7337]: I0312 18:28:20.787396 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27d2693-1a06-473e-a126-614b939bae33" containerName="route-controller-manager"
Mar 12 18:28:20.787431 master-0 kubenswrapper[7337]: E0312 18:28:20.787410 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30c5dc4b-f1c8-4773-b961-985740fcc503" containerName="controller-manager"
Mar 12 18:28:20.787431 master-0 kubenswrapper[7337]: I0312 18:28:20.787416 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="30c5dc4b-f1c8-4773-b961-985740fcc503" containerName="controller-manager"
Mar 12 18:28:20.787431 master-0 kubenswrapper[7337]: E0312 18:28:20.787426 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27d2693-1a06-473e-a126-614b939bae33" containerName="route-controller-manager"
Mar 12 18:28:20.787431 master-0 kubenswrapper[7337]: I0312 18:28:20.787432 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27d2693-1a06-473e-a126-614b939bae33" containerName="route-controller-manager"
Mar 12 18:28:20.787775 master-0 kubenswrapper[7337]: E0312 18:28:20.787457 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30c5dc4b-f1c8-4773-b961-985740fcc503" containerName="controller-manager"
Mar 12 18:28:20.787775 master-0 kubenswrapper[7337]: I0312 18:28:20.787463 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="30c5dc4b-f1c8-4773-b961-985740fcc503" containerName="controller-manager"
Mar 12 18:28:20.787775 master-0 kubenswrapper[7337]: I0312 18:28:20.787578 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="e27d2693-1a06-473e-a126-614b939bae33" containerName="route-controller-manager"
Mar 12 18:28:20.787775 master-0 kubenswrapper[7337]: I0312 18:28:20.787591 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="30c5dc4b-f1c8-4773-b961-985740fcc503" containerName="controller-manager"
Mar 12 18:28:20.787775 master-0 kubenswrapper[7337]: I0312 18:28:20.787604 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="30c5dc4b-f1c8-4773-b961-985740fcc503" containerName="controller-manager"
Mar 12 18:28:20.788065 master-0 kubenswrapper[7337]: I0312 18:28:20.787986 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"
Mar 12 18:28:20.794878 master-0 kubenswrapper[7337]: I0312 18:28:20.794813 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs"]
Mar 12 18:28:20.795170 master-0 kubenswrapper[7337]: E0312 18:28:20.795124 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30c5dc4b-f1c8-4773-b961-985740fcc503" containerName="controller-manager"
Mar 12 18:28:20.795170 master-0 kubenswrapper[7337]: I0312 18:28:20.795147 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="30c5dc4b-f1c8-4773-b961-985740fcc503" containerName="controller-manager"
Mar 12 18:28:20.795397 master-0 kubenswrapper[7337]: I0312 18:28:20.795326 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="e27d2693-1a06-473e-a126-614b939bae33" containerName="route-controller-manager"
Mar 12 18:28:20.795397 master-0 kubenswrapper[7337]: I0312 18:28:20.795348 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="30c5dc4b-f1c8-4773-b961-985740fcc503" containerName="controller-manager"
Mar 12 18:28:20.795860 master-0 kubenswrapper[7337]: I0312 18:28:20.795827 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs"
Mar 12 18:28:20.796496 master-0 kubenswrapper[7337]: I0312 18:28:20.796444 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 12 18:28:20.796642 master-0 kubenswrapper[7337]: I0312 18:28:20.796551 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 12 18:28:20.798415 master-0 kubenswrapper[7337]: I0312 18:28:20.798340 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-6d4tt"
Mar 12 18:28:20.798595 master-0 kubenswrapper[7337]: I0312 18:28:20.798348 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 12 18:28:20.799122 master-0 kubenswrapper[7337]: I0312 18:28:20.799035 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-ldpgf"
Mar 12 18:28:20.801060 master-0 kubenswrapper[7337]: I0312 18:28:20.801020 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 12 18:28:20.801411 master-0 kubenswrapper[7337]: I0312 18:28:20.801379 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 12 18:28:20.801853 master-0 kubenswrapper[7337]: I0312 18:28:20.801660 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 12 18:28:20.802003 master-0 kubenswrapper[7337]: I0312 18:28:20.801943 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 12 18:28:20.802267 master-0 kubenswrapper[7337]: I0312 18:28:20.802198 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 12 18:28:20.802435 master-0 kubenswrapper[7337]: I0312 18:28:20.802405 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 12 18:28:20.809741 master-0 kubenswrapper[7337]: I0312 18:28:20.809681 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 12 18:28:20.812156 master-0 kubenswrapper[7337]: I0312 18:28:20.812103 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 12 18:28:20.821731 master-0 kubenswrapper[7337]: I0312 18:28:20.821643 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"]
Mar 12 18:28:20.825640 master-0 kubenswrapper[7337]: I0312 18:28:20.825591 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs"]
Mar 12 18:28:20.911926 master-0 kubenswrapper[7337]: I0312 18:28:20.911853 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-config\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"
Mar 12 18:28:20.912104 master-0 kubenswrapper[7337]: I0312 18:28:20.912048 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c016b1e-d47c-47d4-a15f-4160e7731c82-serving-cert\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"
Mar 12 18:28:20.912172 master-0 kubenswrapper[7337]: I0312 18:28:20.912140 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfp84\" (UniqueName: \"kubernetes.io/projected/be2da107-a419-423f-a657-44d681291f28-kube-api-access-jfp84\") pod \"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs"
Mar 12 18:28:20.912217 master-0 kubenswrapper[7337]: I0312 18:28:20.912202 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-client-ca\") pod \"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs"
Mar 12 18:28:20.912322 master-0 kubenswrapper[7337]: I0312 18:28:20.912292 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be2da107-a419-423f-a657-44d681291f28-serving-cert\") pod \"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs"
Mar 12 18:28:20.912362 master-0 kubenswrapper[7337]: I0312 18:28:20.912341 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clz8x\" (UniqueName: \"kubernetes.io/projected/1c016b1e-d47c-47d4-a15f-4160e7731c82-kube-api-access-clz8x\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"
Mar 12 18:28:20.912467 master-0 kubenswrapper[7337]: I0312 18:28:20.912435 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-client-ca\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"
Mar 12 18:28:20.912538 master-0 kubenswrapper[7337]: I0312 18:28:20.912479 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-config\") pod \"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs"
Mar 12 18:28:20.912590 master-0 kubenswrapper[7337]: I0312 18:28:20.912565 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-proxy-ca-bundles\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"
Mar 12 18:28:21.013359 master-0 kubenswrapper[7337]: I0312 18:28:21.013307 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be2da107-a419-423f-a657-44d681291f28-serving-cert\") pod \"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs"
Mar 12 18:28:21.013359 master-0 kubenswrapper[7337]: I0312 18:28:21.013353 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clz8x\" (UniqueName: \"kubernetes.io/projected/1c016b1e-d47c-47d4-a15f-4160e7731c82-kube-api-access-clz8x\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"
Mar 12 18:28:21.013616 master-0 kubenswrapper[7337]: I0312 18:28:21.013403 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-client-ca\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"
Mar 12 18:28:21.013652 master-0 kubenswrapper[7337]: I0312 18:28:21.013600 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-config\") pod \"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs"
Mar 12 18:28:21.013688 master-0 kubenswrapper[7337]: I0312 18:28:21.013676 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-proxy-ca-bundles\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"
Mar 12 18:28:21.013755 master-0 kubenswrapper[7337]: I0312 18:28:21.013737 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-config\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"
Mar 12 18:28:21.013891
master-0 kubenswrapper[7337]: I0312 18:28:21.013868 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c016b1e-d47c-47d4-a15f-4160e7731c82-serving-cert\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" Mar 12 18:28:21.015135 master-0 kubenswrapper[7337]: I0312 18:28:21.014085 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfp84\" (UniqueName: \"kubernetes.io/projected/be2da107-a419-423f-a657-44d681291f28-kube-api-access-jfp84\") pod \"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" Mar 12 18:28:21.015135 master-0 kubenswrapper[7337]: I0312 18:28:21.014168 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-client-ca\") pod \"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" Mar 12 18:28:21.015135 master-0 kubenswrapper[7337]: I0312 18:28:21.014364 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-client-ca\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" Mar 12 18:28:21.015288 master-0 kubenswrapper[7337]: I0312 18:28:21.015253 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-config\") pod 
\"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" Mar 12 18:28:21.015531 master-0 kubenswrapper[7337]: I0312 18:28:21.015486 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-config\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" Mar 12 18:28:21.016194 master-0 kubenswrapper[7337]: I0312 18:28:21.015798 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-proxy-ca-bundles\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" Mar 12 18:28:21.016401 master-0 kubenswrapper[7337]: I0312 18:28:21.016384 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be2da107-a419-423f-a657-44d681291f28-serving-cert\") pod \"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" Mar 12 18:28:21.016910 master-0 kubenswrapper[7337]: I0312 18:28:21.016554 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-client-ca\") pod \"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" Mar 12 18:28:21.019481 master-0 kubenswrapper[7337]: I0312 18:28:21.019212 7337 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c016b1e-d47c-47d4-a15f-4160e7731c82-serving-cert\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" Mar 12 18:28:21.029877 master-0 kubenswrapper[7337]: I0312 18:28:21.029831 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clz8x\" (UniqueName: \"kubernetes.io/projected/1c016b1e-d47c-47d4-a15f-4160e7731c82-kube-api-access-clz8x\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" Mar 12 18:28:21.030796 master-0 kubenswrapper[7337]: I0312 18:28:21.030768 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfp84\" (UniqueName: \"kubernetes.io/projected/be2da107-a419-423f-a657-44d681291f28-kube-api-access-jfp84\") pod \"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" Mar 12 18:28:21.104076 master-0 kubenswrapper[7337]: I0312 18:28:21.104022 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/cluster-policy-controller/3.log" Mar 12 18:28:21.105780 master-0 kubenswrapper[7337]: I0312 18:28:21.105744 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager-cert-syncer/0.log" Mar 12 18:28:21.106486 master-0 kubenswrapper[7337]: I0312 18:28:21.106435 7337 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager/0.log" Mar 12 18:28:21.106642 master-0 kubenswrapper[7337]: I0312 18:28:21.106604 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"39c441a05d91070efc538925475b0a44","Type":"ContainerStarted","Data":"feb7a0602e16521ca8f037d98e053563e5dfd7b3fed109ded127b4e56a4c158c"} Mar 12 18:28:21.108544 master-0 kubenswrapper[7337]: I0312 18:28:21.108444 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"50322fdb-6d3f-4237-92d2-a170e2071de5","Type":"ContainerStarted","Data":"6ad45a6ad32331be3b965a6445d2f9fd3e17e7a370bbd99490dcb4dc21bb6f9f"} Mar 12 18:28:21.117230 master-0 kubenswrapper[7337]: I0312 18:28:21.117152 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" Mar 12 18:28:21.158262 master-0 kubenswrapper[7337]: I0312 18:28:21.158198 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=2.158182098 podStartE2EDuration="2.158182098s" podCreationTimestamp="2026-03-12 18:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:28:21.156065704 +0000 UTC m=+901.624666651" watchObservedRunningTime="2026-03-12 18:28:21.158182098 +0000 UTC m=+901.626783045" Mar 12 18:28:21.164755 master-0 kubenswrapper[7337]: I0312 18:28:21.164661 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" Mar 12 18:28:21.492444 master-0 kubenswrapper[7337]: I0312 18:28:21.492388 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:21.492444 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:21.492444 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:21.492444 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:21.492994 master-0 kubenswrapper[7337]: I0312 18:28:21.492446 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:21.584972 master-0 kubenswrapper[7337]: I0312 18:28:21.584893 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"] Mar 12 18:28:21.690597 master-0 kubenswrapper[7337]: I0312 18:28:21.690529 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs"] Mar 12 18:28:21.748476 master-0 kubenswrapper[7337]: I0312 18:28:21.748438 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30c5dc4b-f1c8-4773-b961-985740fcc503" path="/var/lib/kubelet/pods/30c5dc4b-f1c8-4773-b961-985740fcc503/volumes" Mar 12 18:28:21.749045 master-0 kubenswrapper[7337]: I0312 18:28:21.749019 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e27d2693-1a06-473e-a126-614b939bae33" path="/var/lib/kubelet/pods/e27d2693-1a06-473e-a126-614b939bae33/volumes" Mar 12 18:28:22.117654 master-0 kubenswrapper[7337]: I0312 
18:28:22.117577 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" event={"ID":"1c016b1e-d47c-47d4-a15f-4160e7731c82","Type":"ContainerStarted","Data":"969c0db5141344b4b23c0b0781fbe97e28190fa1a6362ee204322812779aa447"} Mar 12 18:28:22.117654 master-0 kubenswrapper[7337]: I0312 18:28:22.117621 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" event={"ID":"1c016b1e-d47c-47d4-a15f-4160e7731c82","Type":"ContainerStarted","Data":"cbd303c81d220cd5ed6e63d675881c37da5cce6a8a3c62add5c0bf5721b5fd9f"} Mar 12 18:28:22.118722 master-0 kubenswrapper[7337]: I0312 18:28:22.118672 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" Mar 12 18:28:22.121314 master-0 kubenswrapper[7337]: I0312 18:28:22.121273 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" event={"ID":"be2da107-a419-423f-a657-44d681291f28","Type":"ContainerStarted","Data":"0a30549c5c55928a6706fd117cdf0c1612cf9d20d7c9ff067345f1b6073c1c45"} Mar 12 18:28:22.121382 master-0 kubenswrapper[7337]: I0312 18:28:22.121322 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" Mar 12 18:28:22.121382 master-0 kubenswrapper[7337]: I0312 18:28:22.121340 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" event={"ID":"be2da107-a419-423f-a657-44d681291f28","Type":"ContainerStarted","Data":"a91d85c0ce3e6a8b926dbcc4b0882326fc962f35e4dc2d7cda43fa3db3301729"} Mar 12 18:28:22.128071 master-0 kubenswrapper[7337]: I0312 18:28:22.128031 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" Mar 12 18:28:22.149906 master-0 kubenswrapper[7337]: I0312 18:28:22.149845 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" podStartSLOduration=3.149824101 podStartE2EDuration="3.149824101s" podCreationTimestamp="2026-03-12 18:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:28:22.145806019 +0000 UTC m=+902.614406966" watchObservedRunningTime="2026-03-12 18:28:22.149824101 +0000 UTC m=+902.618425048" Mar 12 18:28:22.198532 master-0 kubenswrapper[7337]: I0312 18:28:22.194791 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" podStartSLOduration=3.194776832 podStartE2EDuration="3.194776832s" podCreationTimestamp="2026-03-12 18:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:28:22.192916315 +0000 UTC m=+902.661517272" watchObservedRunningTime="2026-03-12 18:28:22.194776832 +0000 UTC m=+902.663377779" Mar 12 18:28:22.491594 master-0 kubenswrapper[7337]: I0312 18:28:22.491462 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:22.491594 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:22.491594 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:22.491594 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:22.491594 master-0 kubenswrapper[7337]: I0312 18:28:22.491532 7337 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:22.548000 master-0 kubenswrapper[7337]: I0312 18:28:22.547928 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" Mar 12 18:28:23.492725 master-0 kubenswrapper[7337]: I0312 18:28:23.492673 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:23.492725 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:23.492725 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:23.492725 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:23.493178 master-0 kubenswrapper[7337]: I0312 18:28:23.492747 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:24.493117 master-0 kubenswrapper[7337]: I0312 18:28:24.493038 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:24.493117 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:24.493117 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:24.493117 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:24.493866 master-0 kubenswrapper[7337]: I0312 18:28:24.493138 7337 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:24.722200 master-0 kubenswrapper[7337]: I0312 18:28:24.722140 7337 scope.go:117] "RemoveContainer" containerID="0947e2aa600f02809bc0e80e74513829af30ca5ce847f92836ff0aa87ebf24ca" Mar 12 18:28:24.722410 master-0 kubenswrapper[7337]: E0312 18:28:24.722360 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-ljw8b_openshift-authentication-operator(062f1b21-2ffc-47da-8334-427c3b2a1a90)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" Mar 12 18:28:25.493446 master-0 kubenswrapper[7337]: I0312 18:28:25.493377 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:25.493446 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:25.493446 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:25.493446 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:25.494398 master-0 kubenswrapper[7337]: I0312 18:28:25.493460 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:25.723352 master-0 kubenswrapper[7337]: I0312 18:28:25.723288 7337 scope.go:117] "RemoveContainer" 
containerID="a2b26c62b9f6c98c92beecd149d44e6763377e53417bf7236ed48cc7741bf7a7" Mar 12 18:28:26.151548 master-0 kubenswrapper[7337]: I0312 18:28:26.151469 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-2psgb_e5fb0152-3efd-4000-bce3-fa90b75316ae/cluster-baremetal-operator/2.log" Mar 12 18:28:26.152040 master-0 kubenswrapper[7337]: I0312 18:28:26.151993 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" event={"ID":"e5fb0152-3efd-4000-bce3-fa90b75316ae","Type":"ContainerStarted","Data":"31a9a487be7f598349eb137bb7ce13f7a4be42da1b936cda9a9a9b7c60e8ccb6"} Mar 12 18:28:26.492680 master-0 kubenswrapper[7337]: I0312 18:28:26.492565 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:26.492680 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:26.492680 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:26.492680 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:26.492680 master-0 kubenswrapper[7337]: I0312 18:28:26.492644 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:27.492198 master-0 kubenswrapper[7337]: I0312 18:28:27.492135 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:27.492198 master-0 kubenswrapper[7337]: [-]has-synced failed: 
reason withheld Mar 12 18:28:27.492198 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:27.492198 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:27.492889 master-0 kubenswrapper[7337]: I0312 18:28:27.492199 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:28.492941 master-0 kubenswrapper[7337]: I0312 18:28:28.492825 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:28.492941 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:28.492941 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:28.492941 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:28.494136 master-0 kubenswrapper[7337]: I0312 18:28:28.492940 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:29.492773 master-0 kubenswrapper[7337]: I0312 18:28:29.492728 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:29.492773 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:29.492773 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:29.492773 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:29.493339 master-0 kubenswrapper[7337]: 
I0312 18:28:29.492786 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:30.492603 master-0 kubenswrapper[7337]: I0312 18:28:30.492510 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:30.492603 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:30.492603 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:30.492603 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:30.493673 master-0 kubenswrapper[7337]: I0312 18:28:30.492618 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:30.722712 master-0 kubenswrapper[7337]: I0312 18:28:30.722490 7337 scope.go:117] "RemoveContainer" containerID="37c9fcab8917043972cb8da48f9b3a66fa98e29cb384d4ab82bdb89b8dd2d452" Mar 12 18:28:30.722980 master-0 kubenswrapper[7337]: E0312 18:28:30.722907 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4527l_openshift-ingress-operator(d94dc349-c5cb-4f12-8e48-867030af4981)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" podUID="d94dc349-c5cb-4f12-8e48-867030af4981" Mar 12 18:28:31.375257 master-0 kubenswrapper[7337]: I0312 18:28:31.375199 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-monitoring/telemeter-client-d597fb65b-cc7cs"] Mar 12 18:28:31.375560 master-0 kubenswrapper[7337]: I0312 18:28:31.375428 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" podUID="933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" containerName="telemeter-client" containerID="cri-o://c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01" gracePeriod=30 Mar 12 18:28:31.375810 master-0 kubenswrapper[7337]: I0312 18:28:31.375744 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" podUID="933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" containerName="kube-rbac-proxy" containerID="cri-o://11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909" gracePeriod=30 Mar 12 18:28:31.375918 master-0 kubenswrapper[7337]: I0312 18:28:31.375826 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" podUID="933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" containerName="reload" containerID="cri-o://dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b" gracePeriod=30 Mar 12 18:28:31.452411 master-0 kubenswrapper[7337]: I0312 18:28:31.452354 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-xflrp"] Mar 12 18:28:31.453275 master-0 kubenswrapper[7337]: I0312 18:28:31.453246 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp"
Mar 12 18:28:31.455418 master-0 kubenswrapper[7337]: I0312 18:28:31.455361 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist"
Mar 12 18:28:31.456476 master-0 kubenswrapper[7337]: I0312 18:28:31.456437 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-bb95d"
Mar 12 18:28:31.495352 master-0 kubenswrapper[7337]: I0312 18:28:31.495131 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:28:31.495352 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:28:31.495352 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:28:31.495352 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:28:31.495352 master-0 kubenswrapper[7337]: I0312 18:28:31.495192 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:28:31.569146 master-0 kubenswrapper[7337]: I0312 18:28:31.569088 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4hnx\" (UniqueName: \"kubernetes.io/projected/f8b263f0-160c-4087-b047-90f1d53b9dba-kube-api-access-l4hnx\") pod \"cni-sysctl-allowlist-ds-xflrp\" (UID: \"f8b263f0-160c-4087-b047-90f1d53b9dba\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp"
Mar 12 18:28:31.569371 master-0 kubenswrapper[7337]: I0312 18:28:31.569163 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f8b263f0-160c-4087-b047-90f1d53b9dba-ready\") pod \"cni-sysctl-allowlist-ds-xflrp\" (UID: \"f8b263f0-160c-4087-b047-90f1d53b9dba\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp"
Mar 12 18:28:31.569371 master-0 kubenswrapper[7337]: I0312 18:28:31.569200 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f8b263f0-160c-4087-b047-90f1d53b9dba-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-xflrp\" (UID: \"f8b263f0-160c-4087-b047-90f1d53b9dba\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp"
Mar 12 18:28:31.569371 master-0 kubenswrapper[7337]: I0312 18:28:31.569223 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f8b263f0-160c-4087-b047-90f1d53b9dba-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-xflrp\" (UID: \"f8b263f0-160c-4087-b047-90f1d53b9dba\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp"
Mar 12 18:28:31.672849 master-0 kubenswrapper[7337]: I0312 18:28:31.670363 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f8b263f0-160c-4087-b047-90f1d53b9dba-ready\") pod \"cni-sysctl-allowlist-ds-xflrp\" (UID: \"f8b263f0-160c-4087-b047-90f1d53b9dba\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp"
Mar 12 18:28:31.672849 master-0 kubenswrapper[7337]: I0312 18:28:31.670440 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f8b263f0-160c-4087-b047-90f1d53b9dba-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-xflrp\" (UID: \"f8b263f0-160c-4087-b047-90f1d53b9dba\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp"
Mar 12 18:28:31.672849 master-0 kubenswrapper[7337]: I0312 18:28:31.670476 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f8b263f0-160c-4087-b047-90f1d53b9dba-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-xflrp\" (UID: \"f8b263f0-160c-4087-b047-90f1d53b9dba\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp"
Mar 12 18:28:31.672849 master-0 kubenswrapper[7337]: I0312 18:28:31.670749 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4hnx\" (UniqueName: \"kubernetes.io/projected/f8b263f0-160c-4087-b047-90f1d53b9dba-kube-api-access-l4hnx\") pod \"cni-sysctl-allowlist-ds-xflrp\" (UID: \"f8b263f0-160c-4087-b047-90f1d53b9dba\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp"
Mar 12 18:28:31.672849 master-0 kubenswrapper[7337]: I0312 18:28:31.670785 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f8b263f0-160c-4087-b047-90f1d53b9dba-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-xflrp\" (UID: \"f8b263f0-160c-4087-b047-90f1d53b9dba\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp"
Mar 12 18:28:31.672849 master-0 kubenswrapper[7337]: I0312 18:28:31.671401 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f8b263f0-160c-4087-b047-90f1d53b9dba-ready\") pod \"cni-sysctl-allowlist-ds-xflrp\" (UID: \"f8b263f0-160c-4087-b047-90f1d53b9dba\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp"
Mar 12 18:28:31.672849 master-0 kubenswrapper[7337]: I0312 18:28:31.672180 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f8b263f0-160c-4087-b047-90f1d53b9dba-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-xflrp\" (UID: \"f8b263f0-160c-4087-b047-90f1d53b9dba\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp"
Mar 12 18:28:31.688957 master-0 kubenswrapper[7337]: I0312 18:28:31.688902 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4hnx\" (UniqueName: \"kubernetes.io/projected/f8b263f0-160c-4087-b047-90f1d53b9dba-kube-api-access-l4hnx\") pod \"cni-sysctl-allowlist-ds-xflrp\" (UID: \"f8b263f0-160c-4087-b047-90f1d53b9dba\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp"
Mar 12 18:28:31.774650 master-0 kubenswrapper[7337]: I0312 18:28:31.774470 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-d597fb65b-cc7cs_933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3/telemeter-client/0.log"
Mar 12 18:28:31.774650 master-0 kubenswrapper[7337]: I0312 18:28:31.774630 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs"
Mar 12 18:28:31.859235 master-0 kubenswrapper[7337]: I0312 18:28:31.859157 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp"
Mar 12 18:28:31.881865 master-0 kubenswrapper[7337]: W0312 18:28:31.881781 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8b263f0_160c_4087_b047_90f1d53b9dba.slice/crio-2e2fa4f0bfc5fd3f9c2f36bc6607a6612764a91512ae0acd23e22f3b743e0528 WatchSource:0}: Error finding container 2e2fa4f0bfc5fd3f9c2f36bc6607a6612764a91512ae0acd23e22f3b743e0528: Status 404 returned error can't find the container with id 2e2fa4f0bfc5fd3f9c2f36bc6607a6612764a91512ae0acd23e22f3b743e0528
Mar 12 18:28:31.976012 master-0 kubenswrapper[7337]: I0312 18:28:31.975835 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-secret-telemeter-client\") pod \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") "
Mar 12 18:28:31.976012 master-0 kubenswrapper[7337]: I0312 18:28:31.975902 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-serving-certs-ca-bundle\") pod \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") "
Mar 12 18:28:31.976012 master-0 kubenswrapper[7337]: I0312 18:28:31.975936 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-secret-telemeter-client-kube-rbac-proxy-config\") pod \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") "
Mar 12 18:28:31.976384 master-0 kubenswrapper[7337]: I0312 18:28:31.976026 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-federate-client-tls\") pod \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") "
Mar 12 18:28:31.976384 master-0 kubenswrapper[7337]: I0312 18:28:31.976108 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-telemeter-client-tls\") pod \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") "
Mar 12 18:28:31.976384 master-0 kubenswrapper[7337]: I0312 18:28:31.976230 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-metrics-client-ca\") pod \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") "
Mar 12 18:28:31.976384 master-0 kubenswrapper[7337]: I0312 18:28:31.976300 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6drrk\" (UniqueName: \"kubernetes.io/projected/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-kube-api-access-6drrk\") pod \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") "
Mar 12 18:28:31.976384 master-0 kubenswrapper[7337]: I0312 18:28:31.976377 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-telemeter-trusted-ca-bundle\") pod \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\" (UID: \"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3\") "
Mar 12 18:28:31.976745 master-0 kubenswrapper[7337]: I0312 18:28:31.976563 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-serving-certs-ca-bundle" (OuterVolumeSpecName: "serving-certs-ca-bundle") pod "933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" (UID: "933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3"). InnerVolumeSpecName "serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:28:31.976824 master-0 kubenswrapper[7337]: I0312 18:28:31.976796 7337 reconciler_common.go:293] "Volume detached for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-serving-certs-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 18:28:31.977612 master-0 kubenswrapper[7337]: I0312 18:28:31.977538 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-telemeter-trusted-ca-bundle" (OuterVolumeSpecName: "telemeter-trusted-ca-bundle") pod "933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" (UID: "933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3"). InnerVolumeSpecName "telemeter-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:28:31.978560 master-0 kubenswrapper[7337]: I0312 18:28:31.978466 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" (UID: "933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:28:31.980837 master-0 kubenswrapper[7337]: I0312 18:28:31.980760 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-federate-client-tls" (OuterVolumeSpecName: "federate-client-tls") pod "933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" (UID: "933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3"). InnerVolumeSpecName "federate-client-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:28:31.981229 master-0 kubenswrapper[7337]: I0312 18:28:31.981175 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-telemeter-client-tls" (OuterVolumeSpecName: "telemeter-client-tls") pod "933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" (UID: "933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3"). InnerVolumeSpecName "telemeter-client-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:28:31.981229 master-0 kubenswrapper[7337]: I0312 18:28:31.981280 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-secret-telemeter-client-kube-rbac-proxy-config" (OuterVolumeSpecName: "secret-telemeter-client-kube-rbac-proxy-config") pod "933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" (UID: "933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3"). InnerVolumeSpecName "secret-telemeter-client-kube-rbac-proxy-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:28:31.981601 master-0 kubenswrapper[7337]: I0312 18:28:31.981450 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-kube-api-access-6drrk" (OuterVolumeSpecName: "kube-api-access-6drrk") pod "933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" (UID: "933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3"). InnerVolumeSpecName "kube-api-access-6drrk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:28:31.982835 master-0 kubenswrapper[7337]: I0312 18:28:31.982782 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-secret-telemeter-client" (OuterVolumeSpecName: "secret-telemeter-client") pod "933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" (UID: "933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3"). InnerVolumeSpecName "secret-telemeter-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:28:32.078011 master-0 kubenswrapper[7337]: I0312 18:28:32.077940 7337 reconciler_common.go:293] "Volume detached for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-telemeter-client-tls\") on node \"master-0\" DevicePath \"\""
Mar 12 18:28:32.078011 master-0 kubenswrapper[7337]: I0312 18:28:32.077992 7337 reconciler_common.go:293] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-metrics-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 12 18:28:32.078011 master-0 kubenswrapper[7337]: I0312 18:28:32.078011 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6drrk\" (UniqueName: \"kubernetes.io/projected/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-kube-api-access-6drrk\") on node \"master-0\" DevicePath \"\""
Mar 12 18:28:32.078011 master-0 kubenswrapper[7337]: I0312 18:28:32.078028 7337 reconciler_common.go:293] "Volume detached for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-telemeter-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 18:28:32.078444 master-0 kubenswrapper[7337]: I0312 18:28:32.078046 7337 reconciler_common.go:293] "Volume detached for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-secret-telemeter-client\") on node \"master-0\" DevicePath \"\""
Mar 12 18:28:32.078444 master-0 kubenswrapper[7337]: I0312 18:28:32.078065 7337 reconciler_common.go:293] "Volume detached for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-secret-telemeter-client-kube-rbac-proxy-config\") on node \"master-0\" DevicePath \"\""
Mar 12 18:28:32.078444 master-0 kubenswrapper[7337]: I0312 18:28:32.078081 7337 reconciler_common.go:293] "Volume detached for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3-federate-client-tls\") on node \"master-0\" DevicePath \"\""
Mar 12 18:28:32.216573 master-0 kubenswrapper[7337]: I0312 18:28:32.216479 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp" event={"ID":"f8b263f0-160c-4087-b047-90f1d53b9dba","Type":"ContainerStarted","Data":"2e2fa4f0bfc5fd3f9c2f36bc6607a6612764a91512ae0acd23e22f3b743e0528"}
Mar 12 18:28:32.225571 master-0 kubenswrapper[7337]: I0312 18:28:32.225494 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-d597fb65b-cc7cs_933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3/telemeter-client/0.log"
Mar 12 18:28:32.225685 master-0 kubenswrapper[7337]: I0312 18:28:32.225636 7337 generic.go:334] "Generic (PLEG): container finished" podID="933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" containerID="11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909" exitCode=0
Mar 12 18:28:32.225730 master-0 kubenswrapper[7337]: I0312 18:28:32.225684 7337 generic.go:334] "Generic (PLEG): container finished" podID="933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" containerID="dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b" exitCode=0
Mar 12 18:28:32.225730 master-0 kubenswrapper[7337]: I0312 18:28:32.225711 7337 generic.go:334] "Generic (PLEG): container finished" podID="933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" containerID="c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01" exitCode=2
Mar 12 18:28:32.225794 master-0 kubenswrapper[7337]: I0312 18:28:32.225752 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" event={"ID":"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3","Type":"ContainerDied","Data":"11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909"}
Mar 12 18:28:32.225870 master-0 kubenswrapper[7337]: I0312 18:28:32.225816 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" event={"ID":"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3","Type":"ContainerDied","Data":"dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b"}
Mar 12 18:28:32.225870 master-0 kubenswrapper[7337]: I0312 18:28:32.225857 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" event={"ID":"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3","Type":"ContainerDied","Data":"c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01"}
Mar 12 18:28:32.225939 master-0 kubenswrapper[7337]: I0312 18:28:32.225886 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs" event={"ID":"933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3","Type":"ContainerDied","Data":"6af457970e39840a52de23503d79330946ae806d893079d022abefc97b2ba485"}
Mar 12 18:28:32.225971 master-0 kubenswrapper[7337]: I0312 18:28:32.225879 7337 scope.go:117] "RemoveContainer" containerID="11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909"
Mar 12 18:28:32.227134 master-0 kubenswrapper[7337]: I0312 18:28:32.227060 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-d597fb65b-cc7cs"
Mar 12 18:28:32.247118 master-0 kubenswrapper[7337]: I0312 18:28:32.247062 7337 scope.go:117] "RemoveContainer" containerID="dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b"
Mar 12 18:28:32.273966 master-0 kubenswrapper[7337]: I0312 18:28:32.273895 7337 scope.go:117] "RemoveContainer" containerID="c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01"
Mar 12 18:28:32.286202 master-0 kubenswrapper[7337]: I0312 18:28:32.286122 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/telemeter-client-d597fb65b-cc7cs"]
Mar 12 18:28:32.290348 master-0 kubenswrapper[7337]: I0312 18:28:32.290295 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/telemeter-client-d597fb65b-cc7cs"]
Mar 12 18:28:32.292597 master-0 kubenswrapper[7337]: I0312 18:28:32.292565 7337 scope.go:117] "RemoveContainer" containerID="11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909"
Mar 12 18:28:32.293011 master-0 kubenswrapper[7337]: E0312 18:28:32.292983 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909\": container with ID starting with 11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909 not found: ID does not exist" containerID="11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909"
Mar 12 18:28:32.293062 master-0 kubenswrapper[7337]: I0312 18:28:32.293021 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909"} err="failed to get container status \"11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909\": rpc error: code = NotFound desc = could not find container \"11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909\": container with ID starting with 11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909 not found: ID does not exist"
Mar 12 18:28:32.293062 master-0 kubenswrapper[7337]: I0312 18:28:32.293051 7337 scope.go:117] "RemoveContainer" containerID="dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b"
Mar 12 18:28:32.293458 master-0 kubenswrapper[7337]: E0312 18:28:32.293423 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b\": container with ID starting with dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b not found: ID does not exist" containerID="dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b"
Mar 12 18:28:32.293602 master-0 kubenswrapper[7337]: I0312 18:28:32.293572 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b"} err="failed to get container status \"dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b\": rpc error: code = NotFound desc = could not find container \"dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b\": container with ID starting with dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b not found: ID does not exist"
Mar 12 18:28:32.293683 master-0 kubenswrapper[7337]: I0312 18:28:32.293670 7337 scope.go:117] "RemoveContainer" containerID="c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01"
Mar 12 18:28:32.294134 master-0 kubenswrapper[7337]: E0312 18:28:32.294118 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01\": container with ID starting with c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01 not found: ID does not exist" containerID="c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01"
Mar 12 18:28:32.294252 master-0 kubenswrapper[7337]: I0312 18:28:32.294235 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01"} err="failed to get container status \"c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01\": rpc error: code = NotFound desc = could not find container \"c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01\": container with ID starting with c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01 not found: ID does not exist"
Mar 12 18:28:32.294342 master-0 kubenswrapper[7337]: I0312 18:28:32.294330 7337 scope.go:117] "RemoveContainer" containerID="11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909"
Mar 12 18:28:32.294707 master-0 kubenswrapper[7337]: I0312 18:28:32.294689 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909"} err="failed to get container status \"11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909\": rpc error: code = NotFound desc = could not find container \"11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909\": container with ID starting with 11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909 not found: ID does not exist"
Mar 12 18:28:32.294789 master-0 kubenswrapper[7337]: I0312 18:28:32.294777 7337 scope.go:117] "RemoveContainer" containerID="dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b"
Mar 12 18:28:32.295266 master-0 kubenswrapper[7337]: I0312 18:28:32.295207 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b"} err="failed to get container status \"dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b\": rpc error: code = NotFound desc = could not find container \"dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b\": container with ID starting with dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b not found: ID does not exist"
Mar 12 18:28:32.295320 master-0 kubenswrapper[7337]: I0312 18:28:32.295271 7337 scope.go:117] "RemoveContainer" containerID="c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01"
Mar 12 18:28:32.295706 master-0 kubenswrapper[7337]: I0312 18:28:32.295682 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01"} err="failed to get container status \"c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01\": rpc error: code = NotFound desc = could not find container \"c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01\": container with ID starting with c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01 not found: ID does not exist"
Mar 12 18:28:32.295802 master-0 kubenswrapper[7337]: I0312 18:28:32.295788 7337 scope.go:117] "RemoveContainer" containerID="11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909"
Mar 12 18:28:32.296170 master-0 kubenswrapper[7337]: I0312 18:28:32.296139 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909"} err="failed to get container status \"11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909\": rpc error: code = NotFound desc = could not find container \"11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909\": container with ID starting with 11462116dd241c4ae57d6e4ef957827246728175975c48e36b7a2f25f8d19909 not found: ID does not exist"
Mar 12 18:28:32.296239 master-0 kubenswrapper[7337]: I0312 18:28:32.296168 7337 scope.go:117] "RemoveContainer" containerID="dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b"
Mar 12 18:28:32.296699 master-0 kubenswrapper[7337]: I0312 18:28:32.296436 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b"} err="failed to get container status \"dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b\": rpc error: code = NotFound desc = could not find container \"dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b\": container with ID starting with dd46c94c9d4bc1fcc6d89a3965b8b23166df4c096bf0eff2d6a38e0a007e8d7b not found: ID does not exist"
Mar 12 18:28:32.296699 master-0 kubenswrapper[7337]: I0312 18:28:32.296468 7337 scope.go:117] "RemoveContainer" containerID="c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01"
Mar 12 18:28:32.296930 master-0 kubenswrapper[7337]: I0312 18:28:32.296895 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01"} err="failed to get container status \"c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01\": rpc error: code = NotFound desc = could not find container \"c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01\": container with ID starting with c8c23b25538b9891d025b0dfc35feff302ed68aa1be434420fcec13631f52f01 not found: ID does not exist"
Mar 12 18:28:32.492258 master-0 kubenswrapper[7337]: I0312 18:28:32.492143 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:28:32.492258 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:28:32.492258 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:28:32.492258 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:28:32.492754 master-0 kubenswrapper[7337]: I0312 18:28:32.492712 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:28:32.783432 master-0 kubenswrapper[7337]: I0312 18:28:32.783298 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s"
Mar 12 18:28:33.234826 master-0 kubenswrapper[7337]: I0312 18:28:33.234776 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp" event={"ID":"f8b263f0-160c-4087-b047-90f1d53b9dba","Type":"ContainerStarted","Data":"66c6be84cf8a74721e872621308ca9116298e89c58340927ae4e71af086635f6"}
Mar 12 18:28:33.235076 master-0 kubenswrapper[7337]: I0312 18:28:33.235054 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp"
Mar 12 18:28:33.253656 master-0 kubenswrapper[7337]: I0312 18:28:33.253581 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp" podStartSLOduration=2.253560336 podStartE2EDuration="2.253560336s" podCreationTimestamp="2026-03-12 18:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:28:33.250833447 +0000 UTC m=+913.719434404" watchObservedRunningTime="2026-03-12 18:28:33.253560336 +0000 UTC m=+913.722161313"
Mar 12 18:28:33.254414 master-0 kubenswrapper[7337]: I0312 18:28:33.254386 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp"
Mar 12 18:28:33.424253 master-0 kubenswrapper[7337]: I0312 18:28:33.424181 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-xflrp"]
Mar 12 18:28:33.493049 master-0 kubenswrapper[7337]: I0312 18:28:33.492928 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:28:33.493049 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:28:33.493049 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:28:33.493049 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:28:33.493284 master-0 kubenswrapper[7337]: I0312 18:28:33.493038 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:28:33.740121 master-0 kubenswrapper[7337]: I0312 18:28:33.740028 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" path="/var/lib/kubelet/pods/933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3/volumes"
Mar 12 18:28:34.030917 master-0 kubenswrapper[7337]: I0312 18:28:34.030835 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-retry-1-master-0"]
Mar 12 18:28:34.031892 master-0 kubenswrapper[7337]: E0312 18:28:34.031205 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" containerName="kube-rbac-proxy"
Mar 12 18:28:34.031892 master-0 kubenswrapper[7337]: I0312 18:28:34.031222 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" containerName="kube-rbac-proxy"
Mar 12 18:28:34.031892 master-0 kubenswrapper[7337]: E0312 18:28:34.031260 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" containerName="reload"
Mar 12 18:28:34.031892 master-0 kubenswrapper[7337]: I0312 18:28:34.031269 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" containerName="reload"
Mar 12 18:28:34.031892 master-0 kubenswrapper[7337]: E0312 18:28:34.031284 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" containerName="telemeter-client"
Mar 12 18:28:34.031892 master-0 kubenswrapper[7337]: I0312 18:28:34.031293 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" containerName="telemeter-client"
Mar 12 18:28:34.031892 master-0 kubenswrapper[7337]: I0312 18:28:34.031484 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" containerName="reload"
Mar 12 18:28:34.031892 master-0 kubenswrapper[7337]: I0312 18:28:34.031504 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" containerName="telemeter-client"
Mar 12 18:28:34.031892 master-0 kubenswrapper[7337]: I0312 18:28:34.031532 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="933b3bc6-b1e4-4db9-b76e-fa7e9dda7ad3" containerName="kube-rbac-proxy"
Mar 12 18:28:34.032396 master-0 kubenswrapper[7337]: I0312 18:28:34.032046 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0"
Mar 12 18:28:34.034423 master-0 kubenswrapper[7337]: I0312 18:28:34.034364 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-retry-1-master-0"]
Mar 12 18:28:34.035176 master-0 kubenswrapper[7337]: I0312 18:28:34.035142 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Mar 12 18:28:34.038735 master-0 kubenswrapper[7337]: I0312 18:28:34.038677 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 12 18:28:34.038916 master-0 kubenswrapper[7337]: I0312 18:28:34.038870 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-h72pz"
Mar 12 18:28:34.038975 master-0 kubenswrapper[7337]: I0312 18:28:34.038902 7337 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 12 18:28:34.040594 master-0 kubenswrapper[7337]: I0312 18:28:34.040501 7337 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-99kdt"
Mar 12 18:28:34.053715 master-0 kubenswrapper[7337]: I0312 18:28:34.053655 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-retry-1-master-0"]
Mar 12 18:28:34.083083 master-0 kubenswrapper[7337]: I0312 18:28:34.083008 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-retry-1-master-0"]
Mar 12 18:28:34.107734 master-0 kubenswrapper[7337]: I0312 18:28:34.107613 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0a18e6ce-2fed-4e81-9191-45c1e5d3a090-var-lock\") pod \"installer-5-retry-1-master-0\" (UID: \"0a18e6ce-2fed-4e81-9191-45c1e5d3a090\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0"
Mar 12 18:28:34.107734 master-0 kubenswrapper[7337]: I0312 18:28:34.107672 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Mar 12 18:28:34.107734 master-0 kubenswrapper[7337]: I0312 18:28:34.107698 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Mar 12 18:28:34.108135 master-0 kubenswrapper[7337]: I0312 18:28:34.107832 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Mar 12 18:28:34.108135 master-0 kubenswrapper[7337]: I0312 18:28:34.107881 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a18e6ce-2fed-4e81-9191-45c1e5d3a090-kubelet-dir\") pod \"installer-5-retry-1-master-0\" (UID: \"0a18e6ce-2fed-4e81-9191-45c1e5d3a090\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0"
Mar 12 18:28:34.108251 master-0 kubenswrapper[7337]: I0312 18:28:34.108120 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a18e6ce-2fed-4e81-9191-45c1e5d3a090-kube-api-access\") pod \"installer-5-retry-1-master-0\" (UID: \"0a18e6ce-2fed-4e81-9191-45c1e5d3a090\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0"
Mar 12 18:28:34.209990 master-0 kubenswrapper[7337]: I0312 18:28:34.209903 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0a18e6ce-2fed-4e81-9191-45c1e5d3a090-var-lock\") pod \"installer-5-retry-1-master-0\" (UID: \"0a18e6ce-2fed-4e81-9191-45c1e5d3a090\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0"
Mar 12 18:28:34.209990 master-0 kubenswrapper[7337]: I0312 18:28:34.210013 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Mar 12 18:28:34.210483 master-0 kubenswrapper[7337]: I0312 18:28:34.210053 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Mar 12 18:28:34.210483 master-0 kubenswrapper[7337]: I0312 18:28:34.210129 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Mar 12 18:28:34.210483 master-0 kubenswrapper[7337]: I0312 18:28:34.210176 7337
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a18e6ce-2fed-4e81-9191-45c1e5d3a090-kubelet-dir\") pod \"installer-5-retry-1-master-0\" (UID: \"0a18e6ce-2fed-4e81-9191-45c1e5d3a090\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 12 18:28:34.210483 master-0 kubenswrapper[7337]: I0312 18:28:34.210275 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a18e6ce-2fed-4e81-9191-45c1e5d3a090-kube-api-access\") pod \"installer-5-retry-1-master-0\" (UID: \"0a18e6ce-2fed-4e81-9191-45c1e5d3a090\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 12 18:28:34.210915 master-0 kubenswrapper[7337]: I0312 18:28:34.210853 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 18:28:34.211032 master-0 kubenswrapper[7337]: I0312 18:28:34.210996 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0a18e6ce-2fed-4e81-9191-45c1e5d3a090-var-lock\") pod \"installer-5-retry-1-master-0\" (UID: \"0a18e6ce-2fed-4e81-9191-45c1e5d3a090\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 12 18:28:34.211095 master-0 kubenswrapper[7337]: I0312 18:28:34.210957 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a18e6ce-2fed-4e81-9191-45c1e5d3a090-kubelet-dir\") pod \"installer-5-retry-1-master-0\" (UID: \"0a18e6ce-2fed-4e81-9191-45c1e5d3a090\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 12 18:28:34.211188 master-0 
kubenswrapper[7337]: I0312 18:28:34.210915 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 18:28:34.231966 master-0 kubenswrapper[7337]: I0312 18:28:34.231913 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a18e6ce-2fed-4e81-9191-45c1e5d3a090-kube-api-access\") pod \"installer-5-retry-1-master-0\" (UID: \"0a18e6ce-2fed-4e81-9191-45c1e5d3a090\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 12 18:28:34.235717 master-0 kubenswrapper[7337]: I0312 18:28:34.235650 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 18:28:34.378680 master-0 kubenswrapper[7337]: I0312 18:28:34.378575 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 12 18:28:34.415450 master-0 kubenswrapper[7337]: I0312 18:28:34.415372 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 18:28:34.492819 master-0 kubenswrapper[7337]: I0312 18:28:34.492739 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:34.492819 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:34.492819 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:34.492819 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:34.493219 master-0 kubenswrapper[7337]: I0312 18:28:34.492836 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:34.722644 master-0 kubenswrapper[7337]: I0312 18:28:34.722570 7337 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="901fa277-f91f-4050-9bba-fc5a7d764d16" Mar 12 18:28:34.722644 master-0 kubenswrapper[7337]: I0312 18:28:34.722612 7337 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="901fa277-f91f-4050-9bba-fc5a7d764d16" Mar 12 18:28:34.743356 master-0 kubenswrapper[7337]: I0312 18:28:34.742927 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 12 18:28:34.754538 master-0 kubenswrapper[7337]: I0312 18:28:34.753474 7337 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0" Mar 12 18:28:34.766544 master-0 kubenswrapper[7337]: I0312 18:28:34.762791 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 12 18:28:34.793548 master-0 kubenswrapper[7337]: I0312 18:28:34.793403 7337 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 12 18:28:34.931812 master-0 kubenswrapper[7337]: I0312 18:28:34.916934 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-retry-1-master-0"] Mar 12 18:28:35.017661 master-0 kubenswrapper[7337]: I0312 18:28:35.017618 7337 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-retry-1-master-0"] Mar 12 18:28:35.255732 master-0 kubenswrapper[7337]: I0312 18:28:35.255685 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" event={"ID":"4cb73c69-af16-4565-bdb5-aeae9dcfb423","Type":"ContainerStarted","Data":"eef1cf57e8276fdad086e78802215bf998ecd43c19a3a34c77847d52949c2696"} Mar 12 18:28:35.256892 master-0 kubenswrapper[7337]: I0312 18:28:35.256814 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" event={"ID":"0a18e6ce-2fed-4e81-9191-45c1e5d3a090","Type":"ContainerStarted","Data":"ce6917041a8bfd1810138da6e9f362c03a9897208ddc7625a2d63afd22b8d0a8"} Mar 12 18:28:35.257024 master-0 kubenswrapper[7337]: I0312 18:28:35.256950 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp" podUID="f8b263f0-160c-4087-b047-90f1d53b9dba" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://66c6be84cf8a74721e872621308ca9116298e89c58340927ae4e71af086635f6" gracePeriod=30 Mar 12 18:28:35.257509 master-0 kubenswrapper[7337]: I0312 18:28:35.257479 7337 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="901fa277-f91f-4050-9bba-fc5a7d764d16" Mar 12 18:28:35.257572 master-0 kubenswrapper[7337]: I0312 18:28:35.257542 7337 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="901fa277-f91f-4050-9bba-fc5a7d764d16" Mar 12 18:28:35.493496 master-0 kubenswrapper[7337]: I0312 
18:28:35.493365 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:35.493496 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:35.493496 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:35.493496 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:35.493496 master-0 kubenswrapper[7337]: I0312 18:28:35.493452 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:36.268748 master-0 kubenswrapper[7337]: I0312 18:28:36.268680 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" event={"ID":"4cb73c69-af16-4565-bdb5-aeae9dcfb423","Type":"ContainerStarted","Data":"4d14cf356a45b87bedba837114945ff27dddf151bc1c718cb0f056aecd18d911"} Mar 12 18:28:36.270856 master-0 kubenswrapper[7337]: I0312 18:28:36.270814 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" event={"ID":"0a18e6ce-2fed-4e81-9191-45c1e5d3a090","Type":"ContainerStarted","Data":"660d6cb7ac45d8c8e280bd8037da6efe2ef8548c41dcd02f688edd458d998314"} Mar 12 18:28:36.289877 master-0 kubenswrapper[7337]: I0312 18:28:36.289749 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" podStartSLOduration=2.289693345 podStartE2EDuration="2.289693345s" podCreationTimestamp="2026-03-12 18:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:28:36.287480159 
+0000 UTC m=+916.756081126" watchObservedRunningTime="2026-03-12 18:28:36.289693345 +0000 UTC m=+916.758294322" Mar 12 18:28:36.301862 master-0 kubenswrapper[7337]: I0312 18:28:36.301719 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=2.301685016 podStartE2EDuration="2.301685016s" podCreationTimestamp="2026-03-12 18:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:28:35.304574926 +0000 UTC m=+915.773175873" watchObservedRunningTime="2026-03-12 18:28:36.301685016 +0000 UTC m=+916.770285983" Mar 12 18:28:36.317506 master-0 kubenswrapper[7337]: I0312 18:28:36.317075 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" podStartSLOduration=2.317041753 podStartE2EDuration="2.317041753s" podCreationTimestamp="2026-03-12 18:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:28:36.316030697 +0000 UTC m=+916.784631674" watchObservedRunningTime="2026-03-12 18:28:36.317041753 +0000 UTC m=+916.785642710" Mar 12 18:28:36.493227 master-0 kubenswrapper[7337]: I0312 18:28:36.493119 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:36.493227 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:36.493227 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:36.493227 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:36.493816 master-0 kubenswrapper[7337]: I0312 18:28:36.493265 7337 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:37.492963 master-0 kubenswrapper[7337]: I0312 18:28:37.492898 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:37.492963 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:37.492963 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:37.492963 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:37.492963 master-0 kubenswrapper[7337]: I0312 18:28:37.492957 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:37.723305 master-0 kubenswrapper[7337]: I0312 18:28:37.723220 7337 scope.go:117] "RemoveContainer" containerID="0947e2aa600f02809bc0e80e74513829af30ca5ce847f92836ff0aa87ebf24ca" Mar 12 18:28:37.723762 master-0 kubenswrapper[7337]: E0312 18:28:37.723714 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-ljw8b_openshift-authentication-operator(062f1b21-2ffc-47da-8334-427c3b2a1a90)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" Mar 12 18:28:38.492741 master-0 kubenswrapper[7337]: I0312 18:28:38.492672 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:38.492741 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:38.492741 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:38.492741 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:38.493326 master-0 kubenswrapper[7337]: I0312 18:28:38.492757 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:39.492709 master-0 kubenswrapper[7337]: I0312 18:28:39.492608 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:39.492709 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:39.492709 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:39.492709 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:39.493715 master-0 kubenswrapper[7337]: I0312 18:28:39.492717 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:40.492902 master-0 kubenswrapper[7337]: I0312 18:28:40.492819 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:40.492902 master-0 kubenswrapper[7337]: 
[-]has-synced failed: reason withheld Mar 12 18:28:40.492902 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:40.492902 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:40.494038 master-0 kubenswrapper[7337]: I0312 18:28:40.492907 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:41.493615 master-0 kubenswrapper[7337]: I0312 18:28:41.493414 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:41.493615 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:41.493615 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:41.493615 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:41.494727 master-0 kubenswrapper[7337]: I0312 18:28:41.493639 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:41.862731 master-0 kubenswrapper[7337]: E0312 18:28:41.862649 7337 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="66c6be84cf8a74721e872621308ca9116298e89c58340927ae4e71af086635f6" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 18:28:41.865268 master-0 kubenswrapper[7337]: E0312 18:28:41.865178 7337 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot 
register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="66c6be84cf8a74721e872621308ca9116298e89c58340927ae4e71af086635f6" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 18:28:41.867709 master-0 kubenswrapper[7337]: E0312 18:28:41.867586 7337 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="66c6be84cf8a74721e872621308ca9116298e89c58340927ae4e71af086635f6" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 18:28:41.867845 master-0 kubenswrapper[7337]: E0312 18:28:41.867710 7337 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp" podUID="f8b263f0-160c-4087-b047-90f1d53b9dba" containerName="kube-multus-additional-cni-plugins" Mar 12 18:28:42.492698 master-0 kubenswrapper[7337]: I0312 18:28:42.492606 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:42.492698 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:42.492698 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:42.492698 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:42.492698 master-0 kubenswrapper[7337]: I0312 18:28:42.492693 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:43.492977 master-0 kubenswrapper[7337]: I0312 18:28:43.492879 7337 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:43.492977 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:43.492977 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:43.492977 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:43.493807 master-0 kubenswrapper[7337]: I0312 18:28:43.493018 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:44.493364 master-0 kubenswrapper[7337]: I0312 18:28:44.493270 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:44.493364 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:44.493364 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:44.493364 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:44.494408 master-0 kubenswrapper[7337]: I0312 18:28:44.493394 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:45.492194 master-0 kubenswrapper[7337]: I0312 18:28:45.492123 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:45.492194 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:45.492194 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:45.492194 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:45.492507 master-0 kubenswrapper[7337]: I0312 18:28:45.492203 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:45.724248 master-0 kubenswrapper[7337]: I0312 18:28:45.724158 7337 scope.go:117] "RemoveContainer" containerID="37c9fcab8917043972cb8da48f9b3a66fa98e29cb384d4ab82bdb89b8dd2d452" Mar 12 18:28:45.724939 master-0 kubenswrapper[7337]: E0312 18:28:45.724590 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4527l_openshift-ingress-operator(d94dc349-c5cb-4f12-8e48-867030af4981)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" podUID="d94dc349-c5cb-4f12-8e48-867030af4981" Mar 12 18:28:46.493052 master-0 kubenswrapper[7337]: I0312 18:28:46.492970 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:46.493052 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:46.493052 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:46.493052 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:46.493480 master-0 kubenswrapper[7337]: I0312 18:28:46.493087 7337 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:47.491646 master-0 kubenswrapper[7337]: I0312 18:28:47.491504 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:47.491646 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:47.491646 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:47.491646 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:47.491646 master-0 kubenswrapper[7337]: I0312 18:28:47.491642 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:48.492811 master-0 kubenswrapper[7337]: I0312 18:28:48.492670 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:48.492811 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:48.492811 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:48.492811 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:48.492811 master-0 kubenswrapper[7337]: I0312 18:28:48.492781 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:48.725102 
master-0 kubenswrapper[7337]: I0312 18:28:48.724984 7337 scope.go:117] "RemoveContainer" containerID="0947e2aa600f02809bc0e80e74513829af30ca5ce847f92836ff0aa87ebf24ca" Mar 12 18:28:48.725596 master-0 kubenswrapper[7337]: E0312 18:28:48.725483 7337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-ljw8b_openshift-authentication-operator(062f1b21-2ffc-47da-8334-427c3b2a1a90)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" podUID="062f1b21-2ffc-47da-8334-427c3b2a1a90" Mar 12 18:28:49.492507 master-0 kubenswrapper[7337]: I0312 18:28:49.492413 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:49.492507 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:49.492507 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:49.492507 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:49.492898 master-0 kubenswrapper[7337]: I0312 18:28:49.492550 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:50.492309 master-0 kubenswrapper[7337]: I0312 18:28:50.492231 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:50.492309 master-0 kubenswrapper[7337]: [-]has-synced failed: 
reason withheld Mar 12 18:28:50.492309 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:50.492309 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:50.492719 master-0 kubenswrapper[7337]: I0312 18:28:50.492324 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:51.491718 master-0 kubenswrapper[7337]: I0312 18:28:51.491658 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:51.491718 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:51.491718 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:51.491718 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:51.491718 master-0 kubenswrapper[7337]: I0312 18:28:51.491716 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:51.862139 master-0 kubenswrapper[7337]: E0312 18:28:51.862018 7337 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="66c6be84cf8a74721e872621308ca9116298e89c58340927ae4e71af086635f6" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 18:28:51.863351 master-0 kubenswrapper[7337]: E0312 18:28:51.863268 7337 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="66c6be84cf8a74721e872621308ca9116298e89c58340927ae4e71af086635f6" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 18:28:51.864817 master-0 kubenswrapper[7337]: E0312 18:28:51.864778 7337 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="66c6be84cf8a74721e872621308ca9116298e89c58340927ae4e71af086635f6" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 18:28:51.864909 master-0 kubenswrapper[7337]: E0312 18:28:51.864817 7337 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp" podUID="f8b263f0-160c-4087-b047-90f1d53b9dba" containerName="kube-multus-additional-cni-plugins" Mar 12 18:28:52.491917 master-0 kubenswrapper[7337]: I0312 18:28:52.491822 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:52.491917 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:52.491917 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:52.491917 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:52.491917 master-0 kubenswrapper[7337]: I0312 18:28:52.491880 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:53.022611 master-0 kubenswrapper[7337]: I0312 18:28:53.022490 7337 kubelet.go:2431] "SyncLoop 
REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 18:28:53.023030 master-0 kubenswrapper[7337]: I0312 18:28:53.022955 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://64bbe8f8e78fcdf7a8f37094d28682b6c744a6d2ce7b94afbf02202b8aaa42c7" gracePeriod=30 Mar 12 18:28:53.023131 master-0 kubenswrapper[7337]: I0312 18:28:53.023059 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://feb7a0602e16521ca8f037d98e053563e5dfd7b3fed109ded127b4e56a4c158c" gracePeriod=30 Mar 12 18:28:53.023232 master-0 kubenswrapper[7337]: I0312 18:28:53.023128 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" containerID="cri-o://6013ae8778b6f3db082ecdee07bf998643391f13699e5ddf7a85c9b9ddf833c3" gracePeriod=30 Mar 12 18:28:53.023289 master-0 kubenswrapper[7337]: I0312 18:28:53.023088 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager" containerID="cri-o://56c803b302b6c89542dd77ed04fecb43a59a8287926d38c4629dc8bd033d7a46" gracePeriod=30 Mar 12 18:28:53.024405 master-0 kubenswrapper[7337]: I0312 18:28:53.024376 7337 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 18:28:53.024911 master-0 
kubenswrapper[7337]: E0312 18:28:53.024871 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" Mar 12 18:28:53.024911 master-0 kubenswrapper[7337]: I0312 18:28:53.024903 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" Mar 12 18:28:53.025012 master-0 kubenswrapper[7337]: E0312 18:28:53.024918 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" Mar 12 18:28:53.025012 master-0 kubenswrapper[7337]: I0312 18:28:53.024928 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" Mar 12 18:28:53.025012 master-0 kubenswrapper[7337]: E0312 18:28:53.024971 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager-recovery-controller" Mar 12 18:28:53.025012 master-0 kubenswrapper[7337]: I0312 18:28:53.024983 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager-recovery-controller" Mar 12 18:28:53.025012 master-0 kubenswrapper[7337]: E0312 18:28:53.025004 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager" Mar 12 18:28:53.025203 master-0 kubenswrapper[7337]: I0312 18:28:53.025044 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager" Mar 12 18:28:53.025203 master-0 kubenswrapper[7337]: E0312 18:28:53.025067 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" Mar 12 18:28:53.025203 master-0 
kubenswrapper[7337]: I0312 18:28:53.025078 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" Mar 12 18:28:53.025203 master-0 kubenswrapper[7337]: E0312 18:28:53.025092 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager-cert-syncer" Mar 12 18:28:53.025203 master-0 kubenswrapper[7337]: I0312 18:28:53.025130 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager-cert-syncer" Mar 12 18:28:53.025467 master-0 kubenswrapper[7337]: I0312 18:28:53.025420 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" Mar 12 18:28:53.025574 master-0 kubenswrapper[7337]: I0312 18:28:53.025469 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" Mar 12 18:28:53.025574 master-0 kubenswrapper[7337]: I0312 18:28:53.025506 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager-cert-syncer" Mar 12 18:28:53.025574 master-0 kubenswrapper[7337]: I0312 18:28:53.025554 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" Mar 12 18:28:53.025574 master-0 kubenswrapper[7337]: I0312 18:28:53.025568 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager-cert-syncer" Mar 12 18:28:53.025737 master-0 kubenswrapper[7337]: I0312 18:28:53.025591 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager-recovery-controller" Mar 
12 18:28:53.025737 master-0 kubenswrapper[7337]: I0312 18:28:53.025635 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" Mar 12 18:28:53.025737 master-0 kubenswrapper[7337]: I0312 18:28:53.025650 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager" Mar 12 18:28:53.025737 master-0 kubenswrapper[7337]: I0312 18:28:53.025670 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager" Mar 12 18:28:53.025925 master-0 kubenswrapper[7337]: E0312 18:28:53.025892 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager" Mar 12 18:28:53.025925 master-0 kubenswrapper[7337]: I0312 18:28:53.025912 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager" Mar 12 18:28:53.026007 master-0 kubenswrapper[7337]: E0312 18:28:53.025951 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" Mar 12 18:28:53.026007 master-0 kubenswrapper[7337]: I0312 18:28:53.025962 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" Mar 12 18:28:53.026007 master-0 kubenswrapper[7337]: E0312 18:28:53.025974 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager-cert-syncer" Mar 12 18:28:53.026007 master-0 kubenswrapper[7337]: I0312 18:28:53.025982 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager-cert-syncer" Mar 12 18:28:53.026296 
master-0 kubenswrapper[7337]: I0312 18:28:53.026241 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" Mar 12 18:28:53.026576 master-0 kubenswrapper[7337]: E0312 18:28:53.026552 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" Mar 12 18:28:53.026635 master-0 kubenswrapper[7337]: I0312 18:28:53.026601 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" Mar 12 18:28:53.138776 master-0 kubenswrapper[7337]: I0312 18:28:53.138724 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/49835aec35bdc5feca0d7cf24779b8da-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"49835aec35bdc5feca0d7cf24779b8da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:28:53.139029 master-0 kubenswrapper[7337]: I0312 18:28:53.139001 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/49835aec35bdc5feca0d7cf24779b8da-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"49835aec35bdc5feca0d7cf24779b8da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:28:53.240155 master-0 kubenswrapper[7337]: I0312 18:28:53.240090 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/49835aec35bdc5feca0d7cf24779b8da-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"49835aec35bdc5feca0d7cf24779b8da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:28:53.240417 master-0 kubenswrapper[7337]: I0312 18:28:53.240200 7337 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/49835aec35bdc5feca0d7cf24779b8da-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"49835aec35bdc5feca0d7cf24779b8da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:28:53.240417 master-0 kubenswrapper[7337]: I0312 18:28:53.240245 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/49835aec35bdc5feca0d7cf24779b8da-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"49835aec35bdc5feca0d7cf24779b8da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:28:53.240625 master-0 kubenswrapper[7337]: I0312 18:28:53.240431 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/49835aec35bdc5feca0d7cf24779b8da-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"49835aec35bdc5feca0d7cf24779b8da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:28:53.288624 master-0 kubenswrapper[7337]: I0312 18:28:53.288463 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager-cert-syncer/1.log" Mar 12 18:28:53.290065 master-0 kubenswrapper[7337]: I0312 18:28:53.290002 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/cluster-policy-controller/3.log" Mar 12 18:28:53.291937 master-0 kubenswrapper[7337]: I0312 18:28:53.291905 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager-cert-syncer/0.log" Mar 
12 18:28:53.292648 master-0 kubenswrapper[7337]: I0312 18:28:53.292604 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager/0.log" Mar 12 18:28:53.292818 master-0 kubenswrapper[7337]: I0312 18:28:53.292704 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:28:53.296101 master-0 kubenswrapper[7337]: I0312 18:28:53.296057 7337 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="39c441a05d91070efc538925475b0a44" podUID="49835aec35bdc5feca0d7cf24779b8da" Mar 12 18:28:53.340647 master-0 kubenswrapper[7337]: I0312 18:28:53.340600 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/39c441a05d91070efc538925475b0a44-cert-dir\") pod \"39c441a05d91070efc538925475b0a44\" (UID: \"39c441a05d91070efc538925475b0a44\") " Mar 12 18:28:53.340647 master-0 kubenswrapper[7337]: I0312 18:28:53.340676 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/39c441a05d91070efc538925475b0a44-resource-dir\") pod \"39c441a05d91070efc538925475b0a44\" (UID: \"39c441a05d91070efc538925475b0a44\") " Mar 12 18:28:53.341029 master-0 kubenswrapper[7337]: I0312 18:28:53.340741 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39c441a05d91070efc538925475b0a44-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "39c441a05d91070efc538925475b0a44" (UID: "39c441a05d91070efc538925475b0a44"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:28:53.341029 master-0 kubenswrapper[7337]: I0312 18:28:53.340782 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39c441a05d91070efc538925475b0a44-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "39c441a05d91070efc538925475b0a44" (UID: "39c441a05d91070efc538925475b0a44"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:28:53.341029 master-0 kubenswrapper[7337]: I0312 18:28:53.340968 7337 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/39c441a05d91070efc538925475b0a44-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:28:53.341029 master-0 kubenswrapper[7337]: I0312 18:28:53.340983 7337 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/39c441a05d91070efc538925475b0a44-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:28:53.403070 master-0 kubenswrapper[7337]: I0312 18:28:53.403001 7337 generic.go:334] "Generic (PLEG): container finished" podID="50322fdb-6d3f-4237-92d2-a170e2071de5" containerID="6ad45a6ad32331be3b965a6445d2f9fd3e17e7a370bbd99490dcb4dc21bb6f9f" exitCode=0 Mar 12 18:28:53.403414 master-0 kubenswrapper[7337]: I0312 18:28:53.403129 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"50322fdb-6d3f-4237-92d2-a170e2071de5","Type":"ContainerDied","Data":"6ad45a6ad32331be3b965a6445d2f9fd3e17e7a370bbd99490dcb4dc21bb6f9f"} Mar 12 18:28:53.406295 master-0 kubenswrapper[7337]: I0312 18:28:53.406243 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager-cert-syncer/1.log" Mar 12 18:28:53.407386 master-0 kubenswrapper[7337]: I0312 
18:28:53.407335 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/cluster-policy-controller/3.log" Mar 12 18:28:53.409159 master-0 kubenswrapper[7337]: I0312 18:28:53.409105 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager-cert-syncer/0.log" Mar 12 18:28:53.410195 master-0 kubenswrapper[7337]: I0312 18:28:53.410141 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager/0.log" Mar 12 18:28:53.410337 master-0 kubenswrapper[7337]: I0312 18:28:53.410214 7337 generic.go:334] "Generic (PLEG): container finished" podID="39c441a05d91070efc538925475b0a44" containerID="feb7a0602e16521ca8f037d98e053563e5dfd7b3fed109ded127b4e56a4c158c" exitCode=2 Mar 12 18:28:53.410337 master-0 kubenswrapper[7337]: I0312 18:28:53.410242 7337 generic.go:334] "Generic (PLEG): container finished" podID="39c441a05d91070efc538925475b0a44" containerID="6013ae8778b6f3db082ecdee07bf998643391f13699e5ddf7a85c9b9ddf833c3" exitCode=0 Mar 12 18:28:53.410337 master-0 kubenswrapper[7337]: I0312 18:28:53.410255 7337 generic.go:334] "Generic (PLEG): container finished" podID="39c441a05d91070efc538925475b0a44" containerID="56c803b302b6c89542dd77ed04fecb43a59a8287926d38c4629dc8bd033d7a46" exitCode=0 Mar 12 18:28:53.410337 master-0 kubenswrapper[7337]: I0312 18:28:53.410268 7337 generic.go:334] "Generic (PLEG): container finished" podID="39c441a05d91070efc538925475b0a44" containerID="64bbe8f8e78fcdf7a8f37094d28682b6c744a6d2ce7b94afbf02202b8aaa42c7" exitCode=0 Mar 12 18:28:53.410337 master-0 kubenswrapper[7337]: I0312 18:28:53.410321 7337 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="dd22c21b01ab8567576e92f9b78bcc2934cfd08f8466cc304cfffee656791ad7" Mar 12 18:28:53.410767 master-0 kubenswrapper[7337]: I0312 18:28:53.410344 7337 scope.go:117] "RemoveContainer" containerID="40bb332af0befdec702043170fe44c9cb61f64fd323636de64adc0352f5c7576" Mar 12 18:28:53.410767 master-0 kubenswrapper[7337]: I0312 18:28:53.410343 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:28:53.434931 master-0 kubenswrapper[7337]: I0312 18:28:53.434626 7337 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="39c441a05d91070efc538925475b0a44" podUID="49835aec35bdc5feca0d7cf24779b8da" Mar 12 18:28:53.441029 master-0 kubenswrapper[7337]: I0312 18:28:53.440978 7337 scope.go:117] "RemoveContainer" containerID="70a56fe639b2e29fe225557a46043eef2856d3d81e45b284d14769be0d5eb9f5" Mar 12 18:28:53.454157 master-0 kubenswrapper[7337]: I0312 18:28:53.454096 7337 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="39c441a05d91070efc538925475b0a44" podUID="49835aec35bdc5feca0d7cf24779b8da" Mar 12 18:28:53.463668 master-0 kubenswrapper[7337]: I0312 18:28:53.463606 7337 scope.go:117] "RemoveContainer" containerID="a84352e48f1355ad688a8d43acd0737d8ced53bb92d29ec7f76753f1e69e464d" Mar 12 18:28:53.492173 master-0 kubenswrapper[7337]: I0312 18:28:53.492093 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:53.492173 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:53.492173 master-0 kubenswrapper[7337]: 
[+]process-running ok Mar 12 18:28:53.492173 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:53.493452 master-0 kubenswrapper[7337]: I0312 18:28:53.492232 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:53.735397 master-0 kubenswrapper[7337]: I0312 18:28:53.734869 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39c441a05d91070efc538925475b0a44" path="/var/lib/kubelet/pods/39c441a05d91070efc538925475b0a44/volumes" Mar 12 18:28:54.418705 master-0 kubenswrapper[7337]: I0312 18:28:54.418637 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_39c441a05d91070efc538925475b0a44/kube-controller-manager-cert-syncer/1.log" Mar 12 18:28:54.493177 master-0 kubenswrapper[7337]: I0312 18:28:54.493102 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:54.493177 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:54.493177 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:54.493177 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:54.493711 master-0 kubenswrapper[7337]: I0312 18:28:54.493200 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:54.779147 master-0 kubenswrapper[7337]: I0312 18:28:54.779084 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 18:28:54.861318 master-0 kubenswrapper[7337]: I0312 18:28:54.861277 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50322fdb-6d3f-4237-92d2-a170e2071de5-kube-api-access\") pod \"50322fdb-6d3f-4237-92d2-a170e2071de5\" (UID: \"50322fdb-6d3f-4237-92d2-a170e2071de5\") " Mar 12 18:28:54.861566 master-0 kubenswrapper[7337]: I0312 18:28:54.861416 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50322fdb-6d3f-4237-92d2-a170e2071de5-var-lock\") pod \"50322fdb-6d3f-4237-92d2-a170e2071de5\" (UID: \"50322fdb-6d3f-4237-92d2-a170e2071de5\") " Mar 12 18:28:54.861566 master-0 kubenswrapper[7337]: I0312 18:28:54.861487 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50322fdb-6d3f-4237-92d2-a170e2071de5-kubelet-dir\") pod \"50322fdb-6d3f-4237-92d2-a170e2071de5\" (UID: \"50322fdb-6d3f-4237-92d2-a170e2071de5\") " Mar 12 18:28:54.861741 master-0 kubenswrapper[7337]: I0312 18:28:54.861694 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50322fdb-6d3f-4237-92d2-a170e2071de5-var-lock" (OuterVolumeSpecName: "var-lock") pod "50322fdb-6d3f-4237-92d2-a170e2071de5" (UID: "50322fdb-6d3f-4237-92d2-a170e2071de5"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:28:54.861814 master-0 kubenswrapper[7337]: I0312 18:28:54.861756 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50322fdb-6d3f-4237-92d2-a170e2071de5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "50322fdb-6d3f-4237-92d2-a170e2071de5" (UID: "50322fdb-6d3f-4237-92d2-a170e2071de5"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:28:54.861953 master-0 kubenswrapper[7337]: I0312 18:28:54.861921 7337 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50322fdb-6d3f-4237-92d2-a170e2071de5-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 18:28:54.861953 master-0 kubenswrapper[7337]: I0312 18:28:54.861942 7337 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50322fdb-6d3f-4237-92d2-a170e2071de5-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:28:54.864562 master-0 kubenswrapper[7337]: I0312 18:28:54.864465 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50322fdb-6d3f-4237-92d2-a170e2071de5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "50322fdb-6d3f-4237-92d2-a170e2071de5" (UID: "50322fdb-6d3f-4237-92d2-a170e2071de5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:28:54.963538 master-0 kubenswrapper[7337]: I0312 18:28:54.963469 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50322fdb-6d3f-4237-92d2-a170e2071de5-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 18:28:55.430850 master-0 kubenswrapper[7337]: I0312 18:28:55.430769 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"50322fdb-6d3f-4237-92d2-a170e2071de5","Type":"ContainerDied","Data":"9b35a0992276a5a57f41cc10a07b53753228078f2cd4dbc9d5d05061bb670327"} Mar 12 18:28:55.430850 master-0 kubenswrapper[7337]: I0312 18:28:55.430830 7337 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b35a0992276a5a57f41cc10a07b53753228078f2cd4dbc9d5d05061bb670327" Mar 12 18:28:55.430850 master-0 kubenswrapper[7337]: I0312 18:28:55.430844 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 18:28:55.494069 master-0 kubenswrapper[7337]: I0312 18:28:55.493937 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:55.494069 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:55.494069 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:55.494069 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:55.495097 master-0 kubenswrapper[7337]: I0312 18:28:55.494058 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:56.493426 master-0 kubenswrapper[7337]: I0312 18:28:56.493328 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:56.493426 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:56.493426 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:56.493426 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:56.493978 master-0 kubenswrapper[7337]: I0312 18:28:56.493449 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:57.492089 master-0 kubenswrapper[7337]: I0312 18:28:57.492013 7337 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:57.492089 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:57.492089 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:57.492089 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:57.492845 master-0 kubenswrapper[7337]: I0312 18:28:57.492095 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:58.492244 master-0 kubenswrapper[7337]: I0312 18:28:58.492145 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:28:58.492244 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:58.492244 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:58.492244 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:58.492244 master-0 kubenswrapper[7337]: I0312 18:28:58.492225 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:59.492059 master-0 kubenswrapper[7337]: I0312 18:28:59.491982 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 
18:28:59.492059 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:28:59.492059 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:28:59.492059 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:28:59.492059 master-0 kubenswrapper[7337]: I0312 18:28:59.492054 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:28:59.728291 master-0 kubenswrapper[7337]: I0312 18:28:59.728202 7337 scope.go:117] "RemoveContainer" containerID="37c9fcab8917043972cb8da48f9b3a66fa98e29cb384d4ab82bdb89b8dd2d452" Mar 12 18:29:00.471442 master-0 kubenswrapper[7337]: I0312 18:29:00.471396 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4527l_d94dc349-c5cb-4f12-8e48-867030af4981/ingress-operator/4.log" Mar 12 18:29:00.471778 master-0 kubenswrapper[7337]: I0312 18:29:00.471745 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" event={"ID":"d94dc349-c5cb-4f12-8e48-867030af4981","Type":"ContainerStarted","Data":"6ced8f024a281094c6264efd6d0ac8d2082a1c572c0b9ff7eea46e0401cb69e3"} Mar 12 18:29:00.493638 master-0 kubenswrapper[7337]: I0312 18:29:00.493594 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:00.493638 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:29:00.493638 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:29:00.493638 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:29:00.494167 master-0 kubenswrapper[7337]: I0312 18:29:00.493644 
7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:00.722934 master-0 kubenswrapper[7337]: I0312 18:29:00.722840 7337 scope.go:117] "RemoveContainer" containerID="0947e2aa600f02809bc0e80e74513829af30ca5ce847f92836ff0aa87ebf24ca" Mar 12 18:29:01.482781 master-0 kubenswrapper[7337]: I0312 18:29:01.482723 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ljw8b_062f1b21-2ffc-47da-8334-427c3b2a1a90/authentication-operator/4.log" Mar 12 18:29:01.482781 master-0 kubenswrapper[7337]: I0312 18:29:01.482775 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" event={"ID":"062f1b21-2ffc-47da-8334-427c3b2a1a90","Type":"ContainerStarted","Data":"efd4624fd451a54f736f105df3fa5cc5facf41c27c130b371018ee20c5099eb9"} Mar 12 18:29:01.491759 master-0 kubenswrapper[7337]: I0312 18:29:01.491704 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:01.491759 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:29:01.491759 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:29:01.491759 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:29:01.491759 master-0 kubenswrapper[7337]: I0312 18:29:01.491745 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:01.862098 
master-0 kubenswrapper[7337]: E0312 18:29:01.861994 7337 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="66c6be84cf8a74721e872621308ca9116298e89c58340927ae4e71af086635f6" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 18:29:01.863744 master-0 kubenswrapper[7337]: E0312 18:29:01.863678 7337 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="66c6be84cf8a74721e872621308ca9116298e89c58340927ae4e71af086635f6" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 18:29:01.865494 master-0 kubenswrapper[7337]: E0312 18:29:01.865430 7337 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="66c6be84cf8a74721e872621308ca9116298e89c58340927ae4e71af086635f6" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 18:29:01.865629 master-0 kubenswrapper[7337]: E0312 18:29:01.865489 7337 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp" podUID="f8b263f0-160c-4087-b047-90f1d53b9dba" containerName="kube-multus-additional-cni-plugins" Mar 12 18:29:02.492343 master-0 kubenswrapper[7337]: I0312 18:29:02.492240 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:02.492343 master-0 kubenswrapper[7337]: [-]has-synced failed: reason 
withheld Mar 12 18:29:02.492343 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:29:02.492343 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:29:02.492343 master-0 kubenswrapper[7337]: I0312 18:29:02.492326 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:03.492778 master-0 kubenswrapper[7337]: I0312 18:29:03.492689 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:03.492778 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:29:03.492778 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:29:03.492778 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:29:03.493810 master-0 kubenswrapper[7337]: I0312 18:29:03.492802 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:04.492855 master-0 kubenswrapper[7337]: I0312 18:29:04.492789 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:04.492855 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:29:04.492855 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:29:04.492855 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:29:04.493553 master-0 kubenswrapper[7337]: I0312 
18:29:04.492889 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:05.393581 master-0 kubenswrapper[7337]: I0312 18:29:05.393503 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-xflrp_f8b263f0-160c-4087-b047-90f1d53b9dba/kube-multus-additional-cni-plugins/0.log" Mar 12 18:29:05.393853 master-0 kubenswrapper[7337]: I0312 18:29:05.393625 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp" Mar 12 18:29:05.428044 master-0 kubenswrapper[7337]: I0312 18:29:05.427961 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f8b263f0-160c-4087-b047-90f1d53b9dba-cni-sysctl-allowlist\") pod \"f8b263f0-160c-4087-b047-90f1d53b9dba\" (UID: \"f8b263f0-160c-4087-b047-90f1d53b9dba\") " Mar 12 18:29:05.428291 master-0 kubenswrapper[7337]: I0312 18:29:05.428063 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f8b263f0-160c-4087-b047-90f1d53b9dba-tuning-conf-dir\") pod \"f8b263f0-160c-4087-b047-90f1d53b9dba\" (UID: \"f8b263f0-160c-4087-b047-90f1d53b9dba\") " Mar 12 18:29:05.428291 master-0 kubenswrapper[7337]: I0312 18:29:05.428097 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4hnx\" (UniqueName: \"kubernetes.io/projected/f8b263f0-160c-4087-b047-90f1d53b9dba-kube-api-access-l4hnx\") pod \"f8b263f0-160c-4087-b047-90f1d53b9dba\" (UID: \"f8b263f0-160c-4087-b047-90f1d53b9dba\") " Mar 12 18:29:05.428291 master-0 kubenswrapper[7337]: I0312 18:29:05.428208 7337 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/f8b263f0-160c-4087-b047-90f1d53b9dba-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "f8b263f0-160c-4087-b047-90f1d53b9dba" (UID: "f8b263f0-160c-4087-b047-90f1d53b9dba"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:29:05.428291 master-0 kubenswrapper[7337]: I0312 18:29:05.428275 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f8b263f0-160c-4087-b047-90f1d53b9dba-ready\") pod \"f8b263f0-160c-4087-b047-90f1d53b9dba\" (UID: \"f8b263f0-160c-4087-b047-90f1d53b9dba\") " Mar 12 18:29:05.428649 master-0 kubenswrapper[7337]: I0312 18:29:05.428555 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8b263f0-160c-4087-b047-90f1d53b9dba-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "f8b263f0-160c-4087-b047-90f1d53b9dba" (UID: "f8b263f0-160c-4087-b047-90f1d53b9dba"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:29:05.428947 master-0 kubenswrapper[7337]: I0312 18:29:05.428894 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8b263f0-160c-4087-b047-90f1d53b9dba-ready" (OuterVolumeSpecName: "ready") pod "f8b263f0-160c-4087-b047-90f1d53b9dba" (UID: "f8b263f0-160c-4087-b047-90f1d53b9dba"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:29:05.429035 master-0 kubenswrapper[7337]: I0312 18:29:05.429017 7337 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f8b263f0-160c-4087-b047-90f1d53b9dba-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Mar 12 18:29:05.429106 master-0 kubenswrapper[7337]: I0312 18:29:05.429043 7337 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f8b263f0-160c-4087-b047-90f1d53b9dba-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:29:05.429106 master-0 kubenswrapper[7337]: I0312 18:29:05.429062 7337 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f8b263f0-160c-4087-b047-90f1d53b9dba-ready\") on node \"master-0\" DevicePath \"\"" Mar 12 18:29:05.433092 master-0 kubenswrapper[7337]: I0312 18:29:05.432887 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8b263f0-160c-4087-b047-90f1d53b9dba-kube-api-access-l4hnx" (OuterVolumeSpecName: "kube-api-access-l4hnx") pod "f8b263f0-160c-4087-b047-90f1d53b9dba" (UID: "f8b263f0-160c-4087-b047-90f1d53b9dba"). InnerVolumeSpecName "kube-api-access-l4hnx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:29:05.491991 master-0 kubenswrapper[7337]: I0312 18:29:05.491925 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:05.491991 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:29:05.491991 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:29:05.491991 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:29:05.491991 master-0 kubenswrapper[7337]: I0312 18:29:05.491990 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:05.517318 master-0 kubenswrapper[7337]: I0312 18:29:05.517260 7337 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-xflrp_f8b263f0-160c-4087-b047-90f1d53b9dba/kube-multus-additional-cni-plugins/0.log" Mar 12 18:29:05.517318 master-0 kubenswrapper[7337]: I0312 18:29:05.517319 7337 generic.go:334] "Generic (PLEG): container finished" podID="f8b263f0-160c-4087-b047-90f1d53b9dba" containerID="66c6be84cf8a74721e872621308ca9116298e89c58340927ae4e71af086635f6" exitCode=137 Mar 12 18:29:05.518692 master-0 kubenswrapper[7337]: I0312 18:29:05.517351 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp" event={"ID":"f8b263f0-160c-4087-b047-90f1d53b9dba","Type":"ContainerDied","Data":"66c6be84cf8a74721e872621308ca9116298e89c58340927ae4e71af086635f6"} Mar 12 18:29:05.518692 master-0 kubenswrapper[7337]: I0312 18:29:05.517382 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp" 
event={"ID":"f8b263f0-160c-4087-b047-90f1d53b9dba","Type":"ContainerDied","Data":"2e2fa4f0bfc5fd3f9c2f36bc6607a6612764a91512ae0acd23e22f3b743e0528"} Mar 12 18:29:05.518692 master-0 kubenswrapper[7337]: I0312 18:29:05.517401 7337 scope.go:117] "RemoveContainer" containerID="66c6be84cf8a74721e872621308ca9116298e89c58340927ae4e71af086635f6" Mar 12 18:29:05.518692 master-0 kubenswrapper[7337]: I0312 18:29:05.517580 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-xflrp" Mar 12 18:29:05.530356 master-0 kubenswrapper[7337]: I0312 18:29:05.530304 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4hnx\" (UniqueName: \"kubernetes.io/projected/f8b263f0-160c-4087-b047-90f1d53b9dba-kube-api-access-l4hnx\") on node \"master-0\" DevicePath \"\"" Mar 12 18:29:05.537128 master-0 kubenswrapper[7337]: I0312 18:29:05.537102 7337 scope.go:117] "RemoveContainer" containerID="66c6be84cf8a74721e872621308ca9116298e89c58340927ae4e71af086635f6" Mar 12 18:29:05.537749 master-0 kubenswrapper[7337]: E0312 18:29:05.537632 7337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66c6be84cf8a74721e872621308ca9116298e89c58340927ae4e71af086635f6\": container with ID starting with 66c6be84cf8a74721e872621308ca9116298e89c58340927ae4e71af086635f6 not found: ID does not exist" containerID="66c6be84cf8a74721e872621308ca9116298e89c58340927ae4e71af086635f6" Mar 12 18:29:05.537749 master-0 kubenswrapper[7337]: I0312 18:29:05.537710 7337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66c6be84cf8a74721e872621308ca9116298e89c58340927ae4e71af086635f6"} err="failed to get container status \"66c6be84cf8a74721e872621308ca9116298e89c58340927ae4e71af086635f6\": rpc error: code = NotFound desc = could not find container \"66c6be84cf8a74721e872621308ca9116298e89c58340927ae4e71af086635f6\": 
container with ID starting with 66c6be84cf8a74721e872621308ca9116298e89c58340927ae4e71af086635f6 not found: ID does not exist" Mar 12 18:29:05.564166 master-0 kubenswrapper[7337]: I0312 18:29:05.564084 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-xflrp"] Mar 12 18:29:05.570673 master-0 kubenswrapper[7337]: I0312 18:29:05.570613 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-xflrp"] Mar 12 18:29:05.722496 master-0 kubenswrapper[7337]: I0312 18:29:05.722444 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:29:05.738942 master-0 kubenswrapper[7337]: I0312 18:29:05.738890 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8b263f0-160c-4087-b047-90f1d53b9dba" path="/var/lib/kubelet/pods/f8b263f0-160c-4087-b047-90f1d53b9dba/volumes" Mar 12 18:29:05.739444 master-0 kubenswrapper[7337]: I0312 18:29:05.739402 7337 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="db4a5061-a30d-49bd-9590-6e69d9ec60c9" Mar 12 18:29:05.739490 master-0 kubenswrapper[7337]: I0312 18:29:05.739445 7337 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="db4a5061-a30d-49bd-9590-6e69d9ec60c9" Mar 12 18:29:05.756546 master-0 kubenswrapper[7337]: I0312 18:29:05.755065 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 18:29:05.756546 master-0 kubenswrapper[7337]: I0312 18:29:05.755395 7337 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:29:05.757799 master-0 kubenswrapper[7337]: I0312 18:29:05.757775 7337 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 18:29:05.767972 master-0 kubenswrapper[7337]: I0312 18:29:05.766444 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:29:05.771554 master-0 kubenswrapper[7337]: I0312 18:29:05.771493 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 18:29:05.794347 master-0 kubenswrapper[7337]: W0312 18:29:05.794312 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49835aec35bdc5feca0d7cf24779b8da.slice/crio-677d598751a9389168de4b8d58e7ebd447bfc781ea7149cdcbbd5de656faaac5 WatchSource:0}: Error finding container 677d598751a9389168de4b8d58e7ebd447bfc781ea7149cdcbbd5de656faaac5: Status 404 returned error can't find the container with id 677d598751a9389168de4b8d58e7ebd447bfc781ea7149cdcbbd5de656faaac5 Mar 12 18:29:06.422229 master-0 kubenswrapper[7337]: I0312 18:29:06.422173 7337 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 12 18:29:06.422447 master-0 kubenswrapper[7337]: I0312 18:29:06.422411 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" containerID="cri-o://91fc9e27f58a493917f258512c2dfe1c4bf9d4efc52492f0f4d3e21237d1136f" gracePeriod=30 Mar 12 18:29:06.423806 master-0 kubenswrapper[7337]: I0312 18:29:06.423163 7337 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 12 18:29:06.423806 master-0 kubenswrapper[7337]: E0312 18:29:06.423427 7337 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="50322fdb-6d3f-4237-92d2-a170e2071de5" containerName="installer" Mar 12 18:29:06.423806 master-0 kubenswrapper[7337]: I0312 18:29:06.423446 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="50322fdb-6d3f-4237-92d2-a170e2071de5" containerName="installer" Mar 12 18:29:06.423806 master-0 kubenswrapper[7337]: E0312 18:29:06.423465 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 12 18:29:06.423806 master-0 kubenswrapper[7337]: I0312 18:29:06.423473 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 12 18:29:06.423806 master-0 kubenswrapper[7337]: E0312 18:29:06.423490 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 12 18:29:06.423806 master-0 kubenswrapper[7337]: I0312 18:29:06.423498 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 12 18:29:06.423806 master-0 kubenswrapper[7337]: E0312 18:29:06.423514 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b263f0-160c-4087-b047-90f1d53b9dba" containerName="kube-multus-additional-cni-plugins" Mar 12 18:29:06.423806 master-0 kubenswrapper[7337]: I0312 18:29:06.423540 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b263f0-160c-4087-b047-90f1d53b9dba" containerName="kube-multus-additional-cni-plugins" Mar 12 18:29:06.423806 master-0 kubenswrapper[7337]: E0312 18:29:06.423559 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 12 18:29:06.423806 master-0 kubenswrapper[7337]: I0312 18:29:06.423566 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 12 18:29:06.423806 master-0 
kubenswrapper[7337]: I0312 18:29:06.423688 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b263f0-160c-4087-b047-90f1d53b9dba" containerName="kube-multus-additional-cni-plugins" Mar 12 18:29:06.423806 master-0 kubenswrapper[7337]: I0312 18:29:06.423709 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 12 18:29:06.423806 master-0 kubenswrapper[7337]: I0312 18:29:06.423718 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="50322fdb-6d3f-4237-92d2-a170e2071de5" containerName="installer" Mar 12 18:29:06.423806 master-0 kubenswrapper[7337]: I0312 18:29:06.423732 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 12 18:29:06.426044 master-0 kubenswrapper[7337]: I0312 18:29:06.424685 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 12 18:29:06.426044 master-0 kubenswrapper[7337]: I0312 18:29:06.425663 7337 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 18:29:06.491966 master-0 kubenswrapper[7337]: I0312 18:29:06.491915 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:06.491966 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:29:06.491966 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:29:06.491966 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:29:06.492253 master-0 kubenswrapper[7337]: I0312 18:29:06.491996 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:06.530630 master-0 kubenswrapper[7337]: I0312 18:29:06.530559 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"49835aec35bdc5feca0d7cf24779b8da","Type":"ContainerStarted","Data":"f62d8ace6b78a3d4700c1f018543131bd0581db10be6e4a0ffe1a906b4efcd0a"} Mar 12 18:29:06.530630 master-0 kubenswrapper[7337]: I0312 18:29:06.530619 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"49835aec35bdc5feca0d7cf24779b8da","Type":"ContainerStarted","Data":"996ba8fb061459a072fc0dac62d85e8970305954c92459dbfc764353eca2dc98"} Mar 12 18:29:06.530630 master-0 kubenswrapper[7337]: I0312 18:29:06.530633 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"49835aec35bdc5feca0d7cf24779b8da","Type":"ContainerStarted","Data":"677d598751a9389168de4b8d58e7ebd447bfc781ea7149cdcbbd5de656faaac5"} Mar 12 18:29:06.544595 master-0 kubenswrapper[7337]: I0312 18:29:06.544370 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 18:29:06.544595 master-0 kubenswrapper[7337]: I0312 18:29:06.544468 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 18:29:06.645613 master-0 kubenswrapper[7337]: I0312 18:29:06.645356 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 18:29:06.645613 master-0 kubenswrapper[7337]: I0312 18:29:06.645423 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 18:29:06.645613 master-0 kubenswrapper[7337]: I0312 18:29:06.645472 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 18:29:06.645613 master-0 kubenswrapper[7337]: I0312 18:29:06.645489 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 18:29:06.667316 master-0 kubenswrapper[7337]: I0312 18:29:06.667135 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 12 18:29:06.671221 master-0 kubenswrapper[7337]: I0312 18:29:06.671133 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 18:29:06.674229 master-0 kubenswrapper[7337]: I0312 18:29:06.674194 7337 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 12 18:29:06.690862 master-0 kubenswrapper[7337]: I0312 18:29:06.690678 7337 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="1bfd33d4-47b3-49d7-b323-660cf35b7d89" Mar 12 18:29:06.709908 master-0 kubenswrapper[7337]: W0312 18:29:06.709862 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1453f6461bf5d599ad65a4656343ee91.slice/crio-9c555e5bffa63ad24656c5dfa5ef32654f3cce81a377d07d84caf4aca5f33e3f WatchSource:0}: Error finding container 9c555e5bffa63ad24656c5dfa5ef32654f3cce81a377d07d84caf4aca5f33e3f: Status 404 returned error can't find the container with id 
9c555e5bffa63ad24656c5dfa5ef32654f3cce81a377d07d84caf4aca5f33e3f Mar 12 18:29:06.748555 master-0 kubenswrapper[7337]: I0312 18:29:06.746641 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"a1a56802af72ce1aac6b5077f1695ac0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " Mar 12 18:29:06.748555 master-0 kubenswrapper[7337]: I0312 18:29:06.746727 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"a1a56802af72ce1aac6b5077f1695ac0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " Mar 12 18:29:06.748555 master-0 kubenswrapper[7337]: I0312 18:29:06.746799 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs" (OuterVolumeSpecName: "logs") pod "a1a56802af72ce1aac6b5077f1695ac0" (UID: "a1a56802af72ce1aac6b5077f1695ac0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:29:06.748555 master-0 kubenswrapper[7337]: I0312 18:29:06.746833 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets" (OuterVolumeSpecName: "secrets") pod "a1a56802af72ce1aac6b5077f1695ac0" (UID: "a1a56802af72ce1aac6b5077f1695ac0"). InnerVolumeSpecName "secrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:29:06.748555 master-0 kubenswrapper[7337]: I0312 18:29:06.747425 7337 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") on node \"master-0\" DevicePath \"\""
Mar 12 18:29:06.748555 master-0 kubenswrapper[7337]: I0312 18:29:06.747455 7337 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") on node \"master-0\" DevicePath \"\""
Mar 12 18:29:07.492463 master-0 kubenswrapper[7337]: I0312 18:29:07.492404 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:29:07.492463 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:29:07.492463 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:29:07.492463 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:29:07.492913 master-0 kubenswrapper[7337]: I0312 18:29:07.492472 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:29:07.538241 master-0 kubenswrapper[7337]: I0312 18:29:07.537065 7337 generic.go:334] "Generic (PLEG): container finished" podID="1453f6461bf5d599ad65a4656343ee91" containerID="43fec13eaecff4e5dfee1960d9d80a34d149510a17fc33563f826b5c69991892" exitCode=0
Mar 12 18:29:07.538241 master-0 kubenswrapper[7337]: I0312 18:29:07.537104 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerDied","Data":"43fec13eaecff4e5dfee1960d9d80a34d149510a17fc33563f826b5c69991892"}
Mar 12 18:29:07.538241 master-0 kubenswrapper[7337]: I0312 18:29:07.537137 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"9c555e5bffa63ad24656c5dfa5ef32654f3cce81a377d07d84caf4aca5f33e3f"}
Mar 12 18:29:07.539396 master-0 kubenswrapper[7337]: I0312 18:29:07.539216 7337 generic.go:334] "Generic (PLEG): container finished" podID="0a18e6ce-2fed-4e81-9191-45c1e5d3a090" containerID="660d6cb7ac45d8c8e280bd8037da6efe2ef8548c41dcd02f688edd458d998314" exitCode=0
Mar 12 18:29:07.539396 master-0 kubenswrapper[7337]: I0312 18:29:07.539328 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" event={"ID":"0a18e6ce-2fed-4e81-9191-45c1e5d3a090","Type":"ContainerDied","Data":"660d6cb7ac45d8c8e280bd8037da6efe2ef8548c41dcd02f688edd458d998314"}
Mar 12 18:29:07.543899 master-0 kubenswrapper[7337]: I0312 18:29:07.543671 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"49835aec35bdc5feca0d7cf24779b8da","Type":"ContainerStarted","Data":"69f074324c83bb75cf9edb644e26f8a566f617056c611c320fb03fd80290ef36"}
Mar 12 18:29:07.543899 master-0 kubenswrapper[7337]: I0312 18:29:07.543726 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"49835aec35bdc5feca0d7cf24779b8da","Type":"ContainerStarted","Data":"65937ab55749d18637d8330aa44ceee2c94d4c78aeac6055c20ae0425fb42bf6"}
Mar 12 18:29:07.546556 master-0 kubenswrapper[7337]: I0312 18:29:07.546461 7337 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="91fc9e27f58a493917f258512c2dfe1c4bf9d4efc52492f0f4d3e21237d1136f" exitCode=0
Mar 12 18:29:07.546733 master-0 kubenswrapper[7337]: I0312 18:29:07.546564 7337 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91bbe32b85272f0f9f735ba2a67b1085ea37b3016231eb6d6938a08eed1a3b9d"
Mar 12 18:29:07.546733 master-0 kubenswrapper[7337]: I0312 18:29:07.546587 7337 scope.go:117] "RemoveContainer" containerID="ab77ac8c9287ab57ea2a467e8e1f1dee411c2f94f5642f785b9b7baa2542752b"
Mar 12 18:29:07.546733 master-0 kubenswrapper[7337]: I0312 18:29:07.546581 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 18:29:07.583804 master-0 kubenswrapper[7337]: I0312 18:29:07.583694 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.58367435 podStartE2EDuration="2.58367435s" podCreationTimestamp="2026-03-12 18:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:29:07.582596123 +0000 UTC m=+948.051197080" watchObservedRunningTime="2026-03-12 18:29:07.58367435 +0000 UTC m=+948.052275297"
Mar 12 18:29:07.737328 master-0 kubenswrapper[7337]: I0312 18:29:07.737275 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1a56802af72ce1aac6b5077f1695ac0" path="/var/lib/kubelet/pods/a1a56802af72ce1aac6b5077f1695ac0/volumes"
Mar 12 18:29:07.737590 master-0 kubenswrapper[7337]: I0312 18:29:07.737561 7337 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID=""
Mar 12 18:29:07.751732 master-0 kubenswrapper[7337]: I0312 18:29:07.751651 7337 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 12 18:29:07.751916 master-0 kubenswrapper[7337]: I0312 18:29:07.751899 7337 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="1bfd33d4-47b3-49d7-b323-660cf35b7d89"
Mar 12 18:29:07.754717 master-0 kubenswrapper[7337]: I0312 18:29:07.754677 7337 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 12 18:29:07.754782 master-0 kubenswrapper[7337]: I0312 18:29:07.754715 7337 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="1bfd33d4-47b3-49d7-b323-660cf35b7d89"
Mar 12 18:29:08.492245 master-0 kubenswrapper[7337]: I0312 18:29:08.492188 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:29:08.492245 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:29:08.492245 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:29:08.492245 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:29:08.492245 master-0 kubenswrapper[7337]: I0312 18:29:08.492244 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:29:08.554951 master-0 kubenswrapper[7337]: I0312 18:29:08.554906 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"9398dfdf2392e3a31a313f647f8c7402bae4efc9bf142697e60f18f631f4f9da"}
Mar 12 18:29:08.554951 master-0 kubenswrapper[7337]: I0312 18:29:08.554946 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"2a1fd65e7a43d1b5dcde44cfab17aaf88383a780e7e8fdd45682ee7f8eb7ffc3"}
Mar 12 18:29:08.554951 master-0 kubenswrapper[7337]: I0312 18:29:08.554959 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"c24a11c569ed28517312963d8cc79cf04602675bd0a036245bd76c38b6e0f58e"}
Mar 12 18:29:08.555818 master-0 kubenswrapper[7337]: I0312 18:29:08.555795 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 18:29:08.574740 master-0 kubenswrapper[7337]: I0312 18:29:08.574670 7337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.574654222 podStartE2EDuration="2.574654222s" podCreationTimestamp="2026-03-12 18:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:29:08.573902203 +0000 UTC m=+949.042503150" watchObservedRunningTime="2026-03-12 18:29:08.574654222 +0000 UTC m=+949.043255169"
Mar 12 18:29:08.812850 master-0 kubenswrapper[7337]: I0312 18:29:08.812795 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0"
Mar 12 18:29:08.871590 master-0 kubenswrapper[7337]: I0312 18:29:08.871283 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0a18e6ce-2fed-4e81-9191-45c1e5d3a090-var-lock\") pod \"0a18e6ce-2fed-4e81-9191-45c1e5d3a090\" (UID: \"0a18e6ce-2fed-4e81-9191-45c1e5d3a090\") "
Mar 12 18:29:08.871590 master-0 kubenswrapper[7337]: I0312 18:29:08.871372 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a18e6ce-2fed-4e81-9191-45c1e5d3a090-kube-api-access\") pod \"0a18e6ce-2fed-4e81-9191-45c1e5d3a090\" (UID: \"0a18e6ce-2fed-4e81-9191-45c1e5d3a090\") "
Mar 12 18:29:08.871590 master-0 kubenswrapper[7337]: I0312 18:29:08.871408 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a18e6ce-2fed-4e81-9191-45c1e5d3a090-kubelet-dir\") pod \"0a18e6ce-2fed-4e81-9191-45c1e5d3a090\" (UID: \"0a18e6ce-2fed-4e81-9191-45c1e5d3a090\") "
Mar 12 18:29:08.871590 master-0 kubenswrapper[7337]: I0312 18:29:08.871414 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a18e6ce-2fed-4e81-9191-45c1e5d3a090-var-lock" (OuterVolumeSpecName: "var-lock") pod "0a18e6ce-2fed-4e81-9191-45c1e5d3a090" (UID: "0a18e6ce-2fed-4e81-9191-45c1e5d3a090"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:29:08.871590 master-0 kubenswrapper[7337]: I0312 18:29:08.871538 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a18e6ce-2fed-4e81-9191-45c1e5d3a090-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0a18e6ce-2fed-4e81-9191-45c1e5d3a090" (UID: "0a18e6ce-2fed-4e81-9191-45c1e5d3a090"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:29:08.871904 master-0 kubenswrapper[7337]: I0312 18:29:08.871731 7337 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0a18e6ce-2fed-4e81-9191-45c1e5d3a090-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 18:29:08.871904 master-0 kubenswrapper[7337]: I0312 18:29:08.871745 7337 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a18e6ce-2fed-4e81-9191-45c1e5d3a090-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 18:29:08.874962 master-0 kubenswrapper[7337]: I0312 18:29:08.874718 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a18e6ce-2fed-4e81-9191-45c1e5d3a090-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0a18e6ce-2fed-4e81-9191-45c1e5d3a090" (UID: "0a18e6ce-2fed-4e81-9191-45c1e5d3a090"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:29:08.972775 master-0 kubenswrapper[7337]: I0312 18:29:08.972737 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a18e6ce-2fed-4e81-9191-45c1e5d3a090-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 12 18:29:09.492174 master-0 kubenswrapper[7337]: I0312 18:29:09.492071 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:29:09.492174 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:29:09.492174 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:29:09.492174 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:29:09.492501 master-0 kubenswrapper[7337]: I0312 18:29:09.492223 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:29:09.566594 master-0 kubenswrapper[7337]: I0312 18:29:09.566538 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" event={"ID":"0a18e6ce-2fed-4e81-9191-45c1e5d3a090","Type":"ContainerDied","Data":"ce6917041a8bfd1810138da6e9f362c03a9897208ddc7625a2d63afd22b8d0a8"}
Mar 12 18:29:09.566594 master-0 kubenswrapper[7337]: I0312 18:29:09.566572 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0"
Mar 12 18:29:09.567416 master-0 kubenswrapper[7337]: I0312 18:29:09.566587 7337 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce6917041a8bfd1810138da6e9f362c03a9897208ddc7625a2d63afd22b8d0a8"
Mar 12 18:29:09.689596 master-0 kubenswrapper[7337]: E0312 18:29:09.689520 7337 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod0a18e6ce_2fed_4e81_9191_45c1e5d3a090.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod0a18e6ce_2fed_4e81_9191_45c1e5d3a090.slice/crio-ce6917041a8bfd1810138da6e9f362c03a9897208ddc7625a2d63afd22b8d0a8\": RecentStats: unable to find data in memory cache]"
Mar 12 18:29:10.492155 master-0 kubenswrapper[7337]: I0312 18:29:10.492064 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:29:10.492155 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:29:10.492155 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:29:10.492155 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:29:10.492155 master-0 kubenswrapper[7337]: I0312 18:29:10.492151 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:29:11.491816 master-0 kubenswrapper[7337]: I0312 18:29:11.491745 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:29:11.491816 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:29:11.491816 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:29:11.491816 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:29:11.492362 master-0 kubenswrapper[7337]: I0312 18:29:11.491846 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:29:12.493794 master-0 kubenswrapper[7337]: I0312 18:29:12.493736 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:29:12.493794 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:29:12.493794 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:29:12.493794 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:29:12.494501 master-0 kubenswrapper[7337]: I0312 18:29:12.493807 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:29:13.049355 master-0 kubenswrapper[7337]: I0312 18:29:13.049319 7337 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 12 18:29:13.049826 master-0 kubenswrapper[7337]: E0312 18:29:13.049808 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a18e6ce-2fed-4e81-9191-45c1e5d3a090" containerName="installer"
Mar 12 18:29:13.049924 master-0 kubenswrapper[7337]: I0312 18:29:13.049913 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a18e6ce-2fed-4e81-9191-45c1e5d3a090" containerName="installer"
Mar 12 18:29:13.050090 master-0 kubenswrapper[7337]: I0312 18:29:13.050078 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a18e6ce-2fed-4e81-9191-45c1e5d3a090" containerName="installer"
Mar 12 18:29:13.050507 master-0 kubenswrapper[7337]: I0312 18:29:13.050493 7337 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 12 18:29:13.050768 master-0 kubenswrapper[7337]: I0312 18:29:13.050706 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:13.050834 master-0 kubenswrapper[7337]: I0312 18:29:13.050770 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://1b41120246139f832c6fce447150fed26bcd9a47dc2f49808aa8f04449aadbb6" gracePeriod=15
Mar 12 18:29:13.050901 master-0 kubenswrapper[7337]: I0312 18:29:13.050747 7337 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" containerID="cri-o://c0a8d4431acf000c36d5a8e20b8fbea835bbdf1fd7c8e5eab3ca1097edb9bbb4" gracePeriod=15
Mar 12 18:29:13.051845 master-0 kubenswrapper[7337]: I0312 18:29:13.051804 7337 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 12 18:29:13.051984 master-0 kubenswrapper[7337]: E0312 18:29:13.051967 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz"
Mar 12 18:29:13.051984 master-0 kubenswrapper[7337]: I0312 18:29:13.051983 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz"
Mar 12 18:29:13.052075 master-0 kubenswrapper[7337]: E0312 18:29:13.052003 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver"
Mar 12 18:29:13.052075 master-0 kubenswrapper[7337]: I0312 18:29:13.052010 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver"
Mar 12 18:29:13.052075 master-0 kubenswrapper[7337]: E0312 18:29:13.052021 7337 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup"
Mar 12 18:29:13.052075 master-0 kubenswrapper[7337]: I0312 18:29:13.052027 7337 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup"
Mar 12 18:29:13.052228 master-0 kubenswrapper[7337]: I0312 18:29:13.052135 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz"
Mar 12 18:29:13.052228 master-0 kubenswrapper[7337]: I0312 18:29:13.052147 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup"
Mar 12 18:29:13.052228 master-0 kubenswrapper[7337]: I0312 18:29:13.052168 7337 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver"
Mar 12 18:29:13.054342 master-0 kubenswrapper[7337]: I0312 18:29:13.054114 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:29:13.112988 master-0 kubenswrapper[7337]: E0312 18:29:13.112929 7337 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:13.126762 master-0 kubenswrapper[7337]: I0312 18:29:13.126719 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:13.126897 master-0 kubenswrapper[7337]: I0312 18:29:13.126771 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:29:13.126897 master-0 kubenswrapper[7337]: I0312 18:29:13.126792 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:13.126897 master-0 kubenswrapper[7337]: I0312 18:29:13.126812 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:13.126897 master-0 kubenswrapper[7337]: I0312 18:29:13.126872 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:29:13.127076 master-0 kubenswrapper[7337]: I0312 18:29:13.126935 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:13.127076 master-0 kubenswrapper[7337]: I0312 18:29:13.126958 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:29:13.127076 master-0 kubenswrapper[7337]: I0312 18:29:13.126975 7337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:13.134433 master-0 kubenswrapper[7337]: E0312 18:29:13.134382 7337 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:29:13.227995 master-0 kubenswrapper[7337]: I0312 18:29:13.227919 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:13.227995 master-0 kubenswrapper[7337]: I0312 18:29:13.227974 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:29:13.227995 master-0 kubenswrapper[7337]: I0312 18:29:13.227973 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:13.228324 master-0 kubenswrapper[7337]: I0312 18:29:13.228026 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:29:13.228324 master-0 kubenswrapper[7337]: I0312 18:29:13.228063 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:13.228324 master-0 kubenswrapper[7337]: I0312 18:29:13.228096 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:13.228324 master-0 kubenswrapper[7337]: I0312 18:29:13.228121 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:29:13.228324 master-0 kubenswrapper[7337]: I0312 18:29:13.228136 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:13.228324 master-0 kubenswrapper[7337]: I0312 18:29:13.228159 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:13.228324 master-0 kubenswrapper[7337]: I0312 18:29:13.228180 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:13.228324 master-0 kubenswrapper[7337]: I0312 18:29:13.228195 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:29:13.228324 master-0 kubenswrapper[7337]: I0312 18:29:13.228215 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:13.228324 master-0 kubenswrapper[7337]: I0312 18:29:13.228278 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:13.228803 master-0 kubenswrapper[7337]: I0312 18:29:13.228316 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:13.228803 master-0 kubenswrapper[7337]: I0312 18:29:13.228402 7337 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:29:13.228803 master-0 kubenswrapper[7337]: I0312 18:29:13.228368 7337 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:29:13.413456 master-0 kubenswrapper[7337]: I0312 18:29:13.413329 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:13.435904 master-0 kubenswrapper[7337]: I0312 18:29:13.435866 7337 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:29:13.440162 master-0 kubenswrapper[7337]: W0312 18:29:13.440118 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod899242a15b2bdf3b4a04fb323647ca94.slice/crio-9efb49a7f1b6a902873e9b844b8f9a0a68e95cea55ba6d37aefc0f305d7e46f9 WatchSource:0}: Error finding container 9efb49a7f1b6a902873e9b844b8f9a0a68e95cea55ba6d37aefc0f305d7e46f9: Status 404 returned error can't find the container with id 9efb49a7f1b6a902873e9b844b8f9a0a68e95cea55ba6d37aefc0f305d7e46f9
Mar 12 18:29:13.446100 master-0 kubenswrapper[7337]: E0312 18:29:13.445969 7337 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189c2b7957da3ca7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:899242a15b2bdf3b4a04fb323647ca94,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:29:13.442090151 +0000 UTC m=+953.910691128,LastTimestamp:2026-03-12 18:29:13.442090151 +0000 UTC m=+953.910691128,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:29:13.466649 master-0 kubenswrapper[7337]: W0312 18:29:13.466613 7337 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod077dd10388b9e3e48a07382126e86621.slice/crio-94fe08ab3fbb45153add02b6a25a4870b04bc9f5d9d03ddb9283e70a2fe32299 WatchSource:0}: Error finding container 94fe08ab3fbb45153add02b6a25a4870b04bc9f5d9d03ddb9283e70a2fe32299: Status 404 returned error can't find the container with id 94fe08ab3fbb45153add02b6a25a4870b04bc9f5d9d03ddb9283e70a2fe32299
Mar 12 18:29:13.492497 master-0 kubenswrapper[7337]: I0312 18:29:13.492431 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:29:13.492497 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:29:13.492497 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:29:13.492497 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:29:13.492958 master-0 kubenswrapper[7337]: I0312 18:29:13.492492 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:29:13.598144 master-0 kubenswrapper[7337]: I0312 18:29:13.598109 7337 generic.go:334] "Generic (PLEG): container finished" podID="4cb73c69-af16-4565-bdb5-aeae9dcfb423" containerID="4d14cf356a45b87bedba837114945ff27dddf151bc1c718cb0f056aecd18d911" exitCode=0
Mar 12 18:29:13.607862 master-0 kubenswrapper[7337]: I0312 18:29:13.598166 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" event={"ID":"4cb73c69-af16-4565-bdb5-aeae9dcfb423","Type":"ContainerDied","Data":"4d14cf356a45b87bedba837114945ff27dddf151bc1c718cb0f056aecd18d911"}
Mar 12 18:29:13.607862 master-0 kubenswrapper[7337]: I0312 18:29:13.599104 7337 status_manager.go:851] "Failed to get status for pod" podUID="4cb73c69-af16-4565-bdb5-aeae9dcfb423" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 18:29:13.607862 master-0 kubenswrapper[7337]: I0312 18:29:13.599573 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"899242a15b2bdf3b4a04fb323647ca94","Type":"ContainerStarted","Data":"9efb49a7f1b6a902873e9b844b8f9a0a68e95cea55ba6d37aefc0f305d7e46f9"}
Mar 12 18:29:13.607862 master-0 kubenswrapper[7337]: I0312 18:29:13.601526 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"94fe08ab3fbb45153add02b6a25a4870b04bc9f5d9d03ddb9283e70a2fe32299"}
Mar 12 18:29:13.607862 master-0 kubenswrapper[7337]: I0312 18:29:13.604616 7337 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="1b41120246139f832c6fce447150fed26bcd9a47dc2f49808aa8f04449aadbb6" exitCode=0
Mar 12 18:29:13.630845 master-0 kubenswrapper[7337]: E0312 18:29:13.630716 7337 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189c2b7957da3ca7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:899242a15b2bdf3b4a04fb323647ca94,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:29:13.442090151 +0000 UTC m=+953.910691128,LastTimestamp:2026-03-12 18:29:13.442090151 +0000 UTC m=+953.910691128,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:29:14.491849 master-0 kubenswrapper[7337]: I0312 18:29:14.491781 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 18:29:14.491849 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld
Mar 12 18:29:14.491849 master-0 kubenswrapper[7337]: [+]process-running ok
Mar 12 18:29:14.491849 master-0 kubenswrapper[7337]: healthz check failed
Mar 12 18:29:14.492176 master-0 kubenswrapper[7337]: I0312 18:29:14.491875 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:29:14.614882 master-0 kubenswrapper[7337]: I0312 18:29:14.614794 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
event={"ID":"899242a15b2bdf3b4a04fb323647ca94","Type":"ContainerStarted","Data":"47171d91400de4e00e465f217262a5cfbabe28599c08b7a76e6b01d33016a909"} Mar 12 18:29:14.616840 master-0 kubenswrapper[7337]: I0312 18:29:14.616763 7337 status_manager.go:851] "Failed to get status for pod" podUID="4cb73c69-af16-4565-bdb5-aeae9dcfb423" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:29:14.617167 master-0 kubenswrapper[7337]: E0312 18:29:14.617054 7337 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:29:14.618209 master-0 kubenswrapper[7337]: I0312 18:29:14.618156 7337 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="577df93b02b1bb8864460514509f0380f90a4c7a14a9f501133a0fe6e0eb58f9" exitCode=0 Mar 12 18:29:14.618367 master-0 kubenswrapper[7337]: I0312 18:29:14.618239 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerDied","Data":"577df93b02b1bb8864460514509f0380f90a4c7a14a9f501133a0fe6e0eb58f9"} Mar 12 18:29:14.620127 master-0 kubenswrapper[7337]: I0312 18:29:14.620027 7337 status_manager.go:851] "Failed to get status for pod" podUID="4cb73c69-af16-4565-bdb5-aeae9dcfb423" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:29:14.620803 master-0 
kubenswrapper[7337]: E0312 18:29:14.620732 7337 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:29:15.003069 master-0 kubenswrapper[7337]: I0312 18:29:14.997771 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 18:29:15.003069 master-0 kubenswrapper[7337]: I0312 18:29:14.998485 7337 status_manager.go:851] "Failed to get status for pod" podUID="4cb73c69-af16-4565-bdb5-aeae9dcfb423" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:29:15.055423 master-0 kubenswrapper[7337]: I0312 18:29:15.054729 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-var-lock\") pod \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " Mar 12 18:29:15.055423 master-0 kubenswrapper[7337]: I0312 18:29:15.054826 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access\") pod \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " Mar 12 18:29:15.055423 master-0 kubenswrapper[7337]: I0312 18:29:15.054865 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kubelet-dir\") pod \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\" (UID: 
\"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " Mar 12 18:29:15.055423 master-0 kubenswrapper[7337]: I0312 18:29:15.054880 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-var-lock" (OuterVolumeSpecName: "var-lock") pod "4cb73c69-af16-4565-bdb5-aeae9dcfb423" (UID: "4cb73c69-af16-4565-bdb5-aeae9dcfb423"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:29:15.055423 master-0 kubenswrapper[7337]: I0312 18:29:15.055165 7337 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 18:29:15.055423 master-0 kubenswrapper[7337]: I0312 18:29:15.055208 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4cb73c69-af16-4565-bdb5-aeae9dcfb423" (UID: "4cb73c69-af16-4565-bdb5-aeae9dcfb423"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:29:15.060176 master-0 kubenswrapper[7337]: I0312 18:29:15.060139 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4cb73c69-af16-4565-bdb5-aeae9dcfb423" (UID: "4cb73c69-af16-4565-bdb5-aeae9dcfb423"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:29:15.192030 master-0 kubenswrapper[7337]: I0312 18:29:15.191582 7337 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 18:29:15.192030 master-0 kubenswrapper[7337]: I0312 18:29:15.191618 7337 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:29:15.492416 master-0 kubenswrapper[7337]: I0312 18:29:15.492363 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:15.492416 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:29:15.492416 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:29:15.492416 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:29:15.492715 master-0 kubenswrapper[7337]: I0312 18:29:15.492434 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:15.633336 master-0 kubenswrapper[7337]: I0312 18:29:15.633234 7337 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="c0a8d4431acf000c36d5a8e20b8fbea835bbdf1fd7c8e5eab3ca1097edb9bbb4" exitCode=0 Mar 12 18:29:15.640791 master-0 kubenswrapper[7337]: I0312 18:29:15.637254 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" 
event={"ID":"4cb73c69-af16-4565-bdb5-aeae9dcfb423","Type":"ContainerDied","Data":"eef1cf57e8276fdad086e78802215bf998ecd43c19a3a34c77847d52949c2696"} Mar 12 18:29:15.640791 master-0 kubenswrapper[7337]: I0312 18:29:15.637285 7337 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eef1cf57e8276fdad086e78802215bf998ecd43c19a3a34c77847d52949c2696" Mar 12 18:29:15.640791 master-0 kubenswrapper[7337]: I0312 18:29:15.637354 7337 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 18:29:15.642776 master-0 kubenswrapper[7337]: I0312 18:29:15.642735 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"0e37d83cdb0f462d8ef6bf2539d2a10c7d84ad143eb688508398e11bcd665ee1"} Mar 12 18:29:15.642838 master-0 kubenswrapper[7337]: I0312 18:29:15.642778 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"1f3cc25e5fa149ff612b530c2777559e47a1f93ddab8c630d27f6a1e07dff2d3"} Mar 12 18:29:15.642838 master-0 kubenswrapper[7337]: I0312 18:29:15.642790 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"f0795f7815622793e2682cda3a2b19590293281fa98d6a395d40fd76c0da7491"} Mar 12 18:29:15.767593 master-0 kubenswrapper[7337]: I0312 18:29:15.767541 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:29:15.767593 master-0 kubenswrapper[7337]: I0312 18:29:15.767581 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:29:15.767593 master-0 kubenswrapper[7337]: I0312 18:29:15.767592 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:29:15.767593 master-0 kubenswrapper[7337]: I0312 18:29:15.767601 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:29:15.771420 master-0 kubenswrapper[7337]: I0312 18:29:15.771375 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:29:15.771604 master-0 kubenswrapper[7337]: I0312 18:29:15.771574 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:29:15.786076 master-0 kubenswrapper[7337]: I0312 18:29:15.786038 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:29:15.904495 master-0 kubenswrapper[7337]: I0312 18:29:15.904452 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 12 18:29:15.904610 master-0 kubenswrapper[7337]: I0312 18:29:15.904547 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 12 18:29:15.904668 master-0 kubenswrapper[7337]: I0312 18:29:15.904614 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 12 18:29:15.904736 master-0 kubenswrapper[7337]: I0312 18:29:15.904709 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 12 18:29:15.904804 master-0 kubenswrapper[7337]: I0312 18:29:15.904742 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 12 18:29:15.904804 master-0 kubenswrapper[7337]: I0312 18:29:15.904790 7337 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 12 18:29:15.905012 master-0 kubenswrapper[7337]: I0312 18:29:15.904978 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:29:15.905088 master-0 kubenswrapper[7337]: I0312 18:29:15.905046 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config" (OuterVolumeSpecName: "config") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:29:15.905088 master-0 kubenswrapper[7337]: I0312 18:29:15.905068 7337 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Mar 12 18:29:15.905088 master-0 kubenswrapper[7337]: I0312 18:29:15.905082 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "etc-kubernetes-cloud". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:29:15.905214 master-0 kubenswrapper[7337]: I0312 18:29:15.905093 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs" (OuterVolumeSpecName: "logs") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:29:15.905214 master-0 kubenswrapper[7337]: I0312 18:29:15.905108 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:29:15.905214 master-0 kubenswrapper[7337]: I0312 18:29:15.905112 7337 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets" (OuterVolumeSpecName: "secrets") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "secrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:29:16.005840 master-0 kubenswrapper[7337]: I0312 18:29:16.005778 7337 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Mar 12 18:29:16.005840 master-0 kubenswrapper[7337]: I0312 18:29:16.005836 7337 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:29:16.005840 master-0 kubenswrapper[7337]: I0312 18:29:16.005850 7337 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:29:16.006239 master-0 kubenswrapper[7337]: I0312 18:29:16.005861 7337 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 18:29:16.006239 master-0 kubenswrapper[7337]: I0312 18:29:16.005872 7337 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") on node \"master-0\" DevicePath \"\"" Mar 12 18:29:16.492097 master-0 kubenswrapper[7337]: I0312 18:29:16.491917 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:16.492097 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:29:16.492097 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:29:16.492097 master-0 kubenswrapper[7337]: healthz check failed Mar 12 
18:29:16.492097 master-0 kubenswrapper[7337]: I0312 18:29:16.491993 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:16.656826 master-0 kubenswrapper[7337]: I0312 18:29:16.656771 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"01eeb3272c240b68931788fdbe353a89d0b5911f3532e9e6d3891f108d5a47fc"} Mar 12 18:29:16.657258 master-0 kubenswrapper[7337]: I0312 18:29:16.656867 7337 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"df94ca5ecee83a72fb47501f1f919eaf4028159e1d146ee2a47f3391c16e15fc"} Mar 12 18:29:16.657258 master-0 kubenswrapper[7337]: I0312 18:29:16.656979 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:29:16.658835 master-0 kubenswrapper[7337]: I0312 18:29:16.658809 7337 scope.go:117] "RemoveContainer" containerID="1b41120246139f832c6fce447150fed26bcd9a47dc2f49808aa8f04449aadbb6" Mar 12 18:29:16.658888 master-0 kubenswrapper[7337]: I0312 18:29:16.658850 7337 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 18:29:16.663040 master-0 kubenswrapper[7337]: I0312 18:29:16.662668 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:29:16.664496 master-0 kubenswrapper[7337]: I0312 18:29:16.664463 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:29:16.671529 master-0 kubenswrapper[7337]: I0312 18:29:16.671373 7337 scope.go:117] "RemoveContainer" containerID="c0a8d4431acf000c36d5a8e20b8fbea835bbdf1fd7c8e5eab3ca1097edb9bbb4" Mar 12 18:29:16.704556 master-0 kubenswrapper[7337]: I0312 18:29:16.704503 7337 scope.go:117] "RemoveContainer" containerID="424ff1cd728e6aca964e1aafeb2eb3f61c869370919f825ab26a7330b62524f0" Mar 12 18:29:17.491854 master-0 kubenswrapper[7337]: I0312 18:29:17.491782 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:17.491854 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:29:17.491854 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:29:17.491854 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:29:17.492144 master-0 kubenswrapper[7337]: I0312 18:29:17.491856 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:17.730963 master-0 kubenswrapper[7337]: I0312 18:29:17.730889 7337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f77c8e18b751d90bc0dfe2d4e304050" 
path="/var/lib/kubelet/pods/5f77c8e18b751d90bc0dfe2d4e304050/volumes" Mar 12 18:29:17.731706 master-0 kubenswrapper[7337]: I0312 18:29:17.731664 7337 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 12 18:29:18.436250 master-0 kubenswrapper[7337]: I0312 18:29:18.436162 7337 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:29:18.436727 master-0 kubenswrapper[7337]: I0312 18:29:18.436653 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:29:18.485814 master-0 kubenswrapper[7337]: I0312 18:29:18.485780 7337 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:29:18.492951 master-0 kubenswrapper[7337]: I0312 18:29:18.492917 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:18.492951 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:29:18.492951 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:29:18.492951 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:29:18.493201 master-0 kubenswrapper[7337]: I0312 18:29:18.492986 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:19.492836 master-0 kubenswrapper[7337]: I0312 18:29:19.492788 7337 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:19.492836 master-0 kubenswrapper[7337]: [-]has-synced failed: reason withheld Mar 12 18:29:19.492836 master-0 kubenswrapper[7337]: [+]process-running ok Mar 12 18:29:19.492836 master-0 kubenswrapper[7337]: healthz check failed Mar 12 18:29:19.493415 master-0 kubenswrapper[7337]: I0312 18:29:19.492854 7337 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:19.732321 master-0 kubenswrapper[7337]: I0312 18:29:19.732112 7337 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 12 18:29:20.191968 master-0 kubenswrapper[7337]: I0312 18:29:20.191880 7337 scope.go:117] "RemoveContainer" containerID="64bbe8f8e78fcdf7a8f37094d28682b6c744a6d2ce7b94afbf02202b8aaa42c7" Mar 12 18:29:20.355417 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 12 18:29:20.374328 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 18:29:20.374587 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 12 18:29:20.376140 master-0 systemd[1]: kubelet.service: Consumed 2min 10.942s CPU time. Mar 12 18:29:20.396149 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 12 18:29:20.588939 master-0 kubenswrapper[29097]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 18:29:20.588939 master-0 kubenswrapper[29097]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. 
Mar 12 18:29:20.588939 master-0 kubenswrapper[29097]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 18:29:20.588939 master-0 kubenswrapper[29097]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 18:29:20.588939 master-0 kubenswrapper[29097]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 12 18:29:20.588939 master-0 kubenswrapper[29097]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 18:29:20.589544 master-0 kubenswrapper[29097]: I0312 18:29:20.589006 29097 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 18:29:20.591104 master-0 kubenswrapper[29097]: W0312 18:29:20.591084 29097 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 12 18:29:20.591104 master-0 kubenswrapper[29097]: W0312 18:29:20.591099 29097 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 18:29:20.591104 master-0 kubenswrapper[29097]: W0312 18:29:20.591103 29097 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 12 18:29:20.591206 master-0 kubenswrapper[29097]: W0312 18:29:20.591109 29097 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 12 18:29:20.591206 master-0 kubenswrapper[29097]: W0312 18:29:20.591114 29097 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 18:29:20.591206 master-0 kubenswrapper[29097]: W0312 18:29:20.591122 29097 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 18:29:20.591206 master-0 kubenswrapper[29097]: W0312 18:29:20.591126 29097 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 18:29:20.591206 master-0 kubenswrapper[29097]: W0312 18:29:20.591130 29097 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 18:29:20.591206 master-0 kubenswrapper[29097]: W0312 18:29:20.591134 29097 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 18:29:20.591206 master-0 kubenswrapper[29097]: W0312 18:29:20.591137 29097 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 18:29:20.591206 master-0 kubenswrapper[29097]: W0312 18:29:20.591141 29097 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 18:29:20.591206 master-0 kubenswrapper[29097]: W0312 18:29:20.591145 29097 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 18:29:20.591206 master-0 kubenswrapper[29097]: W0312 18:29:20.591148 29097 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 12 18:29:20.591206 master-0 kubenswrapper[29097]: W0312 18:29:20.591152 29097 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 18:29:20.591206 master-0 kubenswrapper[29097]: W0312 18:29:20.591157 29097 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 12 18:29:20.591206 master-0 kubenswrapper[29097]: W0312 18:29:20.591162 29097 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 18:29:20.591206 master-0 kubenswrapper[29097]: W0312 18:29:20.591165 29097 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 18:29:20.591206 master-0 kubenswrapper[29097]: W0312 18:29:20.591169 29097 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 18:29:20.591206 master-0 kubenswrapper[29097]: W0312 18:29:20.591173 29097 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 18:29:20.591206 master-0 kubenswrapper[29097]: W0312 18:29:20.591177 29097 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 18:29:20.591206 master-0 kubenswrapper[29097]: W0312 18:29:20.591180 29097 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 18:29:20.591206 master-0 kubenswrapper[29097]: W0312 18:29:20.591184 29097 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 18:29:20.591730 master-0 kubenswrapper[29097]: W0312 18:29:20.591187 29097 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 18:29:20.591730 master-0 kubenswrapper[29097]: W0312 18:29:20.591191 29097 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 18:29:20.591730 master-0 kubenswrapper[29097]: W0312 18:29:20.591195 29097 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 18:29:20.591730 master-0 kubenswrapper[29097]: W0312 18:29:20.591198 29097 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 18:29:20.591730 master-0 kubenswrapper[29097]: W0312 18:29:20.591202 29097 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 18:29:20.591730 master-0 kubenswrapper[29097]: W0312 18:29:20.591205 29097 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 18:29:20.591730 master-0 kubenswrapper[29097]: W0312 18:29:20.591210 29097 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 18:29:20.591730 master-0 kubenswrapper[29097]: W0312 18:29:20.591214 29097 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 18:29:20.591730 master-0 kubenswrapper[29097]: W0312 18:29:20.591218 29097 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 18:29:20.591730 master-0 kubenswrapper[29097]: W0312 18:29:20.591221 29097 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 18:29:20.591730 master-0 kubenswrapper[29097]: W0312 18:29:20.591241 29097 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 18:29:20.591730 master-0 kubenswrapper[29097]: W0312 18:29:20.591246 29097 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 18:29:20.591730 master-0 kubenswrapper[29097]: W0312 18:29:20.591250 29097 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 18:29:20.591730 master-0 kubenswrapper[29097]: W0312 18:29:20.591254 29097 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 18:29:20.591730 master-0 kubenswrapper[29097]: W0312 18:29:20.591259 29097 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 18:29:20.591730 master-0 kubenswrapper[29097]: W0312 18:29:20.591265 29097 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 18:29:20.591730 master-0 kubenswrapper[29097]: W0312 18:29:20.591269 29097 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 18:29:20.591730 master-0 kubenswrapper[29097]: W0312 18:29:20.591273 29097 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 18:29:20.591730 master-0 kubenswrapper[29097]: W0312 18:29:20.591277 29097 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 18:29:20.591730 master-0 kubenswrapper[29097]: W0312 18:29:20.591281 29097 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 18:29:20.592223 master-0 kubenswrapper[29097]: W0312 18:29:20.591285 29097 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 18:29:20.592223 master-0 kubenswrapper[29097]: W0312 18:29:20.591289 29097 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 18:29:20.592223 master-0 kubenswrapper[29097]: W0312 18:29:20.591293 29097 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 18:29:20.592223 master-0 kubenswrapper[29097]: W0312 18:29:20.591297 29097 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 18:29:20.592223 master-0 kubenswrapper[29097]: W0312 18:29:20.591301 29097 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 18:29:20.592223 master-0 kubenswrapper[29097]: W0312 18:29:20.591305 29097 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 18:29:20.592223 master-0 kubenswrapper[29097]: W0312 18:29:20.591308 29097 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 18:29:20.592223 master-0 kubenswrapper[29097]: W0312 18:29:20.591312 29097 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 18:29:20.592223 master-0 kubenswrapper[29097]: W0312 18:29:20.591350 29097 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 18:29:20.592223 master-0 kubenswrapper[29097]: W0312 18:29:20.591355 29097 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 18:29:20.592223 master-0 kubenswrapper[29097]: W0312 18:29:20.591360 29097 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 18:29:20.592223 master-0 kubenswrapper[29097]: W0312 18:29:20.591364 29097 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 18:29:20.592223 master-0 kubenswrapper[29097]: W0312 18:29:20.591368 29097 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 18:29:20.592223 master-0 kubenswrapper[29097]: W0312 18:29:20.591372 29097 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 18:29:20.592223 master-0 kubenswrapper[29097]: W0312 18:29:20.591375 29097 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 18:29:20.592223 master-0 kubenswrapper[29097]: W0312 18:29:20.591379 29097 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 18:29:20.592223 master-0 kubenswrapper[29097]: W0312 18:29:20.591382 29097 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 18:29:20.592223 master-0 kubenswrapper[29097]: W0312 18:29:20.591386 29097 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 18:29:20.592223 master-0 kubenswrapper[29097]: W0312 18:29:20.591389 29097 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 18:29:20.592223 master-0 kubenswrapper[29097]: W0312 18:29:20.591393 29097 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 18:29:20.592714 master-0 kubenswrapper[29097]: W0312 18:29:20.591399 29097 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 18:29:20.592714 master-0 kubenswrapper[29097]: W0312 18:29:20.591404 29097 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 18:29:20.592714 master-0 kubenswrapper[29097]: W0312 18:29:20.591408 29097 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 18:29:20.592714 master-0 kubenswrapper[29097]: W0312 18:29:20.591413 29097 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 18:29:20.592714 master-0 kubenswrapper[29097]: W0312 18:29:20.591416 29097 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 18:29:20.592714 master-0 kubenswrapper[29097]: W0312 18:29:20.591420 29097 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 18:29:20.592714 master-0 kubenswrapper[29097]: W0312 18:29:20.591423 29097 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 18:29:20.592714 master-0 kubenswrapper[29097]: W0312 18:29:20.591427 29097 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 18:29:20.592714 master-0 kubenswrapper[29097]: W0312 18:29:20.591431 29097 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 18:29:20.592714 master-0 kubenswrapper[29097]: W0312 18:29:20.591435 29097 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 18:29:20.592714 master-0 kubenswrapper[29097]: I0312 18:29:20.591523 29097 flags.go:64] FLAG: --address="0.0.0.0"
Mar 12 18:29:20.592714 master-0 kubenswrapper[29097]: I0312 18:29:20.591533 29097 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 12 18:29:20.592714 master-0 kubenswrapper[29097]: I0312 18:29:20.591541 29097 flags.go:64] FLAG: --anonymous-auth="true"
Mar 12 18:29:20.592714 master-0 kubenswrapper[29097]: I0312 18:29:20.591546 29097 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 12 18:29:20.592714 master-0 kubenswrapper[29097]: I0312 18:29:20.591552 29097 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 12 18:29:20.592714 master-0 kubenswrapper[29097]: I0312 18:29:20.591557 29097 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 12 18:29:20.592714 master-0 kubenswrapper[29097]: I0312 18:29:20.591563 29097 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 12 18:29:20.592714 master-0 kubenswrapper[29097]: I0312 18:29:20.591568 29097 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 12 18:29:20.592714 master-0 kubenswrapper[29097]: I0312 18:29:20.591573 29097 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 12 18:29:20.592714 master-0 kubenswrapper[29097]: I0312 18:29:20.591577 29097 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 12 18:29:20.592714 master-0 kubenswrapper[29097]: I0312 18:29:20.591582 29097 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591586 29097 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591591 29097 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591595 29097 flags.go:64] FLAG: --cgroup-root=""
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591599 29097 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591604 29097 flags.go:64] FLAG: --client-ca-file=""
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591608 29097 flags.go:64] FLAG: --cloud-config=""
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591612 29097 flags.go:64] FLAG: --cloud-provider=""
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591616 29097 flags.go:64] FLAG: --cluster-dns="[]"
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591621 29097 flags.go:64] FLAG: --cluster-domain=""
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591625 29097 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591629 29097 flags.go:64] FLAG: --config-dir=""
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591634 29097 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591638 29097 flags.go:64] FLAG: --container-log-max-files="5"
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591644 29097 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591648 29097 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591652 29097 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591657 29097 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591662 29097 flags.go:64] FLAG: --contention-profiling="false"
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591667 29097 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591671 29097 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591675 29097 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591679 29097 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591685 29097 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591689 29097 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 12 18:29:20.593294 master-0 kubenswrapper[29097]: I0312 18:29:20.591693 29097 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591697 29097 flags.go:64] FLAG: --enable-load-reader="false"
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591701 29097 flags.go:64] FLAG: --enable-server="true"
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591705 29097 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591712 29097 flags.go:64] FLAG: --event-burst="100"
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591716 29097 flags.go:64] FLAG: --event-qps="50"
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591721 29097 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591725 29097 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591729 29097 flags.go:64] FLAG: --eviction-hard=""
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591740 29097 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591744 29097 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591748 29097 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591753 29097 flags.go:64] FLAG: --eviction-soft=""
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591757 29097 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591761 29097 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591765 29097 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591770 29097 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591774 29097 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591778 29097 flags.go:64] FLAG: --fail-swap-on="true"
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591782 29097 flags.go:64] FLAG: --feature-gates=""
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591787 29097 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591791 29097 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591796 29097 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591800 29097 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591805 29097 flags.go:64] FLAG: --healthz-port="10248"
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591809 29097 flags.go:64] FLAG: --help="false"
Mar 12 18:29:20.593895 master-0 kubenswrapper[29097]: I0312 18:29:20.591813 29097 flags.go:64] FLAG: --hostname-override=""
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591817 29097 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591821 29097 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591825 29097 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591830 29097 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591834 29097 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591842 29097 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591846 29097 flags.go:64] FLAG: --image-service-endpoint=""
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591850 29097 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591854 29097 flags.go:64] FLAG: --kube-api-burst="100"
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591858 29097 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591863 29097 flags.go:64] FLAG: --kube-api-qps="50"
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591867 29097 flags.go:64] FLAG: --kube-reserved=""
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591871 29097 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591875 29097 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591879 29097 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591884 29097 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591888 29097 flags.go:64] FLAG: --lock-file=""
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591892 29097 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591896 29097 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591903 29097 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591909 29097 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591913 29097 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591917 29097 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591922 29097 flags.go:64] FLAG: --logging-format="text"
Mar 12 18:29:20.594508 master-0 kubenswrapper[29097]: I0312 18:29:20.591926 29097 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.591931 29097 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.591935 29097 flags.go:64] FLAG: --manifest-url=""
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.591939 29097 flags.go:64] FLAG: --manifest-url-header=""
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.591944 29097 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.591948 29097 flags.go:64] FLAG: --max-open-files="1000000"
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.591953 29097 flags.go:64] FLAG: --max-pods="110"
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.591958 29097 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.591962 29097 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.591966 29097 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.591970 29097 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.591975 29097 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.591979 29097 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.591986 29097 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.591995 29097 flags.go:64] FLAG: --node-status-max-images="50"
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.591999 29097 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.592004 29097 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.592008 29097 flags.go:64] FLAG: --pod-cidr=""
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.592012 29097 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.592018 29097 flags.go:64] FLAG: --pod-manifest-path=""
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.592022 29097 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.592027 29097 flags.go:64] FLAG: --pods-per-core="0"
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.592031 29097 flags.go:64] FLAG: --port="10250"
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.592035 29097 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 12 18:29:20.595124 master-0 kubenswrapper[29097]: I0312 18:29:20.592039 29097 flags.go:64] FLAG: --provider-id=""
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592044 29097 flags.go:64] FLAG: --qos-reserved=""
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592048 29097 flags.go:64] FLAG: --read-only-port="10255"
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592054 29097 flags.go:64] FLAG: --register-node="true"
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592058 29097 flags.go:64] FLAG: --register-schedulable="true"
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592062 29097 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592069 29097 flags.go:64] FLAG: --registry-burst="10"
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592074 29097 flags.go:64] FLAG: --registry-qps="5"
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592078 29097 flags.go:64] FLAG: --reserved-cpus=""
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592082 29097 flags.go:64] FLAG: --reserved-memory=""
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592088 29097 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592092 29097 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592096 29097 flags.go:64] FLAG: --rotate-certificates="false"
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592100 29097 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592105 29097 flags.go:64] FLAG: --runonce="false"
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592109 29097 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592113 29097 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592117 29097 flags.go:64] FLAG: --seccomp-default="false"
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592122 29097 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592126 29097 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592130 29097 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592134 29097 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592140 29097 flags.go:64] FLAG: --storage-driver-password="root"
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592144 29097 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592148 29097 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 12 18:29:20.595785 master-0 kubenswrapper[29097]: I0312 18:29:20.592152 29097 flags.go:64] FLAG: --storage-driver-user="root"
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: I0312 18:29:20.592156 29097 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: I0312 18:29:20.592163 29097 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: I0312 18:29:20.592167 29097 flags.go:64] FLAG: --system-cgroups=""
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: I0312 18:29:20.592171 29097 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: I0312 18:29:20.592178 29097 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: I0312 18:29:20.592182 29097 flags.go:64] FLAG: --tls-cert-file=""
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: I0312 18:29:20.592186 29097 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: I0312 18:29:20.592191 29097 flags.go:64] FLAG: --tls-min-version=""
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: I0312 18:29:20.592195 29097 flags.go:64] FLAG: --tls-private-key-file=""
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: I0312 18:29:20.592201 29097 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: I0312 18:29:20.592205 29097 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: I0312 18:29:20.592209 29097 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: I0312 18:29:20.592213 29097 flags.go:64] FLAG: --v="2"
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: I0312 18:29:20.592223 29097 flags.go:64] FLAG: --version="false"
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: I0312 18:29:20.592229 29097 flags.go:64] FLAG: --vmodule=""
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: I0312 18:29:20.592234 29097 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: I0312 18:29:20.592238 29097 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: W0312 18:29:20.592332 29097 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: W0312 18:29:20.592337 29097 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: W0312 18:29:20.592342 29097 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: W0312 18:29:20.592346 29097 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: W0312 18:29:20.592350 29097 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: W0312 18:29:20.592354 29097 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 18:29:20.596428 master-0 kubenswrapper[29097]: W0312 18:29:20.592358 29097 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 18:29:20.597035 master-0 kubenswrapper[29097]: W0312 18:29:20.592363 29097 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 18:29:20.597035 master-0 kubenswrapper[29097]: W0312 18:29:20.592367 29097 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 18:29:20.597035 master-0 kubenswrapper[29097]: W0312 18:29:20.592372 29097 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 18:29:20.597035 master-0 kubenswrapper[29097]: W0312 18:29:20.592376 29097 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 18:29:20.597035 master-0 kubenswrapper[29097]: W0312 18:29:20.592382 29097 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 18:29:20.597035 master-0 kubenswrapper[29097]: W0312 18:29:20.592386 29097 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 18:29:20.597035 master-0 kubenswrapper[29097]: W0312 18:29:20.592390 29097 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 18:29:20.597035 master-0 kubenswrapper[29097]: W0312 18:29:20.592394 29097 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 18:29:20.597035 master-0 kubenswrapper[29097]: W0312 18:29:20.592398 29097 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 18:29:20.597035 master-0 kubenswrapper[29097]: W0312 18:29:20.592402 29097 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 18:29:20.597035 master-0 kubenswrapper[29097]: W0312 18:29:20.592405 29097 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 18:29:20.597035 master-0 kubenswrapper[29097]: W0312 18:29:20.592409 29097 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 18:29:20.597035 master-0 kubenswrapper[29097]: W0312 18:29:20.592413 29097 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 18:29:20.597035 master-0 kubenswrapper[29097]: W0312 18:29:20.592417 29097 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 18:29:20.597035 master-0 kubenswrapper[29097]: W0312 18:29:20.592421 29097 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 18:29:20.597035 master-0 kubenswrapper[29097]: W0312 18:29:20.592424 29097 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 18:29:20.597035 master-0 kubenswrapper[29097]: W0312 18:29:20.592428 29097 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 18:29:20.597035 master-0 kubenswrapper[29097]: W0312 18:29:20.592433 29097 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 18:29:20.597035 master-0 kubenswrapper[29097]: W0312 18:29:20.592436 29097 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 18:29:20.597035 master-0 kubenswrapper[29097]: W0312 18:29:20.592440 29097 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 18:29:20.597500 master-0 kubenswrapper[29097]: W0312 18:29:20.592443 29097 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 18:29:20.597500 master-0 kubenswrapper[29097]: W0312 18:29:20.592448 29097 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 18:29:20.597500 master-0 kubenswrapper[29097]: W0312 18:29:20.592452 29097 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 18:29:20.597500 master-0 kubenswrapper[29097]: W0312 18:29:20.592456 29097 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 12 18:29:20.597500 master-0 kubenswrapper[29097]: W0312 18:29:20.592460 29097 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 18:29:20.597500 master-0 kubenswrapper[29097]: W0312 18:29:20.592464 29097 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 18:29:20.597500 master-0 kubenswrapper[29097]: W0312 18:29:20.592468 29097 feature_gate.go:330] unrecognized feature gate: Example Mar 12 18:29:20.597500 master-0 kubenswrapper[29097]: W0312 18:29:20.592471 29097 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 12 18:29:20.597500 master-0 kubenswrapper[29097]: W0312 18:29:20.592475 29097 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 18:29:20.597500 master-0 kubenswrapper[29097]: W0312 18:29:20.592478 29097 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 18:29:20.597500 master-0 kubenswrapper[29097]: W0312 18:29:20.592482 29097 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 18:29:20.597500 master-0 kubenswrapper[29097]: W0312 18:29:20.592486 29097 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 12 18:29:20.597500 master-0 kubenswrapper[29097]: W0312 18:29:20.592489 29097 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 12 18:29:20.597500 master-0 kubenswrapper[29097]: W0312 18:29:20.592493 29097 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 18:29:20.597500 master-0 kubenswrapper[29097]: W0312 18:29:20.592497 29097 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 18:29:20.597500 master-0 kubenswrapper[29097]: W0312 18:29:20.592500 29097 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 18:29:20.597500 master-0 
kubenswrapper[29097]: W0312 18:29:20.592505 29097 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 18:29:20.597500 master-0 kubenswrapper[29097]: W0312 18:29:20.592509 29097 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 18:29:20.597500 master-0 kubenswrapper[29097]: W0312 18:29:20.592526 29097 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 18:29:20.597500 master-0 kubenswrapper[29097]: W0312 18:29:20.592531 29097 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 18:29:20.598032 master-0 kubenswrapper[29097]: W0312 18:29:20.592534 29097 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 18:29:20.598032 master-0 kubenswrapper[29097]: W0312 18:29:20.592538 29097 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 12 18:29:20.598032 master-0 kubenswrapper[29097]: W0312 18:29:20.592542 29097 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 12 18:29:20.598032 master-0 kubenswrapper[29097]: W0312 18:29:20.592545 29097 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 12 18:29:20.598032 master-0 kubenswrapper[29097]: W0312 18:29:20.592550 29097 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 12 18:29:20.598032 master-0 kubenswrapper[29097]: W0312 18:29:20.592554 29097 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 18:29:20.598032 master-0 kubenswrapper[29097]: W0312 18:29:20.592558 29097 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 12 18:29:20.598032 master-0 kubenswrapper[29097]: W0312 18:29:20.592563 29097 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 18:29:20.598032 master-0 kubenswrapper[29097]: W0312 18:29:20.592567 29097 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 18:29:20.598032 master-0 kubenswrapper[29097]: W0312 18:29:20.592572 29097 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 18:29:20.598032 master-0 kubenswrapper[29097]: W0312 18:29:20.592576 29097 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 18:29:20.598032 master-0 kubenswrapper[29097]: W0312 18:29:20.592580 29097 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 18:29:20.598032 master-0 kubenswrapper[29097]: W0312 18:29:20.592584 29097 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 18:29:20.598032 master-0 kubenswrapper[29097]: W0312 18:29:20.592588 29097 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 18:29:20.598032 master-0 kubenswrapper[29097]: W0312 18:29:20.592592 29097 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 18:29:20.598032 master-0 kubenswrapper[29097]: W0312 18:29:20.592596 29097 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 12 18:29:20.598032 master-0 kubenswrapper[29097]: W0312 18:29:20.592600 29097 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 12 18:29:20.598032 master-0 kubenswrapper[29097]: W0312 18:29:20.592605 29097 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 18:29:20.598032 master-0 kubenswrapper[29097]: W0312 18:29:20.592609 29097 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 12 18:29:20.598583 master-0 kubenswrapper[29097]: W0312 18:29:20.592612 29097 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 18:29:20.598583 master-0 kubenswrapper[29097]: W0312 18:29:20.592616 29097 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 12 18:29:20.598583 master-0 kubenswrapper[29097]: W0312 18:29:20.592620 29097 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 18:29:20.598583 master-0 kubenswrapper[29097]: W0312 18:29:20.592624 29097 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 18:29:20.598583 master-0 kubenswrapper[29097]: W0312 18:29:20.592628 29097 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 18:29:20.598583 master-0 kubenswrapper[29097]: W0312 18:29:20.592631 29097 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 18:29:20.598583 master-0 kubenswrapper[29097]: I0312 18:29:20.592643 29097 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 12 18:29:20.598583 master-0 kubenswrapper[29097]: I0312 18:29:20.598136 29097 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" 
Mar 12 18:29:20.598583 master-0 kubenswrapper[29097]: I0312 18:29:20.598165 29097 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 18:29:20.598583 master-0 kubenswrapper[29097]: W0312 18:29:20.598224 29097 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 18:29:20.598583 master-0 kubenswrapper[29097]: W0312 18:29:20.598231 29097 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 12 18:29:20.598583 master-0 kubenswrapper[29097]: W0312 18:29:20.598235 29097 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 12 18:29:20.598583 master-0 kubenswrapper[29097]: W0312 18:29:20.598241 29097 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 18:29:20.598583 master-0 kubenswrapper[29097]: W0312 18:29:20.598245 29097 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 12 18:29:20.598583 master-0 kubenswrapper[29097]: W0312 18:29:20.598249 29097 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 18:29:20.598998 master-0 kubenswrapper[29097]: W0312 18:29:20.598253 29097 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 12 18:29:20.598998 master-0 kubenswrapper[29097]: W0312 18:29:20.598257 29097 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 18:29:20.598998 master-0 kubenswrapper[29097]: W0312 18:29:20.598261 29097 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 18:29:20.598998 master-0 kubenswrapper[29097]: W0312 18:29:20.598265 29097 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 18:29:20.598998 master-0 kubenswrapper[29097]: W0312 18:29:20.598270 29097 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 12 18:29:20.598998 master-0 kubenswrapper[29097]: W0312 18:29:20.598277 29097 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 18:29:20.598998 master-0 kubenswrapper[29097]: W0312 18:29:20.598282 29097 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 18:29:20.598998 master-0 kubenswrapper[29097]: W0312 18:29:20.598287 29097 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 18:29:20.598998 master-0 kubenswrapper[29097]: W0312 18:29:20.598291 29097 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 18:29:20.598998 master-0 kubenswrapper[29097]: W0312 18:29:20.598296 29097 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 12 18:29:20.598998 master-0 kubenswrapper[29097]: W0312 18:29:20.598300 29097 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 18:29:20.598998 master-0 kubenswrapper[29097]: W0312 18:29:20.598304 29097 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 12 18:29:20.598998 master-0 kubenswrapper[29097]: W0312 18:29:20.598308 29097 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 18:29:20.598998 master-0 kubenswrapper[29097]: W0312 18:29:20.598312 29097 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 18:29:20.598998 master-0 kubenswrapper[29097]: W0312 18:29:20.598316 29097 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 18:29:20.598998 master-0 kubenswrapper[29097]: W0312 18:29:20.598320 29097 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 18:29:20.598998 master-0 kubenswrapper[29097]: W0312 18:29:20.598324 29097 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 18:29:20.598998 master-0 kubenswrapper[29097]: W0312 18:29:20.598332 29097 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation 
Mar 12 18:29:20.598998 master-0 kubenswrapper[29097]: W0312 18:29:20.598336 29097 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 18:29:20.599469 master-0 kubenswrapper[29097]: W0312 18:29:20.598340 29097 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 12 18:29:20.599469 master-0 kubenswrapper[29097]: W0312 18:29:20.598344 29097 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 18:29:20.599469 master-0 kubenswrapper[29097]: W0312 18:29:20.598349 29097 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 18:29:20.599469 master-0 kubenswrapper[29097]: W0312 18:29:20.598352 29097 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 18:29:20.599469 master-0 kubenswrapper[29097]: W0312 18:29:20.598357 29097 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 12 18:29:20.599469 master-0 kubenswrapper[29097]: W0312 18:29:20.598361 29097 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 18:29:20.599469 master-0 kubenswrapper[29097]: W0312 18:29:20.598365 29097 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 12 18:29:20.599469 master-0 kubenswrapper[29097]: W0312 18:29:20.598369 29097 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 18:29:20.599469 master-0 kubenswrapper[29097]: W0312 18:29:20.598373 29097 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 18:29:20.599469 master-0 kubenswrapper[29097]: W0312 18:29:20.598376 29097 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 18:29:20.599469 master-0 kubenswrapper[29097]: W0312 18:29:20.598380 29097 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 18:29:20.599469 master-0 kubenswrapper[29097]: W0312 18:29:20.598384 29097 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 12 18:29:20.599469 master-0 
kubenswrapper[29097]: W0312 18:29:20.598387 29097 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 18:29:20.599469 master-0 kubenswrapper[29097]: W0312 18:29:20.598391 29097 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 18:29:20.599469 master-0 kubenswrapper[29097]: W0312 18:29:20.598395 29097 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 18:29:20.599469 master-0 kubenswrapper[29097]: W0312 18:29:20.598398 29097 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 12 18:29:20.599469 master-0 kubenswrapper[29097]: W0312 18:29:20.598402 29097 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 18:29:20.599469 master-0 kubenswrapper[29097]: W0312 18:29:20.598406 29097 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 12 18:29:20.599469 master-0 kubenswrapper[29097]: W0312 18:29:20.598410 29097 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 12 18:29:20.599469 master-0 kubenswrapper[29097]: W0312 18:29:20.598415 29097 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 18:29:20.600021 master-0 kubenswrapper[29097]: W0312 18:29:20.598419 29097 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 12 18:29:20.600021 master-0 kubenswrapper[29097]: W0312 18:29:20.598423 29097 feature_gate.go:330] unrecognized feature gate: Example Mar 12 18:29:20.600021 master-0 kubenswrapper[29097]: W0312 18:29:20.598427 29097 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 18:29:20.600021 master-0 kubenswrapper[29097]: W0312 18:29:20.598431 29097 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 18:29:20.600021 master-0 kubenswrapper[29097]: W0312 18:29:20.598434 29097 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 18:29:20.600021 master-0 kubenswrapper[29097]: W0312 18:29:20.598438 29097 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 18:29:20.600021 master-0 kubenswrapper[29097]: W0312 18:29:20.598442 29097 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 12 18:29:20.600021 master-0 kubenswrapper[29097]: W0312 18:29:20.598445 29097 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 12 18:29:20.600021 master-0 kubenswrapper[29097]: W0312 18:29:20.598449 29097 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 18:29:20.600021 master-0 kubenswrapper[29097]: W0312 18:29:20.598453 29097 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 18:29:20.600021 master-0 kubenswrapper[29097]: W0312 18:29:20.598456 29097 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 18:29:20.600021 master-0 kubenswrapper[29097]: W0312 18:29:20.598460 29097 feature_gate.go:330] unrecognized feature gate: 
MachineConfigNodes Mar 12 18:29:20.600021 master-0 kubenswrapper[29097]: W0312 18:29:20.598464 29097 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 12 18:29:20.600021 master-0 kubenswrapper[29097]: W0312 18:29:20.598467 29097 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 12 18:29:20.600021 master-0 kubenswrapper[29097]: W0312 18:29:20.598472 29097 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 12 18:29:20.600021 master-0 kubenswrapper[29097]: W0312 18:29:20.598476 29097 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 12 18:29:20.600021 master-0 kubenswrapper[29097]: W0312 18:29:20.598480 29097 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 12 18:29:20.600021 master-0 kubenswrapper[29097]: W0312 18:29:20.598483 29097 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 12 18:29:20.600021 master-0 kubenswrapper[29097]: W0312 18:29:20.598487 29097 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 18:29:20.600021 master-0 kubenswrapper[29097]: W0312 18:29:20.598491 29097 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 18:29:20.600785 master-0 kubenswrapper[29097]: W0312 18:29:20.598494 29097 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 12 18:29:20.600785 master-0 kubenswrapper[29097]: W0312 18:29:20.598498 29097 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 12 18:29:20.600785 master-0 kubenswrapper[29097]: W0312 18:29:20.598503 29097 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 12 18:29:20.600785 master-0 kubenswrapper[29097]: W0312 18:29:20.598507 29097 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 18:29:20.600785 master-0 kubenswrapper[29097]: W0312 18:29:20.598511 29097 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 12 18:29:20.600785 master-0 kubenswrapper[29097]: W0312 18:29:20.598527 29097 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 18:29:20.600785 master-0 kubenswrapper[29097]: W0312 18:29:20.598531 29097 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 18:29:20.600785 master-0 kubenswrapper[29097]: I0312 18:29:20.598539 29097 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 12 18:29:20.600785 master-0 kubenswrapper[29097]: W0312 18:29:20.598636 29097 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 18:29:20.600785 master-0 kubenswrapper[29097]: W0312 18:29:20.598643 29097 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 18:29:20.600785 master-0 kubenswrapper[29097]: W0312 18:29:20.598647 29097 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 18:29:20.600785 master-0 kubenswrapper[29097]: W0312 18:29:20.598651 29097 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 12 18:29:20.600785 master-0 kubenswrapper[29097]: W0312 18:29:20.598655 29097 feature_gate.go:330] unrecognized feature gate: 
ClusterAPIInstallIBMCloud Mar 12 18:29:20.600785 master-0 kubenswrapper[29097]: W0312 18:29:20.598658 29097 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 18:29:20.600785 master-0 kubenswrapper[29097]: W0312 18:29:20.598662 29097 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 18:29:20.601203 master-0 kubenswrapper[29097]: W0312 18:29:20.598665 29097 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 18:29:20.601203 master-0 kubenswrapper[29097]: W0312 18:29:20.598669 29097 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 18:29:20.601203 master-0 kubenswrapper[29097]: W0312 18:29:20.598673 29097 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 12 18:29:20.601203 master-0 kubenswrapper[29097]: W0312 18:29:20.598677 29097 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 18:29:20.601203 master-0 kubenswrapper[29097]: W0312 18:29:20.598681 29097 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 18:29:20.601203 master-0 kubenswrapper[29097]: W0312 18:29:20.598685 29097 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 18:29:20.601203 master-0 kubenswrapper[29097]: W0312 18:29:20.598689 29097 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 12 18:29:20.601203 master-0 kubenswrapper[29097]: W0312 18:29:20.598692 29097 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 12 18:29:20.601203 master-0 kubenswrapper[29097]: W0312 18:29:20.598696 29097 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 18:29:20.601203 master-0 kubenswrapper[29097]: W0312 18:29:20.598700 29097 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 18:29:20.601203 master-0 kubenswrapper[29097]: W0312 18:29:20.598703 29097 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 12 
18:29:20.601203 master-0 kubenswrapper[29097]: W0312 18:29:20.598708 29097 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 12 18:29:20.601203 master-0 kubenswrapper[29097]: W0312 18:29:20.598714 29097 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 12 18:29:20.601203 master-0 kubenswrapper[29097]: W0312 18:29:20.598718 29097 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 12 18:29:20.601203 master-0 kubenswrapper[29097]: W0312 18:29:20.598722 29097 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 18:29:20.601203 master-0 kubenswrapper[29097]: W0312 18:29:20.598727 29097 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 18:29:20.601203 master-0 kubenswrapper[29097]: W0312 18:29:20.598731 29097 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 18:29:20.601203 master-0 kubenswrapper[29097]: W0312 18:29:20.598735 29097 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 12 18:29:20.601203 master-0 kubenswrapper[29097]: W0312 18:29:20.598740 29097 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 18:29:20.601676 master-0 kubenswrapper[29097]: W0312 18:29:20.598745 29097 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 18:29:20.601676 master-0 kubenswrapper[29097]: W0312 18:29:20.598749 29097 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 12 18:29:20.601676 master-0 kubenswrapper[29097]: W0312 18:29:20.598754 29097 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 18:29:20.601676 master-0 kubenswrapper[29097]: W0312 18:29:20.598758 29097 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 12 18:29:20.601676 master-0 kubenswrapper[29097]: W0312 18:29:20.598761 29097 feature_gate.go:330] unrecognized feature gate: 
MinimumKubeletVersion Mar 12 18:29:20.601676 master-0 kubenswrapper[29097]: W0312 18:29:20.598765 29097 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 18:29:20.601676 master-0 kubenswrapper[29097]: W0312 18:29:20.598769 29097 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 18:29:20.601676 master-0 kubenswrapper[29097]: W0312 18:29:20.598772 29097 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 12 18:29:20.601676 master-0 kubenswrapper[29097]: W0312 18:29:20.598776 29097 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 18:29:20.601676 master-0 kubenswrapper[29097]: W0312 18:29:20.598780 29097 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 18:29:20.601676 master-0 kubenswrapper[29097]: W0312 18:29:20.598784 29097 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 18:29:20.601676 master-0 kubenswrapper[29097]: W0312 18:29:20.598787 29097 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 18:29:20.601676 master-0 kubenswrapper[29097]: W0312 18:29:20.598791 29097 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 18:29:20.601676 master-0 kubenswrapper[29097]: W0312 18:29:20.598794 29097 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 12 18:29:20.601676 master-0 kubenswrapper[29097]: W0312 18:29:20.598798 29097 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 18:29:20.601676 master-0 kubenswrapper[29097]: W0312 18:29:20.598802 29097 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 18:29:20.601676 master-0 kubenswrapper[29097]: W0312 18:29:20.598805 29097 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 18:29:20.601676 master-0 kubenswrapper[29097]: W0312 18:29:20.598809 29097 
feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 18:29:20.601676 master-0 kubenswrapper[29097]: W0312 18:29:20.598812 29097 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 12 18:29:20.602147 master-0 kubenswrapper[29097]: W0312 18:29:20.598818 29097 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 12 18:29:20.602147 master-0 kubenswrapper[29097]: W0312 18:29:20.598822 29097 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 12 18:29:20.602147 master-0 kubenswrapper[29097]: W0312 18:29:20.598826 29097 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 18:29:20.602147 master-0 kubenswrapper[29097]: W0312 18:29:20.598829 29097 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 18:29:20.602147 master-0 kubenswrapper[29097]: W0312 18:29:20.598833 29097 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 18:29:20.602147 master-0 kubenswrapper[29097]: W0312 18:29:20.598837 29097 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 18:29:20.602147 master-0 kubenswrapper[29097]: W0312 18:29:20.598841 29097 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 18:29:20.602147 master-0 kubenswrapper[29097]: W0312 18:29:20.598844 29097 feature_gate.go:330] unrecognized feature gate: Example Mar 12 18:29:20.602147 master-0 kubenswrapper[29097]: W0312 18:29:20.598848 29097 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 18:29:20.602147 master-0 kubenswrapper[29097]: W0312 18:29:20.598852 29097 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 18:29:20.602147 master-0 kubenswrapper[29097]: W0312 18:29:20.598855 29097 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 12 18:29:20.602147 master-0 kubenswrapper[29097]: W0312 18:29:20.598859 29097 feature_gate.go:330] 
unrecognized feature gate: MachineAPIMigration
Mar 12 18:29:20.602147 master-0 kubenswrapper[29097]: W0312 18:29:20.598863 29097 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 18:29:20.602147 master-0 kubenswrapper[29097]: W0312 18:29:20.598867 29097 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 18:29:20.602147 master-0 kubenswrapper[29097]: W0312 18:29:20.598870 29097 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 18:29:20.602147 master-0 kubenswrapper[29097]: W0312 18:29:20.598874 29097 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 18:29:20.602147 master-0 kubenswrapper[29097]: W0312 18:29:20.598879 29097 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 18:29:20.602147 master-0 kubenswrapper[29097]: W0312 18:29:20.598884 29097 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 18:29:20.602147 master-0 kubenswrapper[29097]: W0312 18:29:20.598889 29097 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 18:29:20.602625 master-0 kubenswrapper[29097]: W0312 18:29:20.598893 29097 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 18:29:20.602625 master-0 kubenswrapper[29097]: W0312 18:29:20.598898 29097 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 18:29:20.602625 master-0 kubenswrapper[29097]: W0312 18:29:20.598902 29097 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 18:29:20.602625 master-0 kubenswrapper[29097]: W0312 18:29:20.598906 29097 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 18:29:20.602625 master-0 kubenswrapper[29097]: W0312 18:29:20.598911 29097 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 18:29:20.602625 master-0 kubenswrapper[29097]: W0312 18:29:20.598915 29097 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 18:29:20.602625 master-0 kubenswrapper[29097]: W0312 18:29:20.598919 29097 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 18:29:20.602625 master-0 kubenswrapper[29097]: W0312 18:29:20.598923 29097 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 18:29:20.602625 master-0 kubenswrapper[29097]: I0312 18:29:20.598929 29097 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 18:29:20.602625 master-0 kubenswrapper[29097]: I0312 18:29:20.599075 29097 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 12 18:29:20.602625 master-0 kubenswrapper[29097]: I0312 18:29:20.602372 29097 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 12 18:29:20.602625 master-0 kubenswrapper[29097]: I0312 18:29:20.602439 29097 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 12 18:29:20.602917 master-0 kubenswrapper[29097]: I0312 18:29:20.602665 29097 server.go:997] "Starting client certificate rotation"
Mar 12 18:29:20.602917 master-0 kubenswrapper[29097]: I0312 18:29:20.602676 29097 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 12 18:29:20.602972 master-0 kubenswrapper[29097]: I0312 18:29:20.602871 29097 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-13 18:02:20 +0000 UTC, rotation deadline is 2026-03-13 13:53:57.320577564 +0000 UTC
Mar 12 18:29:20.603012 master-0 kubenswrapper[29097]: I0312 18:29:20.602967 29097 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h24m36.71761401s for next certificate rotation
Mar 12 18:29:20.603325 master-0 kubenswrapper[29097]: I0312 18:29:20.603298 29097 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 12 18:29:20.604487 master-0 kubenswrapper[29097]: I0312 18:29:20.604463 29097 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 12 18:29:20.606648 master-0 kubenswrapper[29097]: I0312 18:29:20.606620 29097 log.go:25] "Validated CRI v1 runtime API"
Mar 12 18:29:20.610332 master-0 kubenswrapper[29097]: I0312 18:29:20.610300 29097 log.go:25] "Validated CRI v1 image API"
Mar 12 18:29:20.611237 master-0 kubenswrapper[29097]: I0312 18:29:20.611202 29097 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 12 18:29:20.621944 master-0 kubenswrapper[29097]: I0312 18:29:20.621895 29097 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 f6c40199-182a-4be5-87d7-87de18d890be:/dev/vda3]
Mar 12 18:29:20.623245 master-0 kubenswrapper[29097]: I0312 18:29:20.621933 29097 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0182a4eff93f7ac8355fe5920af6a23f38515c1d4a493448a8ac4ea00cfb1b71/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0182a4eff93f7ac8355fe5920af6a23f38515c1d4a493448a8ac4ea00cfb1b71/userdata/shm major:0 minor:573 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/025f6ef7726027b226244a49b1b7aa7b4b726a6a64b08241b8944ae1790681b8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/025f6ef7726027b226244a49b1b7aa7b4b726a6a64b08241b8944ae1790681b8/userdata/shm major:0 minor:567 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/07e040d6dfa9951cac42e33315b3d655ef1dac90f6ba66c364219500701a9ef4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/07e040d6dfa9951cac42e33315b3d655ef1dac90f6ba66c364219500701a9ef4/userdata/shm major:0 minor:739 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/16d696d609e4d9275dce2cfecd0a4d1078c8e60ea7f137e92635d2bfd874a46b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/16d696d609e4d9275dce2cfecd0a4d1078c8e60ea7f137e92635d2bfd874a46b/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0}
/run/containers/storage/overlay-containers/17d1a088b8419eadaf38001b6fa832ae43cb7cf4605e77942c3aeacc31e4a82a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/17d1a088b8419eadaf38001b6fa832ae43cb7cf4605e77942c3aeacc31e4a82a/userdata/shm major:0 minor:844 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/17f1eb5b22dadcc1a27bca5d2e41cabae79a53d549f65fc68a87a8776fc86dbf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/17f1eb5b22dadcc1a27bca5d2e41cabae79a53d549f65fc68a87a8776fc86dbf/userdata/shm major:0 minor:44 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1c062db0efa15fd6679d2718a5857a9b8db81f25fe5e3a47bd35e6f192db0dd6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1c062db0efa15fd6679d2718a5857a9b8db81f25fe5e3a47bd35e6f192db0dd6/userdata/shm major:0 minor:250 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1cd179b2f7f0fe564fa7a9477bf555cfaf8b89c4a460b563f7a642b74759364e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1cd179b2f7f0fe564fa7a9477bf555cfaf8b89c4a460b563f7a642b74759364e/userdata/shm major:0 minor:546 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1d311f9e8cf6ad3cdfb6335b00f9729ed813d3bacc476060ca21b806ee856231/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1d311f9e8cf6ad3cdfb6335b00f9729ed813d3bacc476060ca21b806ee856231/userdata/shm major:0 minor:267 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/238ce3dd6965f9273cbd743e0b3e1979d392d0ae170e37e7a7824e217686dfd8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/238ce3dd6965f9273cbd743e0b3e1979d392d0ae170e37e7a7824e217686dfd8/userdata/shm major:0 minor:417 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/2acd016733769b6d86086e869e4b5b990685163236e57389d21ff18ee823169b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2acd016733769b6d86086e869e4b5b990685163236e57389d21ff18ee823169b/userdata/shm major:0 minor:374 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3092260e94cf7b40349ae07a2ae8e596f460006829227c4e274b91910ac605bd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3092260e94cf7b40349ae07a2ae8e596f460006829227c4e274b91910ac605bd/userdata/shm major:0 minor:169 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/38c299b0655225599ee9b03928de90a7927480cc6329508c816e9e361bbcfa16/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/38c299b0655225599ee9b03928de90a7927480cc6329508c816e9e361bbcfa16/userdata/shm major:0 minor:256 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3e5cfbce195dab841bc3d549f7ec807dce5f9f747be2dff9f428eff5e81f95a6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3e5cfbce195dab841bc3d549f7ec807dce5f9f747be2dff9f428eff5e81f95a6/userdata/shm major:0 minor:337 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/419bbcc10e95d196cd0f08dbf057bbc2aa7a617fdfb7f0d1b356baa7bbabca04/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/419bbcc10e95d196cd0f08dbf057bbc2aa7a617fdfb7f0d1b356baa7bbabca04/userdata/shm major:0 minor:273 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/427d9fb4da53770ccc67c12bc3aaae3e507cd75207365e40c1611eeaaf274b84/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/427d9fb4da53770ccc67c12bc3aaae3e507cd75207365e40c1611eeaaf274b84/userdata/shm major:0 minor:47 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/467832511abdd120edabe55a66306c8828fbfde7aa084b7647ffcfdeb1475b2c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/467832511abdd120edabe55a66306c8828fbfde7aa084b7647ffcfdeb1475b2c/userdata/shm major:0 minor:541 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/473fbb88708371a6bebda6f8bc6fbc876db715f79766d57af954db0a509d99f7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/473fbb88708371a6bebda6f8bc6fbc876db715f79766d57af954db0a509d99f7/userdata/shm major:0 minor:1014 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/477c2eb598b783cdff738fbc37cca5c05334e3fbafc9bde3daf1f7428b823f9e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/477c2eb598b783cdff738fbc37cca5c05334e3fbafc9bde3daf1f7428b823f9e/userdata/shm major:0 minor:1016 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/49a13810e28c69eccfa523be3ac0813defa610868fd3abaf3cd37d9177c29502/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/49a13810e28c69eccfa523be3ac0813defa610868fd3abaf3cd37d9177c29502/userdata/shm major:0 minor:993 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/49ed17fafdb495990cffcb60e09d22b57348e9bbf59679c7126d84628d0f24f1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/49ed17fafdb495990cffcb60e09d22b57348e9bbf59679c7126d84628d0f24f1/userdata/shm major:0 minor:711 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4e02da5dec5be8e8f6d924d6c2fb726f7b25e71cacfc4eb1074f2a274b8a70bf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4e02da5dec5be8e8f6d924d6c2fb726f7b25e71cacfc4eb1074f2a274b8a70bf/userdata/shm major:0 minor:371 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/59af426bb753de2f517179014e6cfd5fa8b94b02ab3fedab6e4b42ba0bebac29/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/59af426bb753de2f517179014e6cfd5fa8b94b02ab3fedab6e4b42ba0bebac29/userdata/shm major:0 minor:575 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5a8691e7dd271734f7d5ba67a7c54479d001ddcb15882ee789f7857b1fdecfe2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5a8691e7dd271734f7d5ba67a7c54479d001ddcb15882ee789f7857b1fdecfe2/userdata/shm major:0 minor:1051 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/60ce66e2a62ed17df4cad9067d0bb6d4940a38dc4b5a5337ba95a9117aca3c70/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/60ce66e2a62ed17df4cad9067d0bb6d4940a38dc4b5a5337ba95a9117aca3c70/userdata/shm major:0 minor:265 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/64939e6ed4a35637d0a2c2bc028af5d4314b96efb512849766779eb1c4382a35/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/64939e6ed4a35637d0a2c2bc028af5d4314b96efb512849766779eb1c4382a35/userdata/shm major:0 minor:270 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/66ebd53076fd17791334e64546a2f2ecb5fadc07eb32a382f94ef82be445ec00/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/66ebd53076fd17791334e64546a2f2ecb5fadc07eb32a382f94ef82be445ec00/userdata/shm major:0 minor:1049 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/677d598751a9389168de4b8d58e7ebd447bfc781ea7149cdcbbd5de656faaac5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/677d598751a9389168de4b8d58e7ebd447bfc781ea7149cdcbbd5de656faaac5/userdata/shm major:0 minor:380 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/6a236b16d484393f57933afd10e1f7dd4dd1ef7cdb2b760e6b6520b399e5ee85/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6a236b16d484393f57933afd10e1f7dd4dd1ef7cdb2b760e6b6520b399e5ee85/userdata/shm major:0 minor:1018 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6d3dbd3e29a6d7e3e111f4c45f534b1d831fee55b19f442dc477ede7e14f8ccb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6d3dbd3e29a6d7e3e111f4c45f534b1d831fee55b19f442dc477ede7e14f8ccb/userdata/shm major:0 minor:155 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6dd6381115d9cbf9ba7c1a108737553c31041609750cc0e631e36ed92f66311d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6dd6381115d9cbf9ba7c1a108737553c31041609750cc0e631e36ed92f66311d/userdata/shm major:0 minor:100 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/70abcecf11f5f6f42b55c74bce2244e7addd9a94c042b787c6169811b3dbde3f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/70abcecf11f5f6f42b55c74bce2244e7addd9a94c042b787c6169811b3dbde3f/userdata/shm major:0 minor:822 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/74bbf11cd33cced50ba626f06b188adf24ce7f72b1161eb2c06db1ce6ae46dd5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/74bbf11cd33cced50ba626f06b188adf24ce7f72b1161eb2c06db1ce6ae46dd5/userdata/shm major:0 minor:139 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7982749b657bfab7994ceaf29145e70be8f2384ed3fb94c1cb38726c467e71d6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7982749b657bfab7994ceaf29145e70be8f2384ed3fb94c1cb38726c467e71d6/userdata/shm major:0 minor:576 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/7bb177ed28141c5fad6532f2da685328c613d7795aff40f2cb06337556b42750/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7bb177ed28141c5fad6532f2da685328c613d7795aff40f2cb06337556b42750/userdata/shm major:0 minor:850 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7f1277c10cbb7843daf01cf48e1bbb02b9db679e347497370ac485520e63be09/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7f1277c10cbb7843daf01cf48e1bbb02b9db679e347497370ac485520e63be09/userdata/shm major:0 minor:365 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/81a2ffe73dc94d42d0d0d238f88887bd148c25d4cd10443967e58bd472ed7cfd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/81a2ffe73dc94d42d0d0d238f88887bd148c25d4cd10443967e58bd472ed7cfd/userdata/shm major:0 minor:424 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/85053d5f110db3eb5945372c71fba0aaee9c7dfe111d937780aa0b35eca2e681/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/85053d5f110db3eb5945372c71fba0aaee9c7dfe111d937780aa0b35eca2e681/userdata/shm major:0 minor:416 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/86c89d73955641e1c897c48a0e24b070831cd22e13c7526466c6f9aac066f9fb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/86c89d73955641e1c897c48a0e24b070831cd22e13c7526466c6f9aac066f9fb/userdata/shm major:0 minor:746 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/88d123f937ec5c733d7e95dda1a51126ac31987054c509b5e60506575b947b18/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/88d123f937ec5c733d7e95dda1a51126ac31987054c509b5e60506575b947b18/userdata/shm major:0 minor:569 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8b4322c396b926726b3445bf3f4c514365e3dc0962cabf32996b7feaa6ce265c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8b4322c396b926726b3445bf3f4c514365e3dc0962cabf32996b7feaa6ce265c/userdata/shm major:0 minor:570 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9198031b5b07e64ace92b4c83419c05c8903b54c6277e82aff6a36c6cdfe7576/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9198031b5b07e64ace92b4c83419c05c8903b54c6277e82aff6a36c6cdfe7576/userdata/shm major:0 minor:1083 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/91d19dd0041e348f5ab95fd10ff19be4195ac501d593d4346b94e73b4b7bfba3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/91d19dd0041e348f5ab95fd10ff19be4195ac501d593d4346b94e73b4b7bfba3/userdata/shm major:0 minor:252 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/94fe08ab3fbb45153add02b6a25a4870b04bc9f5d9d03ddb9283e70a2fe32299/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/94fe08ab3fbb45153add02b6a25a4870b04bc9f5d9d03ddb9283e70a2fe32299/userdata/shm major:0 minor:93 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9c555e5bffa63ad24656c5dfa5ef32654f3cce81a377d07d84caf4aca5f33e3f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9c555e5bffa63ad24656c5dfa5ef32654f3cce81a377d07d84caf4aca5f33e3f/userdata/shm major:0 minor:56 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9c936c5cf4325b6eaecd87ab37df8b339b08dfc494b408b448e5f3edd8efcd5a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9c936c5cf4325b6eaecd87ab37df8b339b08dfc494b408b448e5f3edd8efcd5a/userdata/shm major:0 minor:421 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/9efb49a7f1b6a902873e9b844b8f9a0a68e95cea55ba6d37aefc0f305d7e46f9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9efb49a7f1b6a902873e9b844b8f9a0a68e95cea55ba6d37aefc0f305d7e46f9/userdata/shm major:0 minor:84 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9f552493910bdb73df860ba3e68ba62d10417ad2cd26090e67bcb0c06153f976/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9f552493910bdb73df860ba3e68ba62d10417ad2cd26090e67bcb0c06153f976/userdata/shm major:0 minor:786 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a27b7d74527b56755a6c2c471b3ca3c73b2cfc54277efe40b5551df95fef2671/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a27b7d74527b56755a6c2c471b3ca3c73b2cfc54277efe40b5551df95fef2671/userdata/shm major:0 minor:327 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a3668de3fedf57192290be88e895d89eca099cb587eeab867bde241aeee908bc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a3668de3fedf57192290be88e895d89eca099cb587eeab867bde241aeee908bc/userdata/shm major:0 minor:838 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a8072978949c36bf7009bf60cefe0dca093e821c0beb2fafb1092bfaa0b6ca78/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a8072978949c36bf7009bf60cefe0dca093e821c0beb2fafb1092bfaa0b6ca78/userdata/shm major:0 minor:842 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a91d85c0ce3e6a8b926dbcc4b0882326fc962f35e4dc2d7cda43fa3db3301729/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a91d85c0ce3e6a8b926dbcc4b0882326fc962f35e4dc2d7cda43fa3db3301729/userdata/shm major:0 minor:729 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/ab67b82c7d40212f9275c2d813cf0ddfb1de4b0eb68ab01e5d797fad81a0d351/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ab67b82c7d40212f9275c2d813cf0ddfb1de4b0eb68ab01e5d797fad81a0d351/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/af342b8bd10a5707fb2e78e4192b40fedda1b3166a7a1ba47d9935fc638c9b76/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/af342b8bd10a5707fb2e78e4192b40fedda1b3166a7a1ba47d9935fc638c9b76/userdata/shm major:0 minor:846 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b2474f5d479286c70d652654d0e6946155d42c6b7dd11abc4f60fd4bf3123854/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b2474f5d479286c70d652654d0e6946155d42c6b7dd11abc4f60fd4bf3123854/userdata/shm major:0 minor:248 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bafb4a547df5e8f39b94a88d98e85d34e5a0230468f2013bc4da6ee9fbc59ee3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bafb4a547df5e8f39b94a88d98e85d34e5a0230468f2013bc4da6ee9fbc59ee3/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bc7646036582f47fdf8d0b7175478e014e467156ca033a5485ffd89d7588c9e5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bc7646036582f47fdf8d0b7175478e014e467156ca033a5485ffd89d7588c9e5/userdata/shm major:0 minor:853 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bd825685a2a078da6de9c77b8a86a4456fa5c958068f18e079159355b91a76d4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bd825685a2a078da6de9c77b8a86a4456fa5c958068f18e079159355b91a76d4/userdata/shm major:0 minor:827 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/bf988f8c0a2c5b4133ef3fafc379f42b9d2b5f0585dc6f41596f02be776951fc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bf988f8c0a2c5b4133ef3fafc379f42b9d2b5f0585dc6f41596f02be776951fc/userdata/shm major:0 minor:828 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c14a32fc9f0111ac97c8d0756c820cfe5f40ed691d6de42ac60400f58318b138/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c14a32fc9f0111ac97c8d0756c820cfe5f40ed691d6de42ac60400f58318b138/userdata/shm major:0 minor:114 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c7f70b704680d5914ffad158cbccda7455cb9abad7ebd364fad668180fbeff37/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c7f70b704680d5914ffad158cbccda7455cb9abad7ebd364fad668180fbeff37/userdata/shm major:0 minor:112 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c8f19a12a173a3644a8c884e60505576df72aa86c56475b9cb55da771d09977f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c8f19a12a173a3644a8c884e60505576df72aa86c56475b9cb55da771d09977f/userdata/shm major:0 minor:1080 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cbd303c81d220cd5ed6e63d675881c37da5cce6a8a3c62add5c0bf5721b5fd9f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cbd303c81d220cd5ed6e63d675881c37da5cce6a8a3c62add5c0bf5721b5fd9f/userdata/shm major:0 minor:718 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cdac8f08c2f48b7e4861a00ec2d8e5264134cf6ddac6c83f56497027e5816cb7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cdac8f08c2f48b7e4861a00ec2d8e5264134cf6ddac6c83f56497027e5816cb7/userdata/shm major:0 minor:765 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/d8368573a80125faaafb84704a42eab10d08f21db89fb0224a3e775974fbecf4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d8368573a80125faaafb84704a42eab10d08f21db89fb0224a3e775974fbecf4/userdata/shm major:0 minor:840 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/da8a3dd02c7bc3e5376ebe604c414570540ccdc280e818957636de9c32beb180/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/da8a3dd02c7bc3e5376ebe604c414570540ccdc280e818957636de9c32beb180/userdata/shm major:0 minor:574 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dd6da0ee34b8892cb152f5bedf147197e5f0cc5b453d7c1a52f8c962aaace2e8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dd6da0ee34b8892cb152f5bedf147197e5f0cc5b453d7c1a52f8c962aaace2e8/userdata/shm major:0 minor:832 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e11e6ab5433862f323f8ba8f5b3beee99fbf9268c9b94118367fbf5cbb898018/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e11e6ab5433862f323f8ba8f5b3beee99fbf9268c9b94118367fbf5cbb898018/userdata/shm major:0 minor:449 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e3cce5ce786ddb4af71a8112135cad1426f074e7b12c67ae740271017aa946b3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e3cce5ce786ddb4af71a8112135cad1426f074e7b12c67ae740271017aa946b3/userdata/shm major:0 minor:254 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e54bd9f4ed4a4d50ffb03de0e433f3897b33df6a46c77fdb71d0900fa9a91e17/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e54bd9f4ed4a4d50ffb03de0e433f3897b33df6a46c77fdb71d0900fa9a91e17/userdata/shm major:0 minor:261 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/e7b7f3534352e488adef6510c4ad914236d57c0ac52f6e0d4e107e52563cb840/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e7b7f3534352e488adef6510c4ad914236d57c0ac52f6e0d4e107e52563cb840/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ecf7670cd0c657ac23db39730253e7de6d6d1e9634e025fe447cc9e07fe1d91a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ecf7670cd0c657ac23db39730253e7de6d6d1e9634e025fe447cc9e07fe1d91a/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f58312bc5c3e22538ea35107690dd2a543db5a56cd4a19ebaf6640fbb1518551/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f58312bc5c3e22538ea35107690dd2a543db5a56cd4a19ebaf6640fbb1518551/userdata/shm major:0 minor:1099 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f61feb293ec886a1805e62ec052aa4ca410bad475cca6977bbcf9b16b205a3fd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f61feb293ec886a1805e62ec052aa4ca410bad475cca6977bbcf9b16b205a3fd/userdata/shm major:0 minor:512 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fa6c2fe81e494b2ba395dd1830ab3075ce81e641a81c84edd4df4d6a6849559f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fa6c2fe81e494b2ba395dd1830ab3075ce81e641a81c84edd4df4d6a6849559f/userdata/shm major:0 minor:1142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fb3de63e9ae8f0f90ed99bf3dc6471ec32942e542a8f9f641416a08fbffeda83/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fb3de63e9ae8f0f90ed99bf3dc6471ec32942e542a8f9f641416a08fbffeda83/userdata/shm major:0 minor:577 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/030160af-c915-4f00-903a-1c4b5c2b719a/volumes/kubernetes.io~projected/kube-api-access-9p4dz:{mountpoint:/var/lib/kubelet/pods/030160af-c915-4f00-903a-1c4b5c2b719a/volumes/kubernetes.io~projected/kube-api-access-9p4dz major:0 minor:491 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/030160af-c915-4f00-903a-1c4b5c2b719a/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/030160af-c915-4f00-903a-1c4b5c2b719a/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/055f5c67-f512-4510-99c5-e194944b0599/volumes/kubernetes.io~projected/kube-api-access-tkt7d:{mountpoint:/var/lib/kubelet/pods/055f5c67-f512-4510-99c5-e194944b0599/volumes/kubernetes.io~projected/kube-api-access-tkt7d major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/055f5c67-f512-4510-99c5-e194944b0599/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/055f5c67-f512-4510-99c5-e194944b0599/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/062f1b21-2ffc-47da-8334-427c3b2a1a90/volumes/kubernetes.io~projected/kube-api-access-jn9nf:{mountpoint:/var/lib/kubelet/pods/062f1b21-2ffc-47da-8334-427c3b2a1a90/volumes/kubernetes.io~projected/kube-api-access-jn9nf major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/062f1b21-2ffc-47da-8334-427c3b2a1a90/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/062f1b21-2ffc-47da-8334-427c3b2a1a90/volumes/kubernetes.io~secret/serving-cert major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0cc54e47-af53-448a-b1c9-043710890a32/volumes/kubernetes.io~projected/kube-api-access-bdc26:{mountpoint:/var/lib/kubelet/pods/0cc54e47-af53-448a-b1c9-043710890a32/volumes/kubernetes.io~projected/kube-api-access-bdc26 major:0 minor:730 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0fb78c61-2051-42e2-8668-fa7404ccac43/volumes/kubernetes.io~projected/kube-api-access-zsdjs:{mountpoint:/var/lib/kubelet/pods/0fb78c61-2051-42e2-8668-fa7404ccac43/volumes/kubernetes.io~projected/kube-api-access-zsdjs major:0 minor:796 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0fb78c61-2051-42e2-8668-fa7404ccac43/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/0fb78c61-2051-42e2-8668-fa7404ccac43/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:794 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1287cbb9-c9f6-48d2-9fda-f4464074e41b/volumes/kubernetes.io~projected/kube-api-access-hjz8k:{mountpoint:/var/lib/kubelet/pods/1287cbb9-c9f6-48d2-9fda-f4464074e41b/volumes/kubernetes.io~projected/kube-api-access-hjz8k major:0 minor:807 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1287cbb9-c9f6-48d2-9fda-f4464074e41b/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/1287cbb9-c9f6-48d2-9fda-f4464074e41b/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:804 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1c016b1e-d47c-47d4-a15f-4160e7731c82/volumes/kubernetes.io~projected/kube-api-access-clz8x:{mountpoint:/var/lib/kubelet/pods/1c016b1e-d47c-47d4-a15f-4160e7731c82/volumes/kubernetes.io~projected/kube-api-access-clz8x major:0 minor:716 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1c016b1e-d47c-47d4-a15f-4160e7731c82/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1c016b1e-d47c-47d4-a15f-4160e7731c82/volumes/kubernetes.io~secret/serving-cert major:0 minor:705 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa/volumes/kubernetes.io~projected/kube-api-access-s55hv:{mountpoint:/var/lib/kubelet/pods/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa/volumes/kubernetes.io~projected/kube-api-access-s55hv major:0 minor:230 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/236f2886-bb69-49a7-9471-36454fd1cbd3/volumes/kubernetes.io~projected/kube-api-access-b6ggg:{mountpoint:/var/lib/kubelet/pods/236f2886-bb69-49a7-9471-36454fd1cbd3/volumes/kubernetes.io~projected/kube-api-access-b6ggg major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/236f2886-bb69-49a7-9471-36454fd1cbd3/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/236f2886-bb69-49a7-9471-36454fd1cbd3/volumes/kubernetes.io~secret/serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327/volumes/kubernetes.io~projected/kube-api-access-x6595:{mountpoint:/var/lib/kubelet/pods/25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327/volumes/kubernetes.io~projected/kube-api-access-x6595 major:0 minor:820 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327/volumes/kubernetes.io~secret/webhook-certs major:0 minor:812 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/266b9f4f-3fb4-474d-84df-0a6c687c7e9a/volumes/kubernetes.io~projected/kube-api-access-6tmqs:{mountpoint:/var/lib/kubelet/pods/266b9f4f-3fb4-474d-84df-0a6c687c7e9a/volumes/kubernetes.io~projected/kube-api-access-6tmqs major:0 minor:540 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/266b9f4f-3fb4-474d-84df-0a6c687c7e9a/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/266b9f4f-3fb4-474d-84df-0a6c687c7e9a/volumes/kubernetes.io~secret/metrics-tls major:0 minor:536 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed/volumes/kubernetes.io~projected/kube-api-access-wlf77:{mountpoint:/var/lib/kubelet/pods/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed/volumes/kubernetes.io~projected/kube-api-access-wlf77 major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:410 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:415 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/33feec78-4592-4343-965b-aa1b7044fcf3/volumes/kubernetes.io~projected/kube-api-access-ptrtx:{mountpoint:/var/lib/kubelet/pods/33feec78-4592-4343-965b-aa1b7044fcf3/volumes/kubernetes.io~projected/kube-api-access-ptrtx major:0 minor:303 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/34cbf061-4c76-476e-bed9-0a133c744862/volumes/kubernetes.io~projected/kube-api-access-gmsnk:{mountpoint:/var/lib/kubelet/pods/34cbf061-4c76-476e-bed9-0a133c744862/volumes/kubernetes.io~projected/kube-api-access-gmsnk major:0 minor:737 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/34cbf061-4c76-476e-bed9-0a133c744862/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/34cbf061-4c76-476e-bed9-0a133c744862/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:515 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/37cd9c0a-697e-4e67-932b-b331ff77c8c0/volumes/kubernetes.io~projected/kube-api-access-pfpb9:{mountpoint:/var/lib/kubelet/pods/37cd9c0a-697e-4e67-932b-b331ff77c8c0/volumes/kubernetes.io~projected/kube-api-access-pfpb9 major:0 minor:233 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/37cd9c0a-697e-4e67-932b-b331ff77c8c0/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/37cd9c0a-697e-4e67-932b-b331ff77c8c0/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/38a4bf73-479e-4bbf-9aa3-639fc288c8bc/volumes/kubernetes.io~projected/kube-api-access-2pn9h:{mountpoint:/var/lib/kubelet/pods/38a4bf73-479e-4bbf-9aa3-639fc288c8bc/volumes/kubernetes.io~projected/kube-api-access-2pn9h major:0 minor:105 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3d77a98a-0176-4924-81d3-8e9890852b38/volumes/kubernetes.io~projected/kube-api-access-f72ng:{mountpoint:/var/lib/kubelet/pods/3d77a98a-0176-4924-81d3-8e9890852b38/volumes/kubernetes.io~projected/kube-api-access-f72ng major:0 minor:1079 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3d77a98a-0176-4924-81d3-8e9890852b38/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/3d77a98a-0176-4924-81d3-8e9890852b38/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:1077 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3d77a98a-0176-4924-81d3-8e9890852b38/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/3d77a98a-0176-4924-81d3-8e9890852b38/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1076 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4048e453-a983-4708-89b6-a81af0067e29/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/4048e453-a983-4708-89b6-a81af0067e29/volumes/kubernetes.io~projected/kube-api-access major:0 minor:696 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4048e453-a983-4708-89b6-a81af0067e29/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/4048e453-a983-4708-89b6-a81af0067e29/volumes/kubernetes.io~secret/serving-cert major:0 minor:683 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/41c1bd85-369e-4341-9e80-8b4b248b5572/volumes/kubernetes.io~projected/kube-api-access-q7pjn:{mountpoint:/var/lib/kubelet/pods/41c1bd85-369e-4341-9e80-8b4b248b5572/volumes/kubernetes.io~projected/kube-api-access-q7pjn major:0 minor:1048 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/41c1bd85-369e-4341-9e80-8b4b248b5572/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/41c1bd85-369e-4341-9e80-8b4b248b5572/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:1045 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/41c1bd85-369e-4341-9e80-8b4b248b5572/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/41c1bd85-369e-4341-9e80-8b4b248b5572/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:1037 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4519000b-e475-4c26-a1c0-bf05cd9c242b/volumes/kubernetes.io~projected/kube-api-access-5x57x:{mountpoint:/var/lib/kubelet/pods/4519000b-e475-4c26-a1c0-bf05cd9c242b/volumes/kubernetes.io~projected/kube-api-access-5x57x major:0 minor:111 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/455f0aad-add2-49d0-995c-f92467bce2d6/volumes/kubernetes.io~projected/kube-api-access-pxsgv:{mountpoint:/var/lib/kubelet/pods/455f0aad-add2-49d0-995c-f92467bce2d6/volumes/kubernetes.io~projected/kube-api-access-pxsgv major:0 minor:118 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/45aa4887-c913-4ece-ae34-fcde33832621/volumes/kubernetes.io~projected/kube-api-access-4vr66:{mountpoint:/var/lib/kubelet/pods/45aa4887-c913-4ece-ae34-fcde33832621/volumes/kubernetes.io~projected/kube-api-access-4vr66 major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4687cf53-55d7-42b7-b24d-e57da3989fd6/volumes/kubernetes.io~projected/kube-api-access-68xhl:{mountpoint:/var/lib/kubelet/pods/4687cf53-55d7-42b7-b24d-e57da3989fd6/volumes/kubernetes.io~projected/kube-api-access-68xhl major:0 minor:818 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/4687cf53-55d7-42b7-b24d-e57da3989fd6/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/4687cf53-55d7-42b7-b24d-e57da3989fd6/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:817 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6/volumes/kubernetes.io~projected/kube-api-access-h65dg:{mountpoint:/var/lib/kubelet/pods/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6/volumes/kubernetes.io~projected/kube-api-access-h65dg major:0 minor:1047 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6/volumes/kubernetes.io~secret/certs major:0 minor:1036 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:1046 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/47850839-bb4b-41e9-ac31-f1cabbb4926d/volumes/kubernetes.io~projected/kube-api-access-krrkl:{mountpoint:/var/lib/kubelet/pods/47850839-bb4b-41e9-ac31-f1cabbb4926d/volumes/kubernetes.io~projected/kube-api-access-krrkl major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/47850839-bb4b-41e9-ac31-f1cabbb4926d/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/47850839-bb4b-41e9-ac31-f1cabbb4926d/volumes/kubernetes.io~secret/srv-cert major:0 minor:564 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/492e9833-4513-4f2f-b865-d05a8973fadc/volumes/kubernetes.io~projected/kube-api-access-5kn2k:{mountpoint:/var/lib/kubelet/pods/492e9833-4513-4f2f-b865-d05a8973fadc/volumes/kubernetes.io~projected/kube-api-access-5kn2k major:0 minor:698 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/492e9833-4513-4f2f-b865-d05a8973fadc/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/492e9833-4513-4f2f-b865-d05a8973fadc/volumes/kubernetes.io~secret/proxy-tls major:0 minor:720 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64/volumes/kubernetes.io~projected/kube-api-access-fdlxn:{mountpoint:/var/lib/kubelet/pods/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64/volumes/kubernetes.io~projected/kube-api-access-fdlxn major:0 minor:238 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:562 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/518ffff8-8119-41be-8b76-ce49d5751254/volumes/kubernetes.io~projected/kube-api-access-4glbr:{mountpoint:/var/lib/kubelet/pods/518ffff8-8119-41be-8b76-ce49d5751254/volumes/kubernetes.io~projected/kube-api-access-4glbr major:0 minor:1013 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/518ffff8-8119-41be-8b76-ce49d5751254/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/518ffff8-8119-41be-8b76-ce49d5751254/volumes/kubernetes.io~secret/default-certificate major:0 minor:1009 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/518ffff8-8119-41be-8b76-ce49d5751254/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/518ffff8-8119-41be-8b76-ce49d5751254/volumes/kubernetes.io~secret/metrics-certs major:0 minor:1010 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/518ffff8-8119-41be-8b76-ce49d5751254/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/518ffff8-8119-41be-8b76-ce49d5751254/volumes/kubernetes.io~secret/stats-auth major:0 minor:1011 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/51eb717b-d11f-4bc3-8df6-deb51d5889f3/volumes/kubernetes.io~projected/kube-api-access-gbnx8:{mountpoint:/var/lib/kubelet/pods/51eb717b-d11f-4bc3-8df6-deb51d5889f3/volumes/kubernetes.io~projected/kube-api-access-gbnx8 major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/51eb717b-d11f-4bc3-8df6-deb51d5889f3/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/51eb717b-d11f-4bc3-8df6-deb51d5889f3/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:561 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/52d3dc87-0bf8-4a62-a6fc-0ffa6060c6a8/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/52d3dc87-0bf8-4a62-a6fc-0ffa6060c6a8/volumes/kubernetes.io~secret/tls-certificates major:0 minor:1005 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f/volumes/kubernetes.io~projected/kube-api-access-9jgbv:{mountpoint:/var/lib/kubelet/pods/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f/volumes/kubernetes.io~projected/kube-api-access-9jgbv major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f/volumes/kubernetes.io~secret/metrics-certs major:0 minor:566 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/604044f4-9b0b-4747-827d-843f3cfa7077/volumes/kubernetes.io~projected/kube-api-access-fqzmm:{mountpoint:/var/lib/kubelet/pods/604044f4-9b0b-4747-827d-843f3cfa7077/volumes/kubernetes.io~projected/kube-api-access-fqzmm major:0 minor:821 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/604044f4-9b0b-4747-827d-843f3cfa7077/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/604044f4-9b0b-4747-827d-843f3cfa7077/volumes/kubernetes.io~secret/proxy-tls major:0 minor:815 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/74eb1407-de29-42e5-9e6c-ce1bec3a9d80/volumes/kubernetes.io~projected/kube-api-access-b6ggc:{mountpoint:/var/lib/kubelet/pods/74eb1407-de29-42e5-9e6c-ce1bec3a9d80/volumes/kubernetes.io~projected/kube-api-access-b6ggc major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74eb1407-de29-42e5-9e6c-ce1bec3a9d80/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/74eb1407-de29-42e5-9e6c-ce1bec3a9d80/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/78c13011-7a79-445f-807c-4f5e75643549/volumes/kubernetes.io~projected/kube-api-access-bmntw:{mountpoint:/var/lib/kubelet/pods/78c13011-7a79-445f-807c-4f5e75643549/volumes/kubernetes.io~projected/kube-api-access-bmntw major:0 minor:1075 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/78c13011-7a79-445f-807c-4f5e75643549/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/78c13011-7a79-445f-807c-4f5e75643549/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:1070 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/78c13011-7a79-445f-807c-4f5e75643549/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/78c13011-7a79-445f-807c-4f5e75643549/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1074 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ad05507-e242-4ff8-ae80-c16ff9ee68e2/volumes/kubernetes.io~projected/kube-api-access-th8tc:{mountpoint:/var/lib/kubelet/pods/8ad05507-e242-4ff8-ae80-c16ff9ee68e2/volumes/kubernetes.io~projected/kube-api-access-th8tc major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ad05507-e242-4ff8-ae80-c16ff9ee68e2/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/8ad05507-e242-4ff8-ae80-c16ff9ee68e2/volumes/kubernetes.io~secret/metrics-tls major:0 minor:406 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/8c241720-7815-40fd-8d4a-1685a43b5893/volumes/kubernetes.io~projected/kube-api-access-l8qw4:{mountpoint:/var/lib/kubelet/pods/8c241720-7815-40fd-8d4a-1685a43b5893/volumes/kubernetes.io~projected/kube-api-access-l8qw4 major:0 minor:366 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe/volumes/kubernetes.io~projected/kube-api-access-tdlcw:{mountpoint:/var/lib/kubelet/pods/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe/volumes/kubernetes.io~projected/kube-api-access-tdlcw major:0 minor:142 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe/volumes/kubernetes.io~secret/webhook-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3/volumes/kubernetes.io~projected/kube-api-access-xtrvs:{mountpoint:/var/lib/kubelet/pods/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3/volumes/kubernetes.io~projected/kube-api-access-xtrvs major:0 minor:369 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3/volumes/kubernetes.io~secret/signing-key major:0 minor:368 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9717d467-af1a-4de0-88e0-c47ec4d12d6e/volumes/kubernetes.io~projected/kube-api-access-kbzcs:{mountpoint:/var/lib/kubelet/pods/9717d467-af1a-4de0-88e0-c47ec4d12d6e/volumes/kubernetes.io~projected/kube-api-access-kbzcs major:0 minor:545 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:522 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc/volumes/kubernetes.io~empty-dir/tmp major:0 minor:523 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc/volumes/kubernetes.io~projected/kube-api-access-2lj7z:{mountpoint:/var/lib/kubelet/pods/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc/volumes/kubernetes.io~projected/kube-api-access-2lj7z major:0 minor:531 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9b41258c-ac1d-4e00-ac5e-732d85441f12/volumes/kubernetes.io~projected/kube-api-access-7lmj2:{mountpoint:/var/lib/kubelet/pods/9b41258c-ac1d-4e00-ac5e-732d85441f12/volumes/kubernetes.io~projected/kube-api-access-7lmj2 major:0 minor:444 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9b41258c-ac1d-4e00-ac5e-732d85441f12/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/9b41258c-ac1d-4e00-ac5e-732d85441f12/volumes/kubernetes.io~secret/encryption-config major:0 minor:389 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9b41258c-ac1d-4e00-ac5e-732d85441f12/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/9b41258c-ac1d-4e00-ac5e-732d85441f12/volumes/kubernetes.io~secret/etcd-client major:0 minor:367 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9b41258c-ac1d-4e00-ac5e-732d85441f12/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/9b41258c-ac1d-4e00-ac5e-732d85441f12/volumes/kubernetes.io~secret/serving-cert major:0 minor:384 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9f1f60fa-d79d-4f31-b5bf-2ad333151537/volumes/kubernetes.io~projected/kube-api-access-hqlfx:{mountpoint:/var/lib/kubelet/pods/9f1f60fa-d79d-4f31-b5bf-2ad333151537/volumes/kubernetes.io~projected/kube-api-access-hqlfx major:0 minor:1141 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/9f1f60fa-d79d-4f31-b5bf-2ad333151537/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/9f1f60fa-d79d-4f31-b5bf-2ad333151537/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9f1f60fa-d79d-4f31-b5bf-2ad333151537/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/9f1f60fa-d79d-4f31-b5bf-2ad333151537/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1140 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9f1f60fa-d79d-4f31-b5bf-2ad333151537/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/9f1f60fa-d79d-4f31-b5bf-2ad333151537/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1e2340b-ebca-40de-b1e0-8133999cd860/volumes/kubernetes.io~projected/kube-api-access-6vpbp:{mountpoint:/var/lib/kubelet/pods/a1e2340b-ebca-40de-b1e0-8133999cd860/volumes/kubernetes.io~projected/kube-api-access-6vpbp major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1e2340b-ebca-40de-b1e0-8133999cd860/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a1e2340b-ebca-40de-b1e0-8133999cd860/volumes/kubernetes.io~secret/serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab926874-9722-4e65-9084-27b2f9915450/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/ab926874-9722-4e65-9084-27b2f9915450/volumes/kubernetes.io~projected/kube-api-access major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab926874-9722-4e65-9084-27b2f9915450/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ab926874-9722-4e65-9084-27b2f9915450/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/adb0dbbf-458d-46f5-b236-d4904e125418/volumes/kubernetes.io~projected/kube-api-access-52svc:{mountpoint:/var/lib/kubelet/pods/adb0dbbf-458d-46f5-b236-d4904e125418/volumes/kubernetes.io~projected/kube-api-access-52svc major:0 minor:1082 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/adb0dbbf-458d-46f5-b236-d4904e125418/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/adb0dbbf-458d-46f5-b236-d4904e125418/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1078 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/adb0dbbf-458d-46f5-b236-d4904e125418/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/adb0dbbf-458d-46f5-b236-d4904e125418/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1089 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aee40f88-83e4-45c8-8331-969943f9f9aa/volumes/kubernetes.io~projected/kube-api-access-th72r:{mountpoint:/var/lib/kubelet/pods/aee40f88-83e4-45c8-8331-969943f9f9aa/volumes/kubernetes.io~projected/kube-api-access-th72r major:0 minor:824 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aee40f88-83e4-45c8-8331-969943f9f9aa/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/aee40f88-83e4-45c8-8331-969943f9f9aa/volumes/kubernetes.io~secret/cert major:0 minor:816 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652/volumes/kubernetes.io~projected/ca-certs major:0 minor:559 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652/volumes/kubernetes.io~projected/kube-api-access-k59mb:{mountpoint:/var/lib/kubelet/pods/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652/volumes/kubernetes.io~projected/kube-api-access-k59mb major:0 minor:558 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868/volumes/kubernetes.io~projected/kube-api-access-md9dt:{mountpoint:/var/lib/kubelet/pods/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868/volumes/kubernetes.io~projected/kube-api-access-md9dt major:0 minor:802 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:792 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b38e7fcd-8f7a-4d4f-8702-7ef205261054/volumes/kubernetes.io~projected/kube-api-access-zp5gk:{mountpoint:/var/lib/kubelet/pods/b38e7fcd-8f7a-4d4f-8702-7ef205261054/volumes/kubernetes.io~projected/kube-api-access-zp5gk major:0 minor:837 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b38e7fcd-8f7a-4d4f-8702-7ef205261054/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/b38e7fcd-8f7a-4d4f-8702-7ef205261054/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:835 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b38e7fcd-8f7a-4d4f-8702-7ef205261054/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/b38e7fcd-8f7a-4d4f-8702-7ef205261054/volumes/kubernetes.io~secret/webhook-cert major:0 minor:836 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b648b6de-59a6-42da-84e2-77ea0264ae25/volumes/kubernetes.io~projected/kube-api-access-7n4d5:{mountpoint:/var/lib/kubelet/pods/b648b6de-59a6-42da-84e2-77ea0264ae25/volumes/kubernetes.io~projected/kube-api-access-7n4d5 major:0 minor:1012 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6d288e3-8e73-44d2-874d-64c6c98dd991/volumes/kubernetes.io~projected/kube-api-access-vdb9w:{mountpoint:/var/lib/kubelet/pods/b6d288e3-8e73-44d2-874d-64c6c98dd991/volumes/kubernetes.io~projected/kube-api-access-vdb9w major:0 minor:94 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b6d288e3-8e73-44d2-874d-64c6c98dd991/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/b6d288e3-8e73-44d2-874d-64c6c98dd991/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c/volumes/kubernetes.io~projected/kube-api-access-xmvnh:{mountpoint:/var/lib/kubelet/pods/b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c/volumes/kubernetes.io~projected/kube-api-access-xmvnh major:0 minor:764 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b8dd13a7-10e5-431b-8d30-405dcfea02f5/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/b8dd13a7-10e5-431b-8d30-405dcfea02f5/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b8dd13a7-10e5-431b-8d30-405dcfea02f5/volumes/kubernetes.io~projected/kube-api-access-7rhmv:{mountpoint:/var/lib/kubelet/pods/b8dd13a7-10e5-431b-8d30-405dcfea02f5/volumes/kubernetes.io~projected/kube-api-access-7rhmv major:0 minor:154 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b8dd13a7-10e5-431b-8d30-405dcfea02f5/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/b8dd13a7-10e5-431b-8d30-405dcfea02f5/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:128 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bce831df-c604-4608-a24e-b14d62c5287a/volumes/kubernetes.io~projected/kube-api-access-wfjj6:{mountpoint:/var/lib/kubelet/pods/bce831df-c604-4608-a24e-b14d62c5287a/volumes/kubernetes.io~projected/kube-api-access-wfjj6 major:0 minor:328 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be2da107-a419-423f-a657-44d681291f28/volumes/kubernetes.io~projected/kube-api-access-jfp84:{mountpoint:/var/lib/kubelet/pods/be2da107-a419-423f-a657-44d681291f28/volumes/kubernetes.io~projected/kube-api-access-jfp84 major:0 minor:717 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/be2da107-a419-423f-a657-44d681291f28/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/be2da107-a419-423f-a657-44d681291f28/volumes/kubernetes.io~secret/serving-cert major:0 minor:471 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d1b3859c-20a1-4a1c-8508-86ed843768f5/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/d1b3859c-20a1-4a1c-8508-86ed843768f5/volumes/kubernetes.io~projected/ca-certs major:0 minor:552 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d1b3859c-20a1-4a1c-8508-86ed843768f5/volumes/kubernetes.io~projected/kube-api-access-gw4m5:{mountpoint:/var/lib/kubelet/pods/d1b3859c-20a1-4a1c-8508-86ed843768f5/volumes/kubernetes.io~projected/kube-api-access-gw4m5 major:0 minor:557 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d1b3859c-20a1-4a1c-8508-86ed843768f5/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/d1b3859c-20a1-4a1c-8508-86ed843768f5/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:556 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d3e5b8c8-a100-4880-a0b9-9c3989d4e739/volumes/kubernetes.io~projected/kube-api-access-jrg6p:{mountpoint:/var/lib/kubelet/pods/d3e5b8c8-a100-4880-a0b9-9c3989d4e739/volumes/kubernetes.io~projected/kube-api-access-jrg6p major:0 minor:780 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d4ae1240-e04e-48e9-88df-9f1a53508da7/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/d4ae1240-e04e-48e9-88df-9f1a53508da7/volumes/kubernetes.io~projected/kube-api-access major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d4ae1240-e04e-48e9-88df-9f1a53508da7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d4ae1240-e04e-48e9-88df-9f1a53508da7/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27/volumes/kubernetes.io~projected/kube-api-access-ggsdx:{mountpoint:/var/lib/kubelet/pods/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27/volumes/kubernetes.io~projected/kube-api-access-ggsdx major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27/volumes/kubernetes.io~secret/srv-cert major:0 minor:563 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d92dddc8-a810-43f5-8beb-32d1c8ad8381/volumes/kubernetes.io~projected/kube-api-access-l22gw:{mountpoint:/var/lib/kubelet/pods/d92dddc8-a810-43f5-8beb-32d1c8ad8381/volumes/kubernetes.io~projected/kube-api-access-l22gw major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d94dc349-c5cb-4f12-8e48-867030af4981/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/d94dc349-c5cb-4f12-8e48-867030af4981/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d94dc349-c5cb-4f12-8e48-867030af4981/volumes/kubernetes.io~projected/kube-api-access-zjmcv:{mountpoint:/var/lib/kubelet/pods/d94dc349-c5cb-4f12-8e48-867030af4981/volumes/kubernetes.io~projected/kube-api-access-zjmcv major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d94dc349-c5cb-4f12-8e48-867030af4981/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/d94dc349-c5cb-4f12-8e48-867030af4981/volumes/kubernetes.io~secret/metrics-tls major:0 minor:373 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e22c7035-4b7a-48cb-9abb-db277b387842/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/e22c7035-4b7a-48cb-9abb-db277b387842/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:231 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e22c7035-4b7a-48cb-9abb-db277b387842/volumes/kubernetes.io~projected/kube-api-access-pmxc2:{mountpoint:/var/lib/kubelet/pods/e22c7035-4b7a-48cb-9abb-db277b387842/volumes/kubernetes.io~projected/kube-api-access-pmxc2 major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e22c7035-4b7a-48cb-9abb-db277b387842/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/e22c7035-4b7a-48cb-9abb-db277b387842/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:405 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e5fb0152-3efd-4000-bce3-fa90b75316ae/volumes/kubernetes.io~projected/kube-api-access-pkftr:{mountpoint:/var/lib/kubelet/pods/e5fb0152-3efd-4000-bce3-fa90b75316ae/volumes/kubernetes.io~projected/kube-api-access-pkftr major:0 minor:819 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e5fb0152-3efd-4000-bce3-fa90b75316ae/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/e5fb0152-3efd-4000-bce3-fa90b75316ae/volumes/kubernetes.io~secret/cert major:0 minor:814 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e5fb0152-3efd-4000-bce3-fa90b75316ae/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/e5fb0152-3efd-4000-bce3-fa90b75316ae/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:813 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e697746f-fb9e-4d10-ab61-33c68e62cc0d/volumes/kubernetes.io~projected/kube-api-access-vct98:{mountpoint:/var/lib/kubelet/pods/e697746f-fb9e-4d10-ab61-33c68e62cc0d/volumes/kubernetes.io~projected/kube-api-access-vct98 major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e697746f-fb9e-4d10-ab61-33c68e62cc0d/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/e697746f-fb9e-4d10-ab61-33c68e62cc0d/volumes/kubernetes.io~secret/etcd-client major:0 minor:222 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e697746f-fb9e-4d10-ab61-33c68e62cc0d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e697746f-fb9e-4d10-ab61-33c68e62cc0d/volumes/kubernetes.io~secret/serving-cert major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e720e1d0-5a6d-4b76-8b25-5963e24950f5/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/e720e1d0-5a6d-4b76-8b25-5963e24950f5/volumes/kubernetes.io~projected/kube-api-access major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e720e1d0-5a6d-4b76-8b25-5963e24950f5/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e720e1d0-5a6d-4b76-8b25-5963e24950f5/volumes/kubernetes.io~secret/serving-cert major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e94d098b-fbcc-4e85-b8ad-42f3a21c822c/volumes/kubernetes.io~projected/kube-api-access-bttzm:{mountpoint:/var/lib/kubelet/pods/e94d098b-fbcc-4e85-b8ad-42f3a21c822c/volumes/kubernetes.io~projected/kube-api-access-bttzm major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e94d098b-fbcc-4e85-b8ad-42f3a21c822c/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/e94d098b-fbcc-4e85-b8ad-42f3a21c822c/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:560 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ee4c1949-96b4-4444-9675-9df1d46f681e/volumes/kubernetes.io~projected/kube-api-access-x4wsx:{mountpoint:/var/lib/kubelet/pods/ee4c1949-96b4-4444-9675-9df1d46f681e/volumes/kubernetes.io~projected/kube-api-access-x4wsx major:0 minor:831 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ee4c1949-96b4-4444-9675-9df1d46f681e/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/ee4c1949-96b4-4444-9675-9df1d46f681e/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:748 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ee55b576-6b8d-4217-b5a7-93b023a1e885/volumes/kubernetes.io~projected/kube-api-access-j5lf8:{mountpoint:/var/lib/kubelet/pods/ee55b576-6b8d-4217-b5a7-93b023a1e885/volumes/kubernetes.io~projected/kube-api-access-j5lf8 major:0 minor:992 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ee55b576-6b8d-4217-b5a7-93b023a1e885/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/ee55b576-6b8d-4217-b5a7-93b023a1e885/volumes/kubernetes.io~secret/proxy-tls major:0 minor:988 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f3a2cda2-b70f-4128-a1be-48503f5aad6d/volumes/kubernetes.io~projected/kube-api-access-tcvfv:{mountpoint:/var/lib/kubelet/pods/f3a2cda2-b70f-4128-a1be-48503f5aad6d/volumes/kubernetes.io~projected/kube-api-access-tcvfv major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f3a2cda2-b70f-4128-a1be-48503f5aad6d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/f3a2cda2-b70f-4128-a1be-48503f5aad6d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f5e09875-4445-4584-94f0-243148307bb0/volumes/kubernetes.io~projected/kube-api-access-clsd9:{mountpoint:/var/lib/kubelet/pods/f5e09875-4445-4584-94f0-243148307bb0/volumes/kubernetes.io~projected/kube-api-access-clsd9 major:0 minor:795 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f5e09875-4445-4584-94f0-243148307bb0/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f5e09875-4445-4584-94f0-243148307bb0/volumes/kubernetes.io~secret/serving-cert major:0 minor:805 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fb529297-b3de-4167-a91e-0a63725b3b0f/volumes/kubernetes.io~projected/kube-api-access-tmzf4:{mountpoint:/var/lib/kubelet/pods/fb529297-b3de-4167-a91e-0a63725b3b0f/volumes/kubernetes.io~projected/kube-api-access-tmzf4 major:0 minor:585 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/fb529297-b3de-4167-a91e-0a63725b3b0f/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/fb529297-b3de-4167-a91e-0a63725b3b0f/volumes/kubernetes.io~secret/encryption-config major:0 minor:487 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fb529297-b3de-4167-a91e-0a63725b3b0f/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/fb529297-b3de-4167-a91e-0a63725b3b0f/volumes/kubernetes.io~secret/etcd-client major:0 minor:488 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fb529297-b3de-4167-a91e-0a63725b3b0f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/fb529297-b3de-4167-a91e-0a63725b3b0f/volumes/kubernetes.io~secret/serving-cert major:0 minor:486 fsType:tmpfs blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/a8504c483985228e2d897d5ece91eddd9fe04544f33ed9f4b9ac4b0460f0572a/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-1020:{mountpoint:/var/lib/containers/storage/overlay/e32f13cc4c1937637f3049783c446c75542be2632301a4736c766c3af3de1d44/merged major:0 minor:1020 fsType:overlay blockSize:0} overlay_0-1022:{mountpoint:/var/lib/containers/storage/overlay/09b07db6574a569c39562829e9e871f7f07b6c9ef4f124f7313c777c604f4ffe/merged major:0 minor:1022 fsType:overlay blockSize:0} overlay_0-1024:{mountpoint:/var/lib/containers/storage/overlay/ffa5ec6dd030bfd5a8f77909fd26c7079f13459b429f87836ec165de712d68cc/merged major:0 minor:1024 fsType:overlay blockSize:0} overlay_0-1026:{mountpoint:/var/lib/containers/storage/overlay/35e829a521698e905256df26b16c492093671331d74a30cbdb8a0606a6003602/merged major:0 minor:1026 fsType:overlay blockSize:0} overlay_0-1034:{mountpoint:/var/lib/containers/storage/overlay/c8c5ec0f6f0f171c6633149fe88c8ccc1db0eee30f84757af5346c8b99adabcf/merged major:0 minor:1034 fsType:overlay blockSize:0} 
overlay_0-1053:{mountpoint:/var/lib/containers/storage/overlay/a284e252fa2c42599e68518dc6956e5bbd13dbb08e62e1f2097a6ed8b6e60928/merged major:0 minor:1053 fsType:overlay blockSize:0} overlay_0-1055:{mountpoint:/var/lib/containers/storage/overlay/2ea34e248d50384bf72201c8394f15edb2a13ed482d5c574f940fb8b615dae22/merged major:0 minor:1055 fsType:overlay blockSize:0} overlay_0-1057:{mountpoint:/var/lib/containers/storage/overlay/edb92de624650ff4571375c3dc0e47459132af6061d902c6507ab9e02556d259/merged major:0 minor:1057 fsType:overlay blockSize:0} overlay_0-1062:{mountpoint:/var/lib/containers/storage/overlay/7415aabf38fa10fa6b09e9f68b1c3823a8da47f4c89b5fb9b6a93806d440074a/merged major:0 minor:1062 fsType:overlay blockSize:0} overlay_0-1064:{mountpoint:/var/lib/containers/storage/overlay/9c647762c9efdc951f851079891c85d52455d8c9bb18d9dcda19662ab9117686/merged major:0 minor:1064 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/875fdd2ea2f79baee53a4979033c408622905022a1e32adcafb91cb52b6651a6/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-1085:{mountpoint:/var/lib/containers/storage/overlay/0db47ddb925fdacfb664cf8ded4982d7095069e7cb0adaad834055fdc028c50a/merged major:0 minor:1085 fsType:overlay blockSize:0} overlay_0-1087:{mountpoint:/var/lib/containers/storage/overlay/a2242080ebbcdab2ae04bfff5da5e4b67359bd24fd7424b09d412da5fe1474ea/merged major:0 minor:1087 fsType:overlay blockSize:0} overlay_0-1090:{mountpoint:/var/lib/containers/storage/overlay/999a48dd787ecdd0edffccfb60eb3ceb28beb07c9151acdfe619e6ec15ac6869/merged major:0 minor:1090 fsType:overlay blockSize:0} overlay_0-1092:{mountpoint:/var/lib/containers/storage/overlay/a7f788a5111070c9956d2e9c728cc1d36ea8c61852eccf5aa07869cd70121eb9/merged major:0 minor:1092 fsType:overlay blockSize:0} overlay_0-1103:{mountpoint:/var/lib/containers/storage/overlay/d6a508da059f140ac96516576b37e323fee1cc76d86f874edf274b78c06bd80e/merged major:0 minor:1103 fsType:overlay blockSize:0} 
overlay_0-1105:{mountpoint:/var/lib/containers/storage/overlay/43dd5ea29af0ebe01c42d9c49a489acde9573a576dc9ef1fa450ef32dde94544/merged major:0 minor:1105 fsType:overlay blockSize:0} overlay_0-1107:{mountpoint:/var/lib/containers/storage/overlay/d81c2f13bcd9495329db2d105cfc5a909fe0bb72ef8bee6fab7ca0d731ffb675/merged major:0 minor:1107 fsType:overlay blockSize:0} overlay_0-1113:{mountpoint:/var/lib/containers/storage/overlay/54b5f1c5c222e84783737cf9a7e07f86724ae28f6569eb07e9ebb7b593a0d835/merged major:0 minor:1113 fsType:overlay blockSize:0} overlay_0-1118:{mountpoint:/var/lib/containers/storage/overlay/f255ed48392578194b0528d7077cc9021f97020163ce678f12b5566b5d389a2f/merged major:0 minor:1118 fsType:overlay blockSize:0} overlay_0-1120:{mountpoint:/var/lib/containers/storage/overlay/681dde3fa5423d9bca48f71f92b66670e5e1f7e0b57c2d299ed605a3fd0a4e7d/merged major:0 minor:1120 fsType:overlay blockSize:0} overlay_0-1133:{mountpoint:/var/lib/containers/storage/overlay/b9877965b2fd92fbe6753663513a7685a9144ef9adf1940ff03130fe8e1601ca/merged major:0 minor:1133 fsType:overlay blockSize:0} overlay_0-1146:{mountpoint:/var/lib/containers/storage/overlay/e085bbb006a11b45fe77210e46581dcf183888cc4079dd9f9a3b4c74f25db8b4/merged major:0 minor:1146 fsType:overlay blockSize:0} overlay_0-1148:{mountpoint:/var/lib/containers/storage/overlay/30395ffec7439b0eabdbcbb1b3b7c6854ff0ae062a5342758c0acc413a5366c5/merged major:0 minor:1148 fsType:overlay blockSize:0} overlay_0-1155:{mountpoint:/var/lib/containers/storage/overlay/0e1f92543569ac560812316669427ee862d3c8db88bb7314e72cee2cfbf077c1/merged major:0 minor:1155 fsType:overlay blockSize:0} overlay_0-1157:{mountpoint:/var/lib/containers/storage/overlay/9855a80eccd26a45436339442318be25d517e255d9958c1bc777d69697130944/merged major:0 minor:1157 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/82ba9c4fd49a59d0cdd8eba05d118f1d1403ef2b808d2d424c75960a19da6d1f/merged major:0 minor:116 fsType:overlay blockSize:0} 
overlay_0-1162:{mountpoint:/var/lib/containers/storage/overlay/203d3e25466ee0b258d76727ebc39a9693baf465b66a772967bfbcc4d0eec18d/merged major:0 minor:1162 fsType:overlay blockSize:0} overlay_0-1166:{mountpoint:/var/lib/containers/storage/overlay/66c6b519e169d879dfb5f385da90c5743da099a09f8425e3bc4a33e8da14798c/merged major:0 minor:1166 fsType:overlay blockSize:0} overlay_0-1167:{mountpoint:/var/lib/containers/storage/overlay/dd9674bcd333c5d704ee01a24ca59586cc6ea6aa7774c646cde437f26471b8c9/merged major:0 minor:1167 fsType:overlay blockSize:0} overlay_0-1184:{mountpoint:/var/lib/containers/storage/overlay/ea956e020de660fc86df0d4974d4cf2cad9ccf8c662e990648df258dcbf02b34/merged major:0 minor:1184 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/dfc1e70ca13e8da6d7c2b972d46299d953bc4ff10b781bd2b729a96cd65af0ed/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-126:{mountpoint:/var/lib/containers/storage/overlay/ddf1ba66b634ea9fc324503b8f9ab91efddfa13e156b03f5c42e50a647deca92/merged major:0 minor:126 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/b9b8b4f08169d5b3e74369964977fbcea297f27f297d1bdf787d9fc3146ced77/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/39bae134f9616b4a4f81b1af4a5119919edc7e715d523ae1f61fc329467783f9/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/d1d38b2b45dd2db2ac3e33cd0eef40fb393159cf5dc601701fe8c0635c01a573/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/0bb0cd2e98961ae08c6588cdb473e9d54bd91b8459125a5c0f6c588ea7219762/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/22d3ac8e990991d11f31184e3294feab8367c6ed93e97de2b64ee7eb2d8aba64/merged major:0 minor:152 fsType:overlay blockSize:0} 
overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/81001d048c01b87a46d3c4ff7132f9092d9c03fda358d0748001b5d575357730/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-157:{mountpoint:/var/lib/containers/storage/overlay/535d577e49ac3831c30dbca45205ad8388726a7e29374234ffbddd36d1576f38/merged major:0 minor:157 fsType:overlay blockSize:0} overlay_0-161:{mountpoint:/var/lib/containers/storage/overlay/e9e321af552b6e609f75f6138a0f054a4f4070f673fecf67ec1af4a7791c8a97/merged major:0 minor:161 fsType:overlay blockSize:0} overlay_0-163:{mountpoint:/var/lib/containers/storage/overlay/1522914cd1e2dd357a5f5ce3092cd08ead429ab4652bd609ddd2f79909308bd7/merged major:0 minor:163 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/8e74b8f53b88d6e73a7ba6a391e5f46de0f92f2ca88ab03ef9b9627408d025ce/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-171:{mountpoint:/var/lib/containers/storage/overlay/0797b6a8170cb69156ce6068fc8347a7ed9a5263892b33bb61988c45651da7de/merged major:0 minor:171 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/0900f0696f41ffe2827a2fa162692f814c6b1b229f0817b2765b2e2a2f4612e1/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/c253a661aaf79ea0fdd6321517c52ae3bd2133a1c8982d681597541ef4aa6d11/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/090160045baa9c9ab628db4398cc9a9d9b79b81201df27d2d1b4d755428b1bcd/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/25f6090a7ec28bbb6d6587e8fed46798c9b9f27cf65595ce2170b3290e27f323/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/726805a19ef7e927132b5a62815aac2f7bc824232ed0ec6dcb120ca010c52ad3/merged major:0 minor:194 fsType:overlay blockSize:0} 
overlay_0-199:{mountpoint:/var/lib/containers/storage/overlay/484e02bc1b42d53fe3dcd90c7f0bf8f16ca57e67db2395357ad7b36bfc144a57/merged major:0 minor:199 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/2ce43df53d040b0aa42fa1fd2348ee54c05aed1a36b755db3145a6a57009b1f3/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-275:{mountpoint:/var/lib/containers/storage/overlay/87c976a446109b642a55333ab6a4a46c2552db7c164023ca94467c07edb4cbd2/merged major:0 minor:275 fsType:overlay blockSize:0} overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/0e9cb6e5334b6a2fd45c68e74d83ddc6a91fd792768955088a2fc65e191228c4/merged major:0 minor:277 fsType:overlay blockSize:0} overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/98d3f89d4db84dbfd4968faf1ef60db906bc92832526a46fb426ca559eef0926/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-281:{mountpoint:/var/lib/containers/storage/overlay/a1055561ff2c9b975b1b4fd1470252ea957ef5e3b10629f33cc43ce1c5bddd27/merged major:0 minor:281 fsType:overlay blockSize:0} overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/ac3c93217cba6c7f28ba74ed339343b92a756e831ea07e619c1b00e4666385b9/merged major:0 minor:283 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/1687b04533d7dc73dd19ac3dde58b4a79320c7ed4e6f89cd6c99f2279eefe795/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/73f33ff0b86328d126ba6882afab65e4840169f07d24a9e88a6862e58049a2db/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/ad71dd1a19f03919642815b1d29ae232d1c1ce8d0cca0c9a5e1a8ae7722119a7/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-291:{mountpoint:/var/lib/containers/storage/overlay/f0d884186b58a64b8103be46174399cafa61a06673ca35fd83c2b8cd39dd141e/merged major:0 minor:291 fsType:overlay blockSize:0} 
overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/608d8bf93339fa7cdab3c40878d8a317c58d41650130cf63f956eeda6435d6e8/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/9141e82f378b50bcf266ddb61b58593898d3282ae9e4a34bb49d6c32424b253d/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/4247a682b6f1fcffc1ef63b27ed59f9dad47559afd4aafb068bc59a3aeefdd9f/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/c5056014724f1870a8faf1d1aebe97f63adac83f4d6011411616718c822a3bd5/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-304:{mountpoint:/var/lib/containers/storage/overlay/9da12ce5acbc99d4fef59e99c8be0b3712ce74142af896805ec82271d5257282/merged major:0 minor:304 fsType:overlay blockSize:0} overlay_0-324:{mountpoint:/var/lib/containers/storage/overlay/3825cf65b8692459fb632faac751798e9bd2f3e3806f41d76e4512eb6621875e/merged major:0 minor:324 fsType:overlay blockSize:0} overlay_0-325:{mountpoint:/var/lib/containers/storage/overlay/8fa3212daa3357a0681e17219b41175781dbd9e89c86b1f2250c6ffaffd25645/merged major:0 minor:325 fsType:overlay blockSize:0} overlay_0-335:{mountpoint:/var/lib/containers/storage/overlay/67c169712e0af0332225908510bcf8b2de51959f3877264306133d7dca95ba7e/merged major:0 minor:335 fsType:overlay blockSize:0} overlay_0-340:{mountpoint:/var/lib/containers/storage/overlay/c84cc40acdf060cb176bd1731fdd8eefb99670e800aedba273d6f8f37e479007/merged major:0 minor:340 fsType:overlay blockSize:0} overlay_0-341:{mountpoint:/var/lib/containers/storage/overlay/4ab5a6104b0ba1330fe9bcbc9d978ff057efd7c71042123a58640e48342c248e/merged major:0 minor:341 fsType:overlay blockSize:0} overlay_0-346:{mountpoint:/var/lib/containers/storage/overlay/8a49be62bb79778818e6bd2f3b8f06cfd4261ca14903bf58046d111c4541d494/merged major:0 minor:346 fsType:overlay blockSize:0} 
overlay_0-359:{mountpoint:/var/lib/containers/storage/overlay/5028638cded06b0fc3afb6a17e8bd5e2fa5f9177d043c18b176fa7d06ca73f36/merged major:0 minor:359 fsType:overlay blockSize:0} overlay_0-362:{mountpoint:/var/lib/containers/storage/overlay/7dca36cb1d3c7f4983516ca2d00af234e0f52f3f5d95fa6ca3bae7653c60b5cc/merged major:0 minor:362 fsType:overlay blockSize:0} overlay_0-375:{mountpoint:/var/lib/containers/storage/overlay/a55a5e055f486da89657767b34235c6566b20bfc87064e35958801b833037bb6/merged major:0 minor:375 fsType:overlay blockSize:0} overlay_0-378:{mountpoint:/var/lib/containers/storage/overlay/7b7998f35063d2fea3e72d53d75284bfa40224494ed744ba082c44bd628ed541/merged major:0 minor:378 fsType:overlay blockSize:0} overlay_0-379:{mountpoint:/var/lib/containers/storage/overlay/1661e2a912b5c98211308fc47ca55ade12b24caf9e9065993afc0c3673fc56ec/merged major:0 minor:379 fsType:overlay blockSize:0} overlay_0-385:{mountpoint:/var/lib/containers/storage/overlay/a8a1092d9dfadfccdb81b7c56825d8e6faca928d9d9f11528371b36da73cd0ac/merged major:0 minor:385 fsType:overlay blockSize:0} overlay_0-387:{mountpoint:/var/lib/containers/storage/overlay/7dffd430e81facda1ee86ea6bebd7902f8d15a384ed9be639d2798e27e5dd259/merged major:0 minor:387 fsType:overlay blockSize:0} overlay_0-391:{mountpoint:/var/lib/containers/storage/overlay/30be56ba5f031945d8f5de3a3d4af7f1b778c9a55b436e424f8a2812f145c122/merged major:0 minor:391 fsType:overlay blockSize:0} overlay_0-395:{mountpoint:/var/lib/containers/storage/overlay/309a4640b20eed9eb619fed21b6795db44912f890bce2969d2ef5cfd279de731/merged major:0 minor:395 fsType:overlay blockSize:0} overlay_0-396:{mountpoint:/var/lib/containers/storage/overlay/c5f62d6e1c38857f4f7f2bc73eed9dbeb5aad7c10c4f5f1aa7971dc12efdab46/merged major:0 minor:396 fsType:overlay blockSize:0} overlay_0-398:{mountpoint:/var/lib/containers/storage/overlay/b0b2b58b1577f7359d3e482e2e0a86d3d16c633ac5474a31062265b35efd9d58/merged major:0 minor:398 fsType:overlay blockSize:0} 
overlay_0-41:{mountpoint:/var/lib/containers/storage/overlay/6ac0d817078d77890fd5d21d84f1c162fe37ccfe5937f1acd4d799837b5d7791/merged major:0 minor:41 fsType:overlay blockSize:0} overlay_0-414:{mountpoint:/var/lib/containers/storage/overlay/80e8a2bdbdd1c7c6a739173ddd1f0f98f0a84f69ef682b65507c36cf4efe419b/merged major:0 minor:414 fsType:overlay blockSize:0} overlay_0-428:{mountpoint:/var/lib/containers/storage/overlay/3d55514a7a41fac4de91ef0296db38c66c1a0f005f8cc69a78797835755d559b/merged major:0 minor:428 fsType:overlay blockSize:0} overlay_0-431:{mountpoint:/var/lib/containers/storage/overlay/abaa8e6c553966540dc5eab143a6bc36a5bb79e439888f3c5e6d8573a8cfd7bd/merged major:0 minor:431 fsType:overlay blockSize:0} overlay_0-434:{mountpoint:/var/lib/containers/storage/overlay/e29312fbc800e25b76a7ada8328ccbbce05515fd5dad6c48076b98e0cfba5f19/merged major:0 minor:434 fsType:overlay blockSize:0} overlay_0-436:{mountpoint:/var/lib/containers/storage/overlay/c8c7c59815e8a3284e79ed8dff95efd0f7a0d0127393041ded9f5488f112ddc3/merged major:0 minor:436 fsType:overlay blockSize:0} overlay_0-438:{mountpoint:/var/lib/containers/storage/overlay/afa204e84a24812ea4b29eded932201c0b7ce1883796bf6ce2f39947440b1cbc/merged major:0 minor:438 fsType:overlay blockSize:0} overlay_0-440:{mountpoint:/var/lib/containers/storage/overlay/eed4e14155be243700df242a7303932494a2c6b702084abffb0663d95741a77d/merged major:0 minor:440 fsType:overlay blockSize:0} overlay_0-442:{mountpoint:/var/lib/containers/storage/overlay/fbfcde58e0e92da4d79c6b6ebcdb40fd671616edeb77f2041f8ec3dca882fbe3/merged major:0 minor:442 fsType:overlay blockSize:0} overlay_0-447:{mountpoint:/var/lib/containers/storage/overlay/72251d50fb8e7c237c6b0ca14cc4192bb185bddfdedb25bc9d5ee88127799a5d/merged major:0 minor:447 fsType:overlay blockSize:0} overlay_0-45:{mountpoint:/var/lib/containers/storage/overlay/2585ee928ee4266966aacf3caf3074426270304b6fd7f9b5590f6fd2b10acdc7/merged major:0 minor:45 fsType:overlay blockSize:0} 
overlay_0-454:{mountpoint:/var/lib/containers/storage/overlay/2edbf4c238f120413fffee74541e5564eb22033c025c60f8903aa6d0803b5488/merged major:0 minor:454 fsType:overlay blockSize:0} overlay_0-457:{mountpoint:/var/lib/containers/storage/overlay/c36885d5208c6a9fec180827e222e4fa777774758e8957a3c64b881d5f95fb91/merged major:0 minor:457 fsType:overlay blockSize:0} overlay_0-462:{mountpoint:/var/lib/containers/storage/overlay/3fff4929a28e92c5fb03c2f8ee578dc3cce25ee8bb6e55fd99dd207b0fb0d062/merged major:0 minor:462 fsType:overlay blockSize:0} overlay_0-463:{mountpoint:/var/lib/containers/storage/overlay/8c2b29ec1564cfce6613bf2267d71cb87ad2cd664c1e920d323cae903537cbeb/merged major:0 minor:463 fsType:overlay blockSize:0} overlay_0-467:{mountpoint:/var/lib/containers/storage/overlay/bd586ce4749348aa5ebfa990ba7e11be5c4c3f6f0682a5a3c184b456a7cdbed1/merged major:0 minor:467 fsType:overlay blockSize:0} overlay_0-472:{mountpoint:/var/lib/containers/storage/overlay/06dd14689b093cc91914810234188cdebbed0ed5bdd49c0e681631dac11725ee/merged major:0 minor:472 fsType:overlay blockSize:0} overlay_0-483:{mountpoint:/var/lib/containers/storage/overlay/8637b40ad1a6fc35ad49b6b72a76b44ed732eb52d3c33bcf4eb1849b1e5c3569/merged major:0 minor:483 fsType:overlay blockSize:0} overlay_0-505:{mountpoint:/var/lib/containers/storage/overlay/d43bf60ff79b1421a56e01c516f2affc5202d271e400d07ece51421fd12bdff3/merged major:0 minor:505 fsType:overlay blockSize:0} overlay_0-513:{mountpoint:/var/lib/containers/storage/overlay/15fd64ddadb46ac804245fc5f1ec1d537e837f0ae998a696d6023545c8fdc845/merged major:0 minor:513 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/8c773cec56df11684d32c945fff551ec6e9c46d41d18a9d253bd1059e3ff9e95/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-520:{mountpoint:/var/lib/containers/storage/overlay/7cd884cc30cedd71f24a3c617c03c07b4634bce3319a1a9d65b79235ad9c776f/merged major:0 minor:520 fsType:overlay blockSize:0} 
overlay_0-530:{mountpoint:/var/lib/containers/storage/overlay/aba398f8b6e6a807734dc3729f99fb9dbc7767a743beb0e5216a8fe00f5c6485/merged major:0 minor:530 fsType:overlay blockSize:0} overlay_0-532:{mountpoint:/var/lib/containers/storage/overlay/61e68f754d8a44ff2dca5368f083dce8fe3ec0e357977b5f79a92b88bcf2a6ad/merged major:0 minor:532 fsType:overlay blockSize:0} overlay_0-534:{mountpoint:/var/lib/containers/storage/overlay/b4c153e8ef090952f83f84f2bb1edc94f146827a140c39902f38e1957f310ecb/merged major:0 minor:534 fsType:overlay blockSize:0} overlay_0-54:{mountpoint:/var/lib/containers/storage/overlay/3dd178882bdbd62b795b2a62b421c51724e6d7561f4fabbdd5db02bdc5514cb8/merged major:0 minor:54 fsType:overlay blockSize:0} overlay_0-543:{mountpoint:/var/lib/containers/storage/overlay/8d78f657d55d87d8ed6242262de292d973a980fa7b344bec6e5c1e91cd78ccb3/merged major:0 minor:543 fsType:overlay blockSize:0} overlay_0-548:{mountpoint:/var/lib/containers/storage/overlay/26d2ee3ec6d6cabf195bf36e3cecbc12e488c36fec1a6f99f08290704b84d088/merged major:0 minor:548 fsType:overlay blockSize:0} overlay_0-550:{mountpoint:/var/lib/containers/storage/overlay/4914eb55615fbcd6c10be158d7c46e60be67fc9b53b3848c448b682d9fb929d4/merged major:0 minor:550 fsType:overlay blockSize:0} overlay_0-572:{mountpoint:/var/lib/containers/storage/overlay/c903c9abdac73533437b2e85adbc1853ce44e785cfd91b3e4dbbd5c43fce898f/merged major:0 minor:572 fsType:overlay blockSize:0} overlay_0-589:{mountpoint:/var/lib/containers/storage/overlay/8bab3047d95f4bf40db984fd8d9bcc608ac70dfe0aefd2d09505425edeef618b/merged major:0 minor:589 fsType:overlay blockSize:0} overlay_0-592:{mountpoint:/var/lib/containers/storage/overlay/34d380410e71f3e6ae9cfcc1f8faccc869bc30e3279966e5fe71b75e3d0488d1/merged major:0 minor:592 fsType:overlay blockSize:0} overlay_0-594:{mountpoint:/var/lib/containers/storage/overlay/0d7243ce4a1a74a7158d61792690e26b1318cd86a6bc48c1778131bf95dccda7/merged major:0 minor:594 fsType:overlay blockSize:0} 
overlay_0-599:{mountpoint:/var/lib/containers/storage/overlay/9a0f80a3bd93399131badd60dd27f47a1af8865330b8dccdb32e0a2d4275657b/merged major:0 minor:599 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/cceedef206d705f1ae4fbebb0c7a246ea57f6c1ac2133988b90d9dc31f5255e5/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-600:{mountpoint:/var/lib/containers/storage/overlay/2dd86d80ec743f784f380d3b41df21350b33af161dbf0e397e4682a01f10a01f/merged major:0 minor:600 fsType:overlay blockSize:0} overlay_0-603:{mountpoint:/var/lib/containers/storage/overlay/58f67cc36c08d16deaec57e8dad8c0043e3fc2ced1ffc1e64cb996d41e2de4a4/merged major:0 minor:603 fsType:overlay blockSize:0} overlay_0-605:{mountpoint:/var/lib/containers/storage/overlay/649effbbf06bfc2804585de1578c9950ff7bb443f5620ab10e6bb032b7a1bf51/merged major:0 minor:605 fsType:overlay blockSize:0} overlay_0-610:{mountpoint:/var/lib/containers/storage/overlay/4d4e86d6b0a13a0814426db27a606d2507ff60ca7566ad580471dc845a03c27d/merged major:0 minor:610 fsType:overlay blockSize:0} overlay_0-618:{mountpoint:/var/lib/containers/storage/overlay/68fbacd6367c79cdf33433754092042c9ba4e2ac568090ed038fd8c37a35a21c/merged major:0 minor:618 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/7e8443837613d5df31993a60c212d287d7c17ec00f04f3836fce05746699617c/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-622:{mountpoint:/var/lib/containers/storage/overlay/318f2086a53bfd946262896f5ab8bfd99b5c94ee207daf539f2b520e02f0c6ee/merged major:0 minor:622 fsType:overlay blockSize:0} overlay_0-625:{mountpoint:/var/lib/containers/storage/overlay/0981395f3e15ddc5f13a5d795314490109d11c4f9c9cb6c0f01301a573b2eb35/merged major:0 minor:625 fsType:overlay blockSize:0} overlay_0-627:{mountpoint:/var/lib/containers/storage/overlay/d2e903897da6e320a6bb1b6593701ea710b6559af9c6084aa3eeb1aa62ed7361/merged major:0 minor:627 fsType:overlay blockSize:0} 
overlay_0-629:{mountpoint:/var/lib/containers/storage/overlay/00487bc8527992922f47fc371a4cc9de0f5ccf562012b0e818533975dea25a9c/merged major:0 minor:629 fsType:overlay blockSize:0} overlay_0-631:{mountpoint:/var/lib/containers/storage/overlay/adb6d4ad37006df16273d23204c2f7ae22641c1e928d68eb7ce6d1d8963193f6/merged major:0 minor:631 fsType:overlay blockSize:0} overlay_0-633:{mountpoint:/var/lib/containers/storage/overlay/923b920a8ec7b2151b5dba86d1263c7893d15ba21938d45c2b2b2ece0a843b82/merged major:0 minor:633 fsType:overlay blockSize:0} overlay_0-637:{mountpoint:/var/lib/containers/storage/overlay/139e47481c1e5b2ee786414a8d1bf5f0396059fd19921653f08f19f1313f1845/merged major:0 minor:637 fsType:overlay blockSize:0} overlay_0-639:{mountpoint:/var/lib/containers/storage/overlay/b14dd2f2903c9c4322a79c0962de5a9ff2ef123bac937e7e14d9abb50890fe0d/merged major:0 minor:639 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/01d8ddaafbfe8ab31b73cdc1e8f4cc1260c262c81acd0ab9871a57224aceb67f/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-641:{mountpoint:/var/lib/containers/storage/overlay/3d038f05b6dfc8989955550ca7c3ea163e3d42482da6e02587282548db2da6c1/merged major:0 minor:641 fsType:overlay blockSize:0} overlay_0-642:{mountpoint:/var/lib/containers/storage/overlay/b6d084c58813761ad8a20de729364e9a171a01f7c7584cd7e9ed5e15fc78cc27/merged major:0 minor:642 fsType:overlay blockSize:0} overlay_0-657:{mountpoint:/var/lib/containers/storage/overlay/6397be00763e2274e64748520157eb9db0cdbea17ed89daa6c2bf012f0f8fbbd/merged major:0 minor:657 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/934b3266bb32160b91282943d0c5b51b7a2219c19b9394a0eef11c89e9c3591e/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-67:{mountpoint:/var/lib/containers/storage/overlay/90fa6e22e38333b8c72adcad96f457f797fc363de7341e8472c56dc3ab1ccff2/merged major:0 minor:67 fsType:overlay blockSize:0} 
overlay_0-682:{mountpoint:/var/lib/containers/storage/overlay/d22ed6caaeff03023b2b316eca99a71051d1a0f81f82bb1782906a567c77fda5/merged major:0 minor:682 fsType:overlay blockSize:0} overlay_0-692:{mountpoint:/var/lib/containers/storage/overlay/9216a5a3f89cf898046c9e295231266794b913e50422bf00af80ce7d12669b66/merged major:0 minor:692 fsType:overlay blockSize:0} overlay_0-699:{mountpoint:/var/lib/containers/storage/overlay/e1bf33013302e63463d0d9d7f3be9a17d98918752ddb952137927b2c946a9a92/merged major:0 minor:699 fsType:overlay blockSize:0} overlay_0-70:{mountpoint:/var/lib/containers/storage/overlay/69debf5a7a6490e832c7fbd4fa53d49938669f73d3709c194287df26d353874a/merged major:0 minor:70 fsType:overlay blockSize:0} overlay_0-702:{mountpoint:/var/lib/containers/storage/overlay/874a52028ae5ed8990cb6afbd31c6cb6ef0599a61176738a31095b4106239b2c/merged major:0 minor:702 fsType:overlay blockSize:0} overlay_0-704:{mountpoint:/var/lib/containers/storage/overlay/2b3f6775af6c032817c421f9503df73e6af85152b8ad49a7a349d2e17fb19f49/merged major:0 minor:704 fsType:overlay blockSize:0} overlay_0-713:{mountpoint:/var/lib/containers/storage/overlay/cb1e7ee9aef5f860e0a5b83734c7bbea684815c68bb0d4c8f134cd5d6cf2fc24/merged major:0 minor:713 fsType:overlay blockSize:0} overlay_0-72:{mountpoint:/var/lib/containers/storage/overlay/802b3c9dfa8bc544736c47a0d0c83641a66f5d9ffc6c1d444cde26b62be63b9f/merged major:0 minor:72 fsType:overlay blockSize:0} overlay_0-731:{mountpoint:/var/lib/containers/storage/overlay/640ae53bec25feb1bf30bf14ed4590553884e5453ee42df87193b9c540355f1a/merged major:0 minor:731 fsType:overlay blockSize:0} overlay_0-735:{mountpoint:/var/lib/containers/storage/overlay/3900814baa01f54d1c23e07ea435540a2e0296c3db53c0057a03554d7698701b/merged major:0 minor:735 fsType:overlay blockSize:0} overlay_0-738:{mountpoint:/var/lib/containers/storage/overlay/79db0ab897d6cdf19f18d158ca3c290ab69b8b21b5bcf19243d2adf0f8fce6da/merged major:0 minor:738 fsType:overlay blockSize:0} 
overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/ea3d096dd657713a236570666669ac9c02de76ea8915783421256c242fd8938c/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-744:{mountpoint:/var/lib/containers/storage/overlay/d37dc5f761dee41e0e6b2a74c13e906cfbdb9148bf93651cb33c3059f5c15292/merged major:0 minor:744 fsType:overlay blockSize:0} overlay_0-749:{mountpoint:/var/lib/containers/storage/overlay/48c106011724e4250edfee4df17c8c4050e4fe5ca65323df9e60457db98dd26f/merged major:0 minor:749 fsType:overlay blockSize:0} overlay_0-75:{mountpoint:/var/lib/containers/storage/overlay/9bb28fd12a5f7f71c577fb3a8c536fc5fd01d1a5f89f7afc7f1c60e10dc1cd40/merged major:0 minor:75 fsType:overlay blockSize:0} overlay_0-750:{mountpoint:/var/lib/containers/storage/overlay/f458ca573d5c2e34b0961d3414d5a19e965b45f0d117c4a663091951da835ad1/merged major:0 minor:750 fsType:overlay blockSize:0} overlay_0-752:{mountpoint:/var/lib/containers/storage/overlay/9fa5d2e81da848978c1c11810272325067d8ce53e6c398384a9e696d7c8bba75/merged major:0 minor:752 fsType:overlay blockSize:0} overlay_0-757:{mountpoint:/var/lib/containers/storage/overlay/ff63aae41f7e359198241b1a1c1e0bbce635616f5bf6bc53da2693f6cd4bb4f8/merged major:0 minor:757 fsType:overlay blockSize:0} overlay_0-758:{mountpoint:/var/lib/containers/storage/overlay/c7333938d98bebd0c1fd32d3ddf9b1369c808a8a1b983dbd8d272d6498220d61/merged major:0 minor:758 fsType:overlay blockSize:0} overlay_0-762:{mountpoint:/var/lib/containers/storage/overlay/be96528b41cb7b1d650be14e7de23c93e35cf9130061c57bf5e9181387e5e858/merged major:0 minor:762 fsType:overlay blockSize:0} overlay_0-768:{mountpoint:/var/lib/containers/storage/overlay/b54c6bdac11911119f3d492396f43db77c2ea301b676e9fd06686297e89faa0b/merged major:0 minor:768 fsType:overlay blockSize:0} overlay_0-769:{mountpoint:/var/lib/containers/storage/overlay/b6789f26e2f3e6ace86ed3399245e9e0b4fb8d861dcca44fbf43bc37f63b7fae/merged major:0 minor:769 fsType:overlay blockSize:0} 
overlay_0-77:{mountpoint:/var/lib/containers/storage/overlay/ff8d9385492d207783c2acb31e679d4d1eab09614cf621cc793ea5184dfdbc89/merged major:0 minor:77 fsType:overlay blockSize:0} overlay_0-775:{mountpoint:/var/lib/containers/storage/overlay/d955f26483e024e60dc082cb5d7d04935b46520d2c283033630c91bd9144fa5d/merged major:0 minor:775 fsType:overlay blockSize:0} overlay_0-777:{mountpoint:/var/lib/containers/storage/overlay/640145cb6672d732168b306c1f4116aa181e673ef9d6b714604844e111db9a28/merged major:0 minor:777 fsType:overlay blockSize:0} overlay_0-788:{mountpoint:/var/lib/containers/storage/overlay/2c249f3f3f8c23947115c510dbd5c6a9ef3de130302c1c52b2b7c7d6345de714/merged major:0 minor:788 fsType:overlay blockSize:0} overlay_0-79:{mountpoint:/var/lib/containers/storage/overlay/f639032e965a6461674a0b5851fc1e6e6abb9d2abc37542c832b8c098be5d2a2/merged major:0 minor:79 fsType:overlay blockSize:0} overlay_0-790:{mountpoint:/var/lib/containers/storage/overlay/f33201b4b04366d5fc8246a7c414d7d7ddadb77aeaa3a7734f61744724ea7cb7/merged major:0 minor:790 fsType:overlay blockSize:0} overlay_0-799:{mountpoint:/var/lib/containers/storage/overlay/17725fb1bced621d1cde8623ca21345d8b5b89b504de0b2b50b865ffe317d8b4/merged major:0 minor:799 fsType:overlay blockSize:0} overlay_0-801:{mountpoint:/var/lib/containers/storage/overlay/34da720fc8271108be9617ca89476a7ee571f8c091c00cdf8777fd7c5ef6580b/merged major:0 minor:801 fsType:overlay blockSize:0} overlay_0-809:{mountpoint:/var/lib/containers/storage/overlay/d2098f0015487d897a2f8752cbe4ad052925bd39d60bc66ced8071226f977ca9/merged major:0 minor:809 fsType:overlay blockSize:0} overlay_0-81:{mountpoint:/var/lib/containers/storage/overlay/3a4d2c246142a4e9f7ff99b6457b29cb45625aff308d357fd5f154f3f430fed6/merged major:0 minor:81 fsType:overlay blockSize:0} overlay_0-811:{mountpoint:/var/lib/containers/storage/overlay/db035651d53eb7ceed7ea6134ea4d4cb467d59f4e135b3c920577a15ca03c380/merged major:0 minor:811 fsType:overlay blockSize:0} 
overlay_0-834:{mountpoint:/var/lib/containers/storage/overlay/3b4fea7392225f49ac2544106b57591602ef9499c79ea09e0589c2fc193bdcad/merged major:0 minor:834 fsType:overlay blockSize:0} overlay_0-855:{mountpoint:/var/lib/containers/storage/overlay/6cb58bec7e82152a02ed4e19b57dc15615de9b80c6ab1502e8f4748034d5bfea/merged major:0 minor:855 fsType:overlay blockSize:0} overlay_0-857:{mountpoint:/var/lib/containers/storage/overlay/fa7979499921f1389f6b5eea73bf5dc2a921d08ca27321ee00c7fa7612055905/merged major:0 minor:857 fsType:overlay blockSize:0} overlay_0-859:{mountpoint:/var/lib/containers/storage/overlay/47dcef13c07fc6c826e4b8d1beba267d3fb56914cba20454da3f85667823965a/merged major:0 minor:859 fsType:overlay blockSize:0} overlay_0-86:{mountpoint:/var/lib/containers/storage/overlay/529fb8b5d1c749e232a740700b493e1a6955e71f5e829a6409465748bfc3afcb/merged major:0 minor:86 fsType:overlay blockSize:0} overlay_0-865:{mountpoint:/var/lib/containers/storage/overlay/985438cd381184bb186a4a7bae2015a56b619582ca1d199314cde3b6a099f856/merged major:0 minor:865 fsType:overlay blockSize:0} overlay_0-867:{mountpoint:/var/lib/containers/storage/overlay/a7757d3d02cf79e53c85c0f2b41fd638108f78ec8d93140b219ac57993b49e67/merged major:0 minor:867 fsType:overlay blockSize:0} overlay_0-874:{mountpoint:/var/lib/containers/storage/overlay/3168ba264c6cf4e63d2db3c10b8ffe80469c7fdf60cb4ae7c11d7b9dcf5748be/merged major:0 minor:874 fsType:overlay blockSize:0} overlay_0-876:{mountpoint:/var/lib/containers/storage/overlay/d9da83fe3e8e7b644f822e8edf8cf2cda98d666abfa7b0f028acf1588914bd57/merged major:0 minor:876 fsType:overlay blockSize:0} overlay_0-878:{mountpoint:/var/lib/containers/storage/overlay/dbbbd3898ff2df50b9113463f39fd83ec2c5162f1fd27acf3a1cc59108c51515/merged major:0 minor:878 fsType:overlay blockSize:0} overlay_0-88:{mountpoint:/var/lib/containers/storage/overlay/fa5392bc26916fab01630a8beb47b4bf0abee242c998c9a9a616cce86bdb9e38/merged major:0 minor:88 fsType:overlay blockSize:0} 
overlay_0-880:{mountpoint:/var/lib/containers/storage/overlay/fc433db4064ed2bc9add1c9570fbc3469bddcfb86b88342bf19c8a7be845b650/merged major:0 minor:880 fsType:overlay blockSize:0} overlay_0-883:{mountpoint:/var/lib/containers/storage/overlay/389716490efb255f1aac4b4a7037cf032909d5ff3da0aff5504a42f8e53042b6/merged major:0 minor:883 fsType:overlay blockSize:0} overlay_0-885:{mountpoint:/var/lib/containers/storage/overlay/869b8a929ecd04f8b1fef075ee6423088eb9593cae37e3e330313cfb66de69ad/merged major:0 minor:885 fsType:overlay blockSize:0} overlay_0-887:{mountpoint:/var/lib/containers/storage/overlay/f373b5f4163f5ff4bf6f14df2cf5043d0ddf84c0876c7838e5f53d9f020f3bf3/merged major:0 minor:887 fsType:overlay blockSize:0} overlay_0-889:{mountpoint:/var/lib/containers/storage/overlay/994ba50522439ca3959c5b1e87cbdaa0489eadd4f34fe9717e55f863651e7bc2/merged major:0 minor:889 fsType:overlay blockSize:0} overlay_0-891:{mountpoint:/var/lib/containers/storage/overlay/6eebe56f79fd32dc16302752c1ebc8f125905baeeac3363f82bae213491ffd04/merged major:0 minor:891 fsType:overlay blockSize:0} overlay_0-896:{mountpoint:/var/lib/containers/storage/overlay/3e93e3e7b7d77ce0b835045c0d09522ade3ed55a1c4a68d9a0e1a236623b2e66/merged major:0 minor:896 fsType:overlay blockSize:0} overlay_0-90:{mountpoint:/var/lib/containers/storage/overlay/1132648ecda8c226151a1a71f6d66f4cc2793b0502dfb9e1eab9b817c2ed8d1b/merged major:0 minor:90 fsType:overlay blockSize:0} overlay_0-91:{mountpoint:/var/lib/containers/storage/overlay/310d66b90d17bb4e26b77b72ec81c12c4e4046959b32de9e809c6c5bf2d6c4a0/merged major:0 minor:91 fsType:overlay blockSize:0} overlay_0-913:{mountpoint:/var/lib/containers/storage/overlay/0f1d6508c7b4c198450719e82350e5cadabbe089ff2f37135e4e03952132659f/merged major:0 minor:913 fsType:overlay blockSize:0} overlay_0-951:{mountpoint:/var/lib/containers/storage/overlay/e5eb8f608bd8304fafb0be78645a850c5e22a98d346aeb604be8ca668e2b5f0d/merged major:0
minor:951 fsType:overlay blockSize:0} overlay_0-955:{mountpoint:/var/lib/containers/storage/overlay/31ad4b32953b65c41996d8995b2c9a0d42332b3103ce61a9ea1dcdf24dada2ab/merged major:0 minor:955 fsType:overlay blockSize:0} overlay_0-957:{mountpoint:/var/lib/containers/storage/overlay/19dc97cffdc1092aebf9af4b7fd600e9544bf8c17bb463f507730f992407bcad/merged major:0 minor:957 fsType:overlay blockSize:0} overlay_0-960:{mountpoint:/var/lib/containers/storage/overlay/67dcdb12fc49f9cd3f73f679d2fdc514ce58db984fa4469368e57a03a4f8758b/merged major:0 minor:960 fsType:overlay blockSize:0} overlay_0-963:{mountpoint:/var/lib/containers/storage/overlay/386e66e57a2a3eee9c0f12dd9cdc952a15b1d1bddd425d8e61281201c3c9d257/merged major:0 minor:963 fsType:overlay blockSize:0} overlay_0-969:{mountpoint:/var/lib/containers/storage/overlay/bb40530aaa19612a398f728235616b4a7aad7e5e0f7cae5bae5358d63a78728c/merged major:0 minor:969 fsType:overlay blockSize:0} overlay_0-98:{mountpoint:/var/lib/containers/storage/overlay/d507999c2255e488350ca5cc9bcdaee817df76fa314b2593d404fef7317ebc73/merged major:0 minor:98 fsType:overlay blockSize:0} overlay_0-99:{mountpoint:/var/lib/containers/storage/overlay/000dbc1436b185ecff76fc6d8dda9d7f309c4a1651c2b5b19babc824ddecfb87/merged major:0 minor:99 fsType:overlay blockSize:0} overlay_0-995:{mountpoint:/var/lib/containers/storage/overlay/5d08e9df1298a64cac4311dab080fc9ff1910b07f5036697a682b6ef54dc87c8/merged major:0 minor:995 fsType:overlay blockSize:0} overlay_0-997:{mountpoint:/var/lib/containers/storage/overlay/0e8abc198c42aeef2c6b4a59bab11d652f57b04c82d081325dd8b8139dbbb0a8/merged major:0 minor:997 fsType:overlay blockSize:0} overlay_0-999:{mountpoint:/var/lib/containers/storage/overlay/724204e5b09b5972bf8fea369bef92f76f9e303e2ad8a126085ad43d2b2ec99d/merged major:0 minor:999 fsType:overlay blockSize:0}] Mar 12 18:29:20.655021 master-0 kubenswrapper[29097]: I0312 18:29:20.653670 29097 manager.go:217] Machine: {Timestamp:2026-03-12 18:29:20.652925642 +0000 UTC 
m=+0.206905759 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654116352 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:14bcec6218994562885f2bb31137a053 SystemUUID:14bcec62-1899-4562-885f-2bb31137a053 BootID:8ea9dfaa-21ba-4398-883d-eae43b35536d Filesystems:[{Device:/var/lib/kubelet/pods/492e9833-4513-4f2f-b865-d05a8973fadc/volumes/kubernetes.io~projected/kube-api-access-5kn2k DeviceMajor:0 DeviceMinor:698 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/37cd9c0a-697e-4e67-932b-b331ff77c8c0/volumes/kubernetes.io~projected/kube-api-access-pfpb9 DeviceMajor:0 DeviceMinor:233 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/1287cbb9-c9f6-48d2-9fda-f4464074e41b/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:804 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/d4ae1240-e04e-48e9-88df-9f1a53508da7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-752 DeviceMajor:0 DeviceMinor:752 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1167 DeviceMajor:0 DeviceMinor:1167 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:138 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:1036 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} 
{Device:/var/lib/kubelet/pods/25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327/volumes/kubernetes.io~projected/kube-api-access-x6595 DeviceMajor:0 DeviceMinor:820 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-960 DeviceMajor:0 DeviceMinor:960 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9c555e5bffa63ad24656c5dfa5ef32654f3cce81a377d07d84caf4aca5f33e3f/userdata/shm DeviceMajor:0 DeviceMinor:56 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-859 DeviceMajor:0 DeviceMinor:859 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-692 DeviceMajor:0 DeviceMinor:692 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-599 DeviceMajor:0 DeviceMinor:599 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-395 DeviceMajor:0 DeviceMinor:395 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1085 DeviceMajor:0 DeviceMinor:1085 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b38e7fcd-8f7a-4d4f-8702-7ef205261054/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:836 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/af342b8bd10a5707fb2e78e4192b40fedda1b3166a7a1ba47d9935fc638c9b76/userdata/shm DeviceMajor:0 DeviceMinor:846 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-809 DeviceMajor:0 DeviceMinor:809 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/17f1eb5b22dadcc1a27bca5d2e41cabae79a53d549f65fc68a87a8776fc86dbf/userdata/shm DeviceMajor:0 DeviceMinor:44 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} 
{Device:overlay_0-1053 DeviceMajor:0 DeviceMinor:1053 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1092 DeviceMajor:0 DeviceMinor:1092 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-440 DeviceMajor:0 DeviceMinor:440 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/492e9833-4513-4f2f-b865-d05a8973fadc/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:720 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/518ffff8-8119-41be-8b76-ce49d5751254/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:1011 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c14a32fc9f0111ac97c8d0756c820cfe5f40ed691d6de42ac60400f58318b138/userdata/shm DeviceMajor:0 DeviceMinor:114 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/51eb717b-d11f-4bc3-8df6-deb51d5889f3/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:561 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1162 DeviceMajor:0 DeviceMinor:1162 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:559 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/f3a2cda2-b70f-4128-a1be-48503f5aad6d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-913 DeviceMajor:0 DeviceMinor:913 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/ee55b576-6b8d-4217-b5a7-93b023a1e885/volumes/kubernetes.io~projected/kube-api-access-j5lf8 DeviceMajor:0 DeviceMinor:992 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/3d77a98a-0176-4924-81d3-8e9890852b38/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:1076 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-391 DeviceMajor:0 DeviceMinor:391 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bf988f8c0a2c5b4133ef3fafc379f42b9d2b5f0585dc6f41596f02be776951fc/userdata/shm DeviceMajor:0 DeviceMinor:828 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/fb529297-b3de-4167-a91e-0a63725b3b0f/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:487 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-642 DeviceMajor:0 DeviceMinor:642 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1133 DeviceMajor:0 DeviceMinor:1133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e720e1d0-5a6d-4b76-8b25-5963e24950f5/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:220 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1cd179b2f7f0fe564fa7a9477bf555cfaf8b89c4a460b563f7a642b74759364e/userdata/shm DeviceMajor:0 DeviceMinor:546 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-41 DeviceMajor:0 
DeviceMinor:41 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d94dc349-c5cb-4f12-8e48-867030af4981/volumes/kubernetes.io~projected/kube-api-access-zjmcv DeviceMajor:0 DeviceMinor:228 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1120 DeviceMajor:0 DeviceMinor:1120 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-963 DeviceMajor:0 DeviceMinor:963 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dd6da0ee34b8892cb152f5bedf147197e5f0cc5b453d7c1a52f8c962aaace2e8/userdata/shm DeviceMajor:0 DeviceMinor:832 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-999 DeviceMajor:0 DeviceMinor:999 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f58312bc5c3e22538ea35107690dd2a543db5a56cd4a19ebaf6640fbb1518551/userdata/shm DeviceMajor:0 DeviceMinor:1099 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/9f1f60fa-d79d-4f31-b5bf-2ad333151537/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1138 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-699 DeviceMajor:0 DeviceMinor:699 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-385 DeviceMajor:0 DeviceMinor:385 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-438 DeviceMajor:0 DeviceMinor:438 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-77 
DeviceMajor:0 DeviceMinor:77 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-79 DeviceMajor:0 DeviceMinor:79 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b6d288e3-8e73-44d2-874d-64c6c98dd991/volumes/kubernetes.io~projected/kube-api-access-vdb9w DeviceMajor:0 DeviceMinor:94 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6d3dbd3e29a6d7e3e111f4c45f534b1d831fee55b19f442dc477ede7e14f8ccb/userdata/shm DeviceMajor:0 DeviceMinor:155 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/0fb78c61-2051-42e2-8668-fa7404ccac43/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:794 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/45aa4887-c913-4ece-ae34-fcde33832621/volumes/kubernetes.io~projected/kube-api-access-4vr66 DeviceMajor:0 DeviceMinor:227 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/88d123f937ec5c733d7e95dda1a51126ac31987054c509b5e60506575b947b18/userdata/shm DeviceMajor:0 DeviceMinor:569 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/677d598751a9389168de4b8d58e7ebd447bfc781ea7149cdcbbd5de656faaac5/userdata/shm DeviceMajor:0 DeviceMinor:380 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/d4ae1240-e04e-48e9-88df-9f1a53508da7/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:239 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:415 Capacity:32475516928 Type:vfs 
Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/70abcecf11f5f6f42b55c74bce2244e7addd9a94c042b787c6169811b3dbde3f/userdata/shm DeviceMajor:0 DeviceMinor:822 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/e697746f-fb9e-4d10-ab61-33c68e62cc0d/volumes/kubernetes.io~projected/kube-api-access-vct98 DeviceMajor:0 DeviceMinor:240 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/9b41258c-ac1d-4e00-ac5e-732d85441f12/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:389 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bd825685a2a078da6de9c77b8a86a4456fa5c958068f18e079159355b91a76d4/userdata/shm DeviceMajor:0 DeviceMinor:827 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-622 DeviceMajor:0 DeviceMinor:622 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/47850839-bb4b-41e9-ac31-f1cabbb4926d/volumes/kubernetes.io~projected/kube-api-access-krrkl DeviceMajor:0 DeviceMinor:246 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-171 DeviceMajor:0 DeviceMinor:171 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/aee40f88-83e4-45c8-8331-969943f9f9aa/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:816 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-472 DeviceMajor:0 DeviceMinor:472 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-600 DeviceMajor:0 DeviceMinor:600 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-340 DeviceMajor:0 DeviceMinor:340 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-457 DeviceMajor:0 DeviceMinor:457 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-520 DeviceMajor:0 DeviceMinor:520 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-777 DeviceMajor:0 DeviceMinor:777 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9f1f60fa-d79d-4f31-b5bf-2ad333151537/volumes/kubernetes.io~projected/kube-api-access-hqlfx DeviceMajor:0 DeviceMinor:1141 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/9717d467-af1a-4de0-88e0-c47ec4d12d6e/volumes/kubernetes.io~projected/kube-api-access-kbzcs DeviceMajor:0 DeviceMinor:545 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/d1b3859c-20a1-4a1c-8508-86ed843768f5/volumes/kubernetes.io~projected/kube-api-access-gw4m5 DeviceMajor:0 DeviceMinor:557 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1024 DeviceMajor:0 DeviceMinor:1024 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/be2da107-a419-423f-a657-44d681291f28/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:471 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/419bbcc10e95d196cd0f08dbf057bbc2aa7a617fdfb7f0d1b356baa7bbabca04/userdata/shm DeviceMajor:0 DeviceMinor:273 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/030160af-c915-4f00-903a-1c4b5c2b719a/volumes/kubernetes.io~projected/kube-api-access-9p4dz DeviceMajor:0 DeviceMinor:491 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-447 DeviceMajor:0 DeviceMinor:447 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/055f5c67-f512-4510-99c5-e194944b0599/volumes/kubernetes.io~projected/kube-api-access-tkt7d DeviceMajor:0 DeviceMinor:243 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/d1b3859c-20a1-4a1c-8508-86ed843768f5/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:556 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6dd6381115d9cbf9ba7c1a108737553c31041609750cc0e631e36ed92f66311d/userdata/shm DeviceMajor:0 DeviceMinor:100 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/51eb717b-d11f-4bc3-8df6-deb51d5889f3/volumes/kubernetes.io~projected/kube-api-access-gbnx8 DeviceMajor:0 DeviceMinor:225 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-505 DeviceMajor:0 DeviceMinor:505 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-735 DeviceMajor:0 DeviceMinor:735 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4e02da5dec5be8e8f6d924d6c2fb726f7b25e71cacfc4eb1074f2a274b8a70bf/userdata/shm DeviceMajor:0 DeviceMinor:371 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-713 DeviceMajor:0 DeviceMinor:713 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/86c89d73955641e1c897c48a0e24b070831cd22e13c7526466c6f9aac066f9fb/userdata/shm DeviceMajor:0 DeviceMinor:746 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-969 DeviceMajor:0 DeviceMinor:969 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/f61feb293ec886a1805e62ec052aa4ca410bad475cca6977bbcf9b16b205a3fd/userdata/shm DeviceMajor:0 DeviceMinor:512 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7bb177ed28141c5fad6532f2da685328c613d7795aff40f2cb06337556b42750/userdata/shm DeviceMajor:0 DeviceMinor:850 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-428 DeviceMajor:0 DeviceMinor:428 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-543 DeviceMajor:0 DeviceMinor:543 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cdac8f08c2f48b7e4861a00ec2d8e5264134cf6ddac6c83f56497027e5816cb7/userdata/shm DeviceMajor:0 DeviceMinor:765 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-731 DeviceMajor:0 DeviceMinor:731 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1157 DeviceMajor:0 DeviceMinor:1157 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e720e1d0-5a6d-4b76-8b25-5963e24950f5/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:237 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/bce831df-c604-4608-a24e-b14d62c5287a/volumes/kubernetes.io~projected/kube-api-access-wfjj6 DeviceMajor:0 DeviceMinor:328 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2acd016733769b6d86086e869e4b5b990685163236e57389d21ff18ee823169b/userdata/shm DeviceMajor:0 DeviceMinor:374 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} 
{Device:overlay_0-855 DeviceMajor:0 DeviceMinor:855 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9f1f60fa-d79d-4f31-b5bf-2ad333151537/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1139 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-281 DeviceMajor:0 DeviceMinor:281 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-398 DeviceMajor:0 DeviceMinor:398 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:792 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-865 DeviceMajor:0 DeviceMinor:865 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ecf7670cd0c657ac23db39730253e7de6d6d1e9634e025fe447cc9e07fe1d91a/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d3e5b8c8-a100-4880-a0b9-9c3989d4e739/volumes/kubernetes.io~projected/kube-api-access-jrg6p DeviceMajor:0 DeviceMinor:780 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/604044f4-9b0b-4747-827d-843f3cfa7077/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:815 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-885 DeviceMajor:0 DeviceMinor:885 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1062 DeviceMajor:0 DeviceMinor:1062 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/8ad05507-e242-4ff8-ae80-c16ff9ee68e2/volumes/kubernetes.io~projected/kube-api-access-th8tc DeviceMajor:0 DeviceMinor:234 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/e22c7035-4b7a-48cb-9abb-db277b387842/volumes/kubernetes.io~projected/kube-api-access-pmxc2 DeviceMajor:0 DeviceMinor:244 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/33feec78-4592-4343-965b-aa1b7044fcf3/volumes/kubernetes.io~projected/kube-api-access-ptrtx DeviceMajor:0 DeviceMinor:303 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/467832511abdd120edabe55a66306c8828fbfde7aa084b7647ffcfdeb1475b2c/userdata/shm DeviceMajor:0 DeviceMinor:541 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/055f5c67-f512-4510-99c5-e194944b0599/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27/volumes/kubernetes.io~projected/kube-api-access-ggsdx DeviceMajor:0 DeviceMinor:235 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-997 DeviceMajor:0 DeviceMinor:997 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ab67b82c7d40212f9275c2d813cf0ddfb1de4b0eb68ab01e5d797fad81a0d351/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:812 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/b8dd13a7-10e5-431b-8d30-405dcfea02f5/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:128 Capacity:32475516928 Type:vfs Inodes:4108168 
HasInodes:true} {Device:overlay_0-762 DeviceMajor:0 DeviceMinor:762 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-454 DeviceMajor:0 DeviceMinor:454 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f3a2cda2-b70f-4128-a1be-48503f5aad6d/volumes/kubernetes.io~projected/kube-api-access-tcvfv DeviceMajor:0 DeviceMinor:241 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c7f70b704680d5914ffad158cbccda7455cb9abad7ebd364fad668180fbeff37/userdata/shm DeviceMajor:0 DeviceMinor:112 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/e5fb0152-3efd-4000-bce3-fa90b75316ae/volumes/kubernetes.io~projected/kube-api-access-pkftr DeviceMajor:0 DeviceMinor:819 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/025f6ef7726027b226244a49b1b7aa7b4b726a6a64b08241b8944ae1790681b8/userdata/shm DeviceMajor:0 DeviceMinor:567 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/4519000b-e475-4c26-a1c0-bf05cd9c242b/volumes/kubernetes.io~projected/kube-api-access-5x57x DeviceMajor:0 DeviceMinor:111 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/07e040d6dfa9951cac42e33315b3d655ef1dac90f6ba66c364219500701a9ef4/userdata/shm DeviceMajor:0 DeviceMinor:739 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/238ce3dd6965f9273cbd743e0b3e1979d392d0ae170e37e7a7824e217686dfd8/userdata/shm DeviceMajor:0 DeviceMinor:417 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/59af426bb753de2f517179014e6cfd5fa8b94b02ab3fedab6e4b42ba0bebac29/userdata/shm 
DeviceMajor:0 DeviceMinor:575 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/4048e453-a983-4708-89b6-a81af0067e29/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:696 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-67 DeviceMajor:0 DeviceMinor:67 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b8dd13a7-10e5-431b-8d30-405dcfea02f5/volumes/kubernetes.io~projected/kube-api-access-7rhmv DeviceMajor:0 DeviceMinor:154 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/ab926874-9722-4e65-9084-27b2f9915450/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:242 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-442 DeviceMajor:0 DeviceMinor:442 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1064 DeviceMajor:0 DeviceMinor:1064 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-396 DeviceMajor:0 DeviceMinor:396 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-704 DeviceMajor:0 DeviceMinor:704 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-592 DeviceMajor:0 DeviceMinor:592 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-883 DeviceMajor:0 DeviceMinor:883 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-414 DeviceMajor:0 DeviceMinor:414 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-463 DeviceMajor:0 DeviceMinor:463 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa/volumes/kubernetes.io~projected/kube-api-access-s55hv 
DeviceMajor:0 DeviceMinor:230 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-548 DeviceMajor:0 DeviceMinor:548 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4048e453-a983-4708-89b6-a81af0067e29/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:683 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/66ebd53076fd17791334e64546a2f2ecb5fadc07eb32a382f94ef82be445ec00/userdata/shm DeviceMajor:0 DeviceMinor:1049 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/266b9f4f-3fb4-474d-84df-0a6c687c7e9a/volumes/kubernetes.io~projected/kube-api-access-6tmqs DeviceMajor:0 DeviceMinor:540 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-618 DeviceMajor:0 DeviceMinor:618 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0fb78c61-2051-42e2-8668-fa7404ccac43/volumes/kubernetes.io~projected/kube-api-access-zsdjs DeviceMajor:0 DeviceMinor:796 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:1046 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/236f2886-bb69-49a7-9471-36454fd1cbd3/volumes/kubernetes.io~projected/kube-api-access-b6ggg DeviceMajor:0 DeviceMinor:224 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc/volumes/kubernetes.io~projected/kube-api-access-2lj7z DeviceMajor:0 DeviceMinor:531 Capacity:32475516928 Type:vfs 
Inodes:4108168 HasInodes:true} {Device:overlay_0-157 DeviceMajor:0 DeviceMinor:157 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-86 DeviceMajor:0 DeviceMinor:86 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-955 DeviceMajor:0 DeviceMinor:955 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/604044f4-9b0b-4747-827d-843f3cfa7077/volumes/kubernetes.io~projected/kube-api-access-fqzmm DeviceMajor:0 DeviceMinor:821 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-90 DeviceMajor:0 DeviceMinor:90 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ee55b576-6b8d-4217-b5a7-93b023a1e885/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:988 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a27b7d74527b56755a6c2c471b3ca3c73b2cfc54277efe40b5551df95fef2671/userdata/shm DeviceMajor:0 DeviceMinor:327 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-483 DeviceMajor:0 DeviceMinor:483 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe/volumes/kubernetes.io~projected/kube-api-access-tdlcw DeviceMajor:0 DeviceMinor:142 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/a1e2340b-ebca-40de-b1e0-8133999cd860/volumes/kubernetes.io~projected/kube-api-access-6vpbp DeviceMajor:0 DeviceMinor:245 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/9b41258c-ac1d-4e00-ac5e-732d85441f12/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:367 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-121 
DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/adb0dbbf-458d-46f5-b236-d4904e125418/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1089 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/473fbb88708371a6bebda6f8bc6fbc876db715f79766d57af954db0a509d99f7/userdata/shm DeviceMajor:0 DeviceMinor:1014 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6a236b16d484393f57933afd10e1f7dd4dd1ef7cdb2b760e6b6520b399e5ee85/userdata/shm DeviceMajor:0 DeviceMinor:1018 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e11e6ab5433862f323f8ba8f5b3beee99fbf9268c9b94118367fbf5cbb898018/userdata/shm DeviceMajor:0 DeviceMinor:449 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cbd303c81d220cd5ed6e63d675881c37da5cce6a8a3c62add5c0bf5721b5fd9f/userdata/shm DeviceMajor:0 DeviceMinor:718 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-81 DeviceMajor:0 DeviceMinor:81 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8c241720-7815-40fd-8d4a-1685a43b5893/volumes/kubernetes.io~projected/kube-api-access-l8qw4 DeviceMajor:0 DeviceMinor:366 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-434 DeviceMajor:0 DeviceMinor:434 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/78c13011-7a79-445f-807c-4f5e75643549/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:1074 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/91d19dd0041e348f5ab95fd10ff19be4195ac501d593d4346b94e73b4b7bfba3/userdata/shm DeviceMajor:0 DeviceMinor:252 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/34cbf061-4c76-476e-bed9-0a133c744862/volumes/kubernetes.io~projected/kube-api-access-gmsnk DeviceMajor:0 DeviceMinor:737 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1026 DeviceMajor:0 DeviceMinor:1026 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8ad05507-e242-4ff8-ae80-c16ff9ee68e2/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:406 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1113 DeviceMajor:0 DeviceMinor:1113 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:562 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/e22c7035-4b7a-48cb-9abb-db277b387842/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:405 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-876 DeviceMajor:0 DeviceMinor:876 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9198031b5b07e64ace92b4c83419c05c8903b54c6277e82aff6a36c6cdfe7576/userdata/shm DeviceMajor:0 DeviceMinor:1083 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/e697746f-fb9e-4d10-ab61-33c68e62cc0d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:221 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/60ce66e2a62ed17df4cad9067d0bb6d4940a38dc4b5a5337ba95a9117aca3c70/userdata/shm DeviceMajor:0 DeviceMinor:265 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-335 DeviceMajor:0 DeviceMinor:335 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/aee40f88-83e4-45c8-8331-969943f9f9aa/volumes/kubernetes.io~projected/kube-api-access-th72r DeviceMajor:0 DeviceMinor:824 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-362 DeviceMajor:0 DeviceMinor:362 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9efb49a7f1b6a902873e9b844b8f9a0a68e95cea55ba6d37aefc0f305d7e46f9/userdata/shm DeviceMajor:0 DeviceMinor:84 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-631 DeviceMajor:0 DeviceMinor:631 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-788 DeviceMajor:0 DeviceMinor:788 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:563 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-629 DeviceMajor:0 DeviceMinor:629 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b648b6de-59a6-42da-84e2-77ea0264ae25/volumes/kubernetes.io~projected/kube-api-access-7n4d5 DeviceMajor:0 DeviceMinor:1012 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/41c1bd85-369e-4341-9e80-8b4b248b5572/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1045 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/427d9fb4da53770ccc67c12bc3aaae3e507cd75207365e40c1611eeaaf274b84/userdata/shm 
DeviceMajor:0 DeviceMinor:47 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-126 DeviceMajor:0 DeviceMinor:126 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/78c13011-7a79-445f-807c-4f5e75643549/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1070 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/49ed17fafdb495990cffcb60e09d22b57348e9bbf59679c7126d84628d0f24f1/userdata/shm DeviceMajor:0 DeviceMinor:711 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/fb529297-b3de-4167-a91e-0a63725b3b0f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:486 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/3d77a98a-0176-4924-81d3-8e9890852b38/volumes/kubernetes.io~projected/kube-api-access-f72ng DeviceMajor:0 DeviceMinor:1079 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fa6c2fe81e494b2ba395dd1830ab3075ce81e641a81c84edd4df4d6a6849559f/userdata/shm DeviceMajor:0 DeviceMinor:1142 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/062f1b21-2ffc-47da-8334-427c3b2a1a90/volumes/kubernetes.io~projected/kube-api-access-jn9nf DeviceMajor:0 DeviceMinor:236 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1d311f9e8cf6ad3cdfb6335b00f9729ed813d3bacc476060ca21b806ee856231/userdata/shm DeviceMajor:0 DeviceMinor:267 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} 
{Device:/var/lib/kubelet/pods/e94d098b-fbcc-4e85-b8ad-42f3a21c822c/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:560 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/e5fb0152-3efd-4000-bce3-fa90b75316ae/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:814 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-346 DeviceMajor:0 DeviceMinor:346 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-750 DeviceMajor:0 DeviceMinor:750 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1055 DeviceMajor:0 DeviceMinor:1055 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/477c2eb598b783cdff738fbc37cca5c05334e3fbafc9bde3daf1f7428b823f9e/userdata/shm DeviceMajor:0 DeviceMinor:1016 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-91 DeviceMajor:0 DeviceMinor:91 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-161 DeviceMajor:0 DeviceMinor:161 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-70 DeviceMajor:0 DeviceMinor:70 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-880 DeviceMajor:0 DeviceMinor:880 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1107 DeviceMajor:0 DeviceMinor:1107 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e697746f-fb9e-4d10-ab61-33c68e62cc0d/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:222 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-341 DeviceMajor:0 DeviceMinor:341 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/7982749b657bfab7994ceaf29145e70be8f2384ed3fb94c1cb38726c467e71d6/userdata/shm DeviceMajor:0 DeviceMinor:576 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d8368573a80125faaafb84704a42eab10d08f21db89fb0224a3e775974fbecf4/userdata/shm DeviceMajor:0 DeviceMinor:840 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1022 DeviceMajor:0 DeviceMinor:1022 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9f1f60fa-d79d-4f31-b5bf-2ad333151537/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1140 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/74bbf11cd33cced50ba626f06b188adf24ce7f72b1161eb2c06db1ce6ae46dd5/userdata/shm DeviceMajor:0 DeviceMinor:139 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bafb4a547df5e8f39b94a88d98e85d34e5a0230468f2013bc4da6ee9fbc59ee3/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-639 DeviceMajor:0 DeviceMinor:639 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-603 DeviceMajor:0 DeviceMinor:603 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/518ffff8-8119-41be-8b76-ce49d5751254/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:1010 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5a8691e7dd271734f7d5ba67a7c54479d001ddcb15882ee789f7857b1fdecfe2/userdata/shm DeviceMajor:0 DeviceMinor:1051 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/3d77a98a-0176-4924-81d3-8e9890852b38/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config 
DeviceMajor:0 DeviceMinor:1077 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64/volumes/kubernetes.io~projected/kube-api-access-fdlxn DeviceMajor:0 DeviceMinor:238 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/85053d5f110db3eb5945372c71fba0aaee9c7dfe111d937780aa0b35eca2e681/userdata/shm DeviceMajor:0 DeviceMinor:416 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1118 DeviceMajor:0 DeviceMinor:1118 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1103 DeviceMajor:0 DeviceMinor:1103 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-467 DeviceMajor:0 DeviceMinor:467 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-99 DeviceMajor:0 DeviceMinor:99 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-532 DeviceMajor:0 DeviceMinor:532 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1c062db0efa15fd6679d2718a5857a9b8db81f25fe5e3a47bd35e6f192db0dd6/userdata/shm DeviceMajor:0 DeviceMinor:250 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bc7646036582f47fdf8d0b7175478e014e467156ca033a5485ffd89d7588c9e5/userdata/shm DeviceMajor:0 DeviceMinor:853 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-749 DeviceMajor:0 DeviceMinor:749 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/e94d098b-fbcc-4e85-b8ad-42f3a21c822c/volumes/kubernetes.io~projected/kube-api-access-bttzm DeviceMajor:0 DeviceMinor:247 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a3668de3fedf57192290be88e895d89eca099cb587eeab867bde241aeee908bc/userdata/shm DeviceMajor:0 DeviceMinor:838 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-436 DeviceMajor:0 DeviceMinor:436 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-657 DeviceMajor:0 DeviceMinor:657 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e7b7f3534352e488adef6510c4ad914236d57c0ac52f6e0d4e107e52563cb840/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/52d3dc87-0bf8-4a62-a6fc-0ffa6060c6a8/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:1005 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1146 DeviceMajor:0 DeviceMinor:1146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-513 DeviceMajor:0 DeviceMinor:513 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8b4322c396b926726b3445bf3f4c514365e3dc0962cabf32996b7feaa6ce265c/userdata/shm DeviceMajor:0 DeviceMinor:570 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-98 DeviceMajor:0 DeviceMinor:98 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d94dc349-c5cb-4f12-8e48-867030af4981/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:373 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/17d1a088b8419eadaf38001b6fa832ae43cb7cf4605e77942c3aeacc31e4a82a/userdata/shm DeviceMajor:0 DeviceMinor:844 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-867 DeviceMajor:0 DeviceMinor:867 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-957 DeviceMajor:0 DeviceMinor:957 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/38c299b0655225599ee9b03928de90a7927480cc6329508c816e9e361bbcfa16/userdata/shm DeviceMajor:0 DeviceMinor:256 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/da8a3dd02c7bc3e5376ebe604c414570540ccdc280e818957636de9c32beb180/userdata/shm DeviceMajor:0 DeviceMinor:574 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1148 DeviceMajor:0 DeviceMinor:1148 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1287cbb9-c9f6-48d2-9fda-f4464074e41b/volumes/kubernetes.io~projected/kube-api-access-hjz8k DeviceMajor:0 DeviceMinor:807 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/e5fb0152-3efd-4000-bce3-fa90b75316ae/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:813 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/0cc54e47-af53-448a-b1c9-043710890a32/volumes/kubernetes.io~projected/kube-api-access-bdc26 DeviceMajor:0 DeviceMinor:730 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-889 DeviceMajor:0 DeviceMinor:889 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/518ffff8-8119-41be-8b76-ce49d5751254/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:1009 Capacity:32475516928 Type:vfs 
Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e3cce5ce786ddb4af71a8112135cad1426f074e7b12c67ae740271017aa946b3/userdata/shm DeviceMajor:0 DeviceMinor:254 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:410 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/b38e7fcd-8f7a-4d4f-8702-7ef205261054/volumes/kubernetes.io~projected/kube-api-access-zp5gk DeviceMajor:0 DeviceMinor:837 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6/volumes/kubernetes.io~projected/kube-api-access-h65dg DeviceMajor:0 DeviceMinor:1047 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1155 DeviceMajor:0 DeviceMinor:1155 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e54bd9f4ed4a4d50ffb03de0e433f3897b33df6a46c77fdb71d0900fa9a91e17/userdata/shm DeviceMajor:0 DeviceMinor:261 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-633 DeviceMajor:0 DeviceMinor:633 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-534 DeviceMajor:0 DeviceMinor:534 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:566 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:368 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-375 DeviceMajor:0 DeviceMinor:375 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 
DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-387 DeviceMajor:0 DeviceMinor:387 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-45 DeviceMajor:0 DeviceMinor:45 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1184 DeviceMajor:0 DeviceMinor:1184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fb3de63e9ae8f0f90ed99bf3dc6471ec32942e542a8f9f641416a08fbffeda83/userdata/shm DeviceMajor:0 DeviceMinor:577 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/1c016b1e-d47c-47d4-a15f-4160e7731c82/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:705 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/062f1b21-2ffc-47da-8334-427c3b2a1a90/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:223 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-641 DeviceMajor:0 DeviceMinor:641 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e22c7035-4b7a-48cb-9abb-db277b387842/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:231 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0182a4eff93f7ac8355fe5920af6a23f38515c1d4a493448a8ac4ea00cfb1b71/userdata/shm DeviceMajor:0 DeviceMinor:573 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-637 DeviceMajor:0 DeviceMinor:637 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-462 DeviceMajor:0 DeviceMinor:462 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-325 DeviceMajor:0 DeviceMinor:325 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-744 DeviceMajor:0 DeviceMinor:744 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a1e2340b-ebca-40de-b1e0-8133999cd860/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-769 DeviceMajor:0 DeviceMinor:769 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-54 DeviceMajor:0 DeviceMinor:54 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed/volumes/kubernetes.io~projected/kube-api-access-wlf77 DeviceMajor:0 DeviceMinor:232 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1057 DeviceMajor:0 DeviceMinor:1057 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1020 DeviceMajor:0 DeviceMinor:1020 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/47850839-bb4b-41e9-ac31-f1cabbb4926d/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:564 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-702 DeviceMajor:0 DeviceMinor:702 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ee4c1949-96b4-4444-9675-9df1d46f681e/volumes/kubernetes.io~projected/kube-api-access-x4wsx DeviceMajor:0 DeviceMinor:831 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/d94dc349-c5cb-4f12-8e48-867030af4981/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:226 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} 
{Device:/var/lib/kubelet/pods/d1b3859c-20a1-4a1c-8508-86ed843768f5/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:552 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-589 DeviceMajor:0 DeviceMinor:589 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1090 DeviceMajor:0 DeviceMinor:1090 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-275 DeviceMajor:0 DeviceMinor:275 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9c936c5cf4325b6eaecd87ab37df8b339b08dfc494b408b448e5f3edd8efcd5a/userdata/shm DeviceMajor:0 DeviceMinor:421 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1105 DeviceMajor:0 DeviceMinor:1105 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/94fe08ab3fbb45153add02b6a25a4870b04bc9f5d9d03ddb9283e70a2fe32299/userdata/shm DeviceMajor:0 DeviceMinor:93 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-625 DeviceMajor:0 DeviceMinor:625 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-790 DeviceMajor:0 DeviceMinor:790 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3/volumes/kubernetes.io~projected/kube-api-access-xtrvs DeviceMajor:0 DeviceMinor:369 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-775 DeviceMajor:0 DeviceMinor:775 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/455f0aad-add2-49d0-995c-f92467bce2d6/volumes/kubernetes.io~projected/kube-api-access-pxsgv DeviceMajor:0 DeviceMinor:118 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-199 DeviceMajor:0 DeviceMinor:199 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-550 DeviceMajor:0 DeviceMinor:550 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f5e09875-4445-4584-94f0-243148307bb0/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:805 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/f5e09875-4445-4584-94f0-243148307bb0/volumes/kubernetes.io~projected/kube-api-access-clsd9 DeviceMajor:0 DeviceMinor:795 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/ee4c1949-96b4-4444-9675-9df1d46f681e/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:748 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-359 DeviceMajor:0 DeviceMinor:359 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/38a4bf73-479e-4bbf-9aa3-639fc288c8bc/volumes/kubernetes.io~projected/kube-api-access-2pn9h DeviceMajor:0 DeviceMinor:105 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/74eb1407-de29-42e5-9e6c-ce1bec3a9d80/volumes/kubernetes.io~projected/kube-api-access-b6ggc DeviceMajor:0 DeviceMinor:125 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a8072978949c36bf7009bf60cefe0dca093e821c0beb2fafb1092bfaa0b6ca78/userdata/shm DeviceMajor:0 DeviceMinor:842 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-530 DeviceMajor:0 DeviceMinor:530 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a91d85c0ce3e6a8b926dbcc4b0882326fc962f35e4dc2d7cda43fa3db3301729/userdata/shm DeviceMajor:0 DeviceMinor:729 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-291 DeviceMajor:0 DeviceMinor:291 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-605 DeviceMajor:0 
DeviceMinor:605 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c8f19a12a173a3644a8c884e60505576df72aa86c56475b9cb55da771d09977f/userdata/shm DeviceMajor:0 DeviceMinor:1080 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/34cbf061-4c76-476e-bed9-0a133c744862/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:515 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-758 DeviceMajor:0 DeviceMinor:758 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-378 DeviceMajor:0 DeviceMinor:378 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d92dddc8-a810-43f5-8beb-32d1c8ad8381/volumes/kubernetes.io~projected/kube-api-access-l22gw DeviceMajor:0 DeviceMinor:259 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3e5cfbce195dab841bc3d549f7ec807dce5f9f747be2dff9f428eff5e81f95a6/userdata/shm DeviceMajor:0 DeviceMinor:337 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/9b41258c-ac1d-4e00-ac5e-732d85441f12/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:384 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/b38e7fcd-8f7a-4d4f-8702-7ef205261054/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:835 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-878 DeviceMajor:0 DeviceMinor:878 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-594 DeviceMajor:0 DeviceMinor:594 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/3092260e94cf7b40349ae07a2ae8e596f460006829227c4e274b91910ac605bd/userdata/shm DeviceMajor:0 DeviceMinor:169 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/41c1bd85-369e-4341-9e80-8b4b248b5572/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:1037 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/41c1bd85-369e-4341-9e80-8b4b248b5572/volumes/kubernetes.io~projected/kube-api-access-q7pjn DeviceMajor:0 DeviceMinor:1048 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/266b9f4f-3fb4-474d-84df-0a6c687c7e9a/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:536 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-799 DeviceMajor:0 DeviceMinor:799 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b2474f5d479286c70d652654d0e6946155d42c6b7dd11abc4f60fd4bf3123854/userdata/shm DeviceMajor:0 DeviceMinor:248 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-768 DeviceMajor:0 DeviceMinor:768 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-891 DeviceMajor:0 DeviceMinor:891 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/78c13011-7a79-445f-807c-4f5e75643549/volumes/kubernetes.io~projected/kube-api-access-bmntw DeviceMajor:0 DeviceMinor:1075 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-682 DeviceMajor:0 DeviceMinor:682 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fb529297-b3de-4167-a91e-0a63725b3b0f/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:488 
Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-304 DeviceMajor:0 DeviceMinor:304 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-951 DeviceMajor:0 DeviceMinor:951 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f/volumes/kubernetes.io~projected/kube-api-access-9jgbv DeviceMajor:0 DeviceMinor:123 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/9b41258c-ac1d-4e00-ac5e-732d85441f12/volumes/kubernetes.io~projected/kube-api-access-7lmj2 DeviceMajor:0 DeviceMinor:444 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-801 DeviceMajor:0 DeviceMinor:801 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/adb0dbbf-458d-46f5-b236-d4904e125418/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1078 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ab926874-9722-4e65-9084-27b2f9915450/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1166 DeviceMajor:0 DeviceMinor:1166 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652/volumes/kubernetes.io~projected/kube-api-access-k59mb DeviceMajor:0 DeviceMinor:558 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-75 DeviceMajor:0 DeviceMinor:75 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-995 DeviceMajor:0 DeviceMinor:995 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/fb529297-b3de-4167-a91e-0a63725b3b0f/volumes/kubernetes.io~projected/kube-api-access-tmzf4 DeviceMajor:0 DeviceMinor:585 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/4687cf53-55d7-42b7-b24d-e57da3989fd6/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:817 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-738 DeviceMajor:0 DeviceMinor:738 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-834 DeviceMajor:0 DeviceMinor:834 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-72 DeviceMajor:0 DeviceMinor:72 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c/volumes/kubernetes.io~projected/kube-api-access-xmvnh DeviceMajor:0 DeviceMinor:764 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-757 DeviceMajor:0 DeviceMinor:757 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868/volumes/kubernetes.io~projected/kube-api-access-md9dt DeviceMajor:0 DeviceMinor:802 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/adb0dbbf-458d-46f5-b236-d4904e125418/volumes/kubernetes.io~projected/kube-api-access-52svc DeviceMajor:0 DeviceMinor:1082 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-811 DeviceMajor:0 DeviceMinor:811 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827056128 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-431 DeviceMajor:0 DeviceMinor:431 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-108 
DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1c016b1e-d47c-47d4-a15f-4160e7731c82/volumes/kubernetes.io~projected/kube-api-access-clz8x DeviceMajor:0 DeviceMinor:716 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/b8dd13a7-10e5-431b-8d30-405dcfea02f5/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/4687cf53-55d7-42b7-b24d-e57da3989fd6/volumes/kubernetes.io~projected/kube-api-access-68xhl DeviceMajor:0 DeviceMinor:818 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/b6d288e3-8e73-44d2-874d-64c6c98dd991/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/518ffff8-8119-41be-8b76-ce49d5751254/volumes/kubernetes.io~projected/kube-api-access-4glbr DeviceMajor:0 DeviceMinor:1013 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-163 DeviceMajor:0 DeviceMinor:163 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-379 DeviceMajor:0 DeviceMinor:379 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/37cd9c0a-697e-4e67-932b-b331ff77c8c0/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7f1277c10cbb7843daf01cf48e1bbb02b9db679e347497370ac485520e63be09/userdata/shm DeviceMajor:0 DeviceMinor:365 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-857 DeviceMajor:0 DeviceMinor:857 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-887 DeviceMajor:0 DeviceMinor:887 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/var/lib/kubelet/pods/74eb1407-de29-42e5-9e6c-ce1bec3a9d80/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-627 DeviceMajor:0 DeviceMinor:627 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1034 DeviceMajor:0 DeviceMinor:1034 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/236f2886-bb69-49a7-9471-36454fd1cbd3/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:215 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9f552493910bdb73df860ba3e68ba62d10417ad2cd26090e67bcb0c06153f976/userdata/shm DeviceMajor:0 DeviceMinor:786 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/49a13810e28c69eccfa523be3ac0813defa610868fd3abaf3cd37d9177c29502/userdata/shm DeviceMajor:0 DeviceMinor:993 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-324 DeviceMajor:0 DeviceMinor:324 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/be2da107-a419-423f-a657-44d681291f28/volumes/kubernetes.io~projected/kube-api-access-jfp84 DeviceMajor:0 DeviceMinor:717 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-896 DeviceMajor:0 DeviceMinor:896 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-88 DeviceMajor:0 DeviceMinor:88 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:522 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-610 DeviceMajor:0 DeviceMinor:610 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/81a2ffe73dc94d42d0d0d238f88887bd148c25d4cd10443967e58bd472ed7cfd/userdata/shm DeviceMajor:0 DeviceMinor:424 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/030160af-c915-4f00-903a-1c4b5c2b719a/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:229 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-572 DeviceMajor:0 DeviceMinor:572 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1087 DeviceMajor:0 DeviceMinor:1087 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:523 Capacity:32475516928 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/64939e6ed4a35637d0a2c2bc028af5d4314b96efb512849766779eb1c4382a35/userdata/shm DeviceMajor:0 DeviceMinor:270 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-874 DeviceMajor:0 DeviceMinor:874 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/16d696d609e4d9275dce2cfecd0a4d1078c8e60ea7f137e92635d2bfd874a46b/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:0182a4eff93f7ac MacAddress:96:c2:e1:f4:91:ae Speed:10000 Mtu:8900} {Name:025f6ef7726027b MacAddress:d6:83:14:bc:ba:80 Speed:10000 Mtu:8900} {Name:07e040d6dfa9951 MacAddress:52:eb:78:36:81:6c Speed:10000 Mtu:8900} 
{Name:17d1a088b8419ea MacAddress:0e:5b:a7:fe:09:8a Speed:10000 Mtu:8900} {Name:1c062db0efa15fd MacAddress:e2:61:e0:b0:ad:06 Speed:10000 Mtu:8900} {Name:1d311f9e8cf6ad3 MacAddress:e2:21:fb:1d:46:fd Speed:10000 Mtu:8900} {Name:238ce3dd6965f92 MacAddress:be:98:63:22:23:68 Speed:10000 Mtu:8900} {Name:2acd016733769b6 MacAddress:b6:2c:80:ed:9a:90 Speed:10000 Mtu:8900} {Name:3092260e94cf7b4 MacAddress:d6:36:e2:06:3b:64 Speed:10000 Mtu:8900} {Name:38c299b06552255 MacAddress:36:f7:78:59:28:d4 Speed:10000 Mtu:8900} {Name:3e5cfbce195dab8 MacAddress:aa:0f:c0:ea:ae:af Speed:10000 Mtu:8900} {Name:467832511abdd12 MacAddress:2a:c1:7b:c9:75:52 Speed:10000 Mtu:8900} {Name:473fbb88708371a MacAddress:e6:52:64:46:d5:b1 Speed:10000 Mtu:8900} {Name:477c2eb598b783c MacAddress:32:75:6a:c4:b1:63 Speed:10000 Mtu:8900} {Name:49a13810e28c69e MacAddress:22:8b:08:c7:6f:a0 Speed:10000 Mtu:8900} {Name:4e02da5dec5be8e MacAddress:16:27:72:f1:17:b2 Speed:10000 Mtu:8900} {Name:59af426bb753de2 MacAddress:86:35:84:fe:5c:83 Speed:10000 Mtu:8900} {Name:60ce66e2a62ed17 MacAddress:f6:c8:94:ad:11:f9 Speed:10000 Mtu:8900} {Name:64939e6ed4a3563 MacAddress:2e:ef:be:ec:43:7b Speed:10000 Mtu:8900} {Name:66ebd53076fd177 MacAddress:4e:9f:b4:3d:81:9d Speed:10000 Mtu:8900} {Name:70abcecf11f5f6f MacAddress:da:e2:49:82:2e:f2 Speed:10000 Mtu:8900} {Name:7982749b657bfab MacAddress:86:f6:d6:ce:a0:e9 Speed:10000 Mtu:8900} {Name:7bb177ed28141c5 MacAddress:02:52:da:0d:44:7f Speed:10000 Mtu:8900} {Name:7f1277c10cbb784 MacAddress:fa:2c:88:f7:3a:5d Speed:10000 Mtu:8900} {Name:81a2ffe73dc94d4 MacAddress:1e:60:dd:ea:39:db Speed:10000 Mtu:8900} {Name:85053d5f110db3e MacAddress:e2:9c:c7:e2:26:b5 Speed:10000 Mtu:8900} {Name:86c89d73955641e MacAddress:46:9d:aa:be:61:05 Speed:10000 Mtu:8900} {Name:88d123f937ec5c7 MacAddress:6a:a9:dc:c6:84:76 Speed:10000 Mtu:8900} {Name:8b4322c396b9267 MacAddress:5a:57:72:14:8e:ab Speed:10000 Mtu:8900} {Name:9198031b5b07e64 MacAddress:b2:f8:80:12:b5:cc Speed:10000 Mtu:8900} {Name:91d19dd0041e348 
MacAddress:ba:26:c2:c0:47:23 Speed:10000 Mtu:8900} {Name:9c936c5cf4325b6 MacAddress:3e:49:ba:b4:8c:a8 Speed:10000 Mtu:8900} {Name:9f552493910bdb7 MacAddress:76:38:08:9a:bd:62 Speed:10000 Mtu:8900} {Name:a3668de3fedf571 MacAddress:86:a6:fc:d6:3b:6d Speed:10000 Mtu:8900} {Name:a8072978949c36b MacAddress:c2:0f:9d:0a:07:58 Speed:10000 Mtu:8900} {Name:a91d85c0ce3e6a8 MacAddress:fe:0f:87:50:11:23 Speed:10000 Mtu:8900} {Name:ab67b82c7d40212 MacAddress:9e:8a:9a:15:68:3e Speed:10000 Mtu:8900} {Name:af342b8bd10a570 MacAddress:aa:a8:e6:be:c0:3e Speed:10000 Mtu:8900} {Name:b2474f5d479286c MacAddress:6e:fb:ed:80:6c:ec Speed:10000 Mtu:8900} {Name:bafb4a547df5e8f MacAddress:da:8f:77:d0:de:ff Speed:10000 Mtu:8900} {Name:bc7646036582f47 MacAddress:f2:4b:2b:f9:da:fd Speed:10000 Mtu:8900} {Name:bd825685a2a078d MacAddress:c6:65:e1:87:5b:9f Speed:10000 Mtu:8900} {Name:bf988f8c0a2c5b4 MacAddress:ee:c7:80:86:03:31 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:4a:86:78:1f:90:b3 Speed:0 Mtu:8900} {Name:c7f70b704680d59 MacAddress:1e:0c:2e:09:33:c2 Speed:10000 Mtu:8900} {Name:c8f19a12a173a36 MacAddress:46:d5:fb:d4:a1:90 Speed:10000 Mtu:8900} {Name:cbd303c81d220cd MacAddress:b6:c7:c0:c1:86:b3 Speed:10000 Mtu:8900} {Name:cdac8f08c2f48b7 MacAddress:f6:6d:7d:d5:2d:75 Speed:10000 Mtu:8900} {Name:d8368573a80125f MacAddress:0a:86:f8:db:07:b3 Speed:10000 Mtu:8900} {Name:da8a3dd02c7bc3e MacAddress:3e:80:0e:b7:71:54 Speed:10000 Mtu:8900} {Name:e11e6ab5433862f MacAddress:c2:7d:b9:23:dd:86 Speed:10000 Mtu:8900} {Name:e3cce5ce786ddb4 MacAddress:32:69:3f:87:54:50 Speed:10000 Mtu:8900} {Name:e54bd9f4ed4a4d5 MacAddress:9e:7a:3c:3e:9c:b3 Speed:10000 Mtu:8900} {Name:ecf7670cd0c657a MacAddress:12:0a:1a:01:46:d0 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:b1:d2:12 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:2e:3d:5d Speed:-1 Mtu:9000} {Name:fa6c2fe81e494b2 
MacAddress:6a:52:56:de:f4:f0 Speed:10000 Mtu:8900} {Name:fb3de63e9ae8f0f MacAddress:86:3e:c6:72:93:8e Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:6a:b6:1d:75:96:b7 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654116352 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: 
DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 12 18:29:20.655695 master-0 kubenswrapper[29097]: I0312 18:29:20.655026 29097 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 12 18:29:20.655695 master-0 kubenswrapper[29097]: I0312 18:29:20.655096 29097 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 12 18:29:20.655695 master-0 kubenswrapper[29097]: I0312 18:29:20.655370 29097 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 12 18:29:20.655695 master-0 kubenswrapper[29097]: I0312 18:29:20.655551 29097 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 18:29:20.656108 master-0 kubenswrapper[29097]: I0312 18:29:20.655585 29097 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"P
ercentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 18:29:20.656108 master-0 kubenswrapper[29097]: I0312 18:29:20.655803 29097 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 18:29:20.656108 master-0 kubenswrapper[29097]: I0312 18:29:20.655813 29097 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 18:29:20.656108 master-0 kubenswrapper[29097]: I0312 18:29:20.655823 29097 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 18:29:20.656108 master-0 kubenswrapper[29097]: I0312 18:29:20.655849 29097 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 18:29:20.656108 master-0 kubenswrapper[29097]: I0312 18:29:20.655932 29097 state_mem.go:36] "Initialized new in-memory state store" Mar 12 18:29:20.656108 master-0 kubenswrapper[29097]: I0312 18:29:20.656031 29097 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 12 18:29:20.656372 master-0 kubenswrapper[29097]: I0312 18:29:20.656116 29097 kubelet.go:418] "Attempting to sync node with API server" Mar 12 18:29:20.656372 master-0 kubenswrapper[29097]: I0312 18:29:20.656135 29097 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 18:29:20.656372 master-0 kubenswrapper[29097]: I0312 18:29:20.656186 29097 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 12 18:29:20.656372 master-0 kubenswrapper[29097]: I0312 18:29:20.656202 29097 kubelet.go:324] "Adding apiserver pod source" Mar 
12 18:29:20.656372 master-0 kubenswrapper[29097]: I0312 18:29:20.656222 29097 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 18:29:20.657545 master-0 kubenswrapper[29097]: I0312 18:29:20.657472 29097 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 12 18:29:20.657699 master-0 kubenswrapper[29097]: I0312 18:29:20.657671 29097 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 12 18:29:20.657977 master-0 kubenswrapper[29097]: I0312 18:29:20.657929 29097 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 12 18:29:20.658078 master-0 kubenswrapper[29097]: I0312 18:29:20.658059 29097 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 12 18:29:20.658133 master-0 kubenswrapper[29097]: I0312 18:29:20.658081 29097 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 12 18:29:20.658133 master-0 kubenswrapper[29097]: I0312 18:29:20.658090 29097 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 12 18:29:20.658133 master-0 kubenswrapper[29097]: I0312 18:29:20.658097 29097 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 12 18:29:20.658133 master-0 kubenswrapper[29097]: I0312 18:29:20.658103 29097 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 12 18:29:20.658133 master-0 kubenswrapper[29097]: I0312 18:29:20.658110 29097 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 12 18:29:20.658133 master-0 kubenswrapper[29097]: I0312 18:29:20.658116 29097 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 12 18:29:20.658133 master-0 kubenswrapper[29097]: I0312 18:29:20.658123 29097 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Mar 12 18:29:20.658133 master-0 kubenswrapper[29097]: I0312 18:29:20.658132 29097 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 12 18:29:20.658133 master-0 kubenswrapper[29097]: I0312 18:29:20.658139 29097 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 12 18:29:20.664439 master-0 kubenswrapper[29097]: I0312 18:29:20.658175 29097 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 12 18:29:20.664439 master-0 kubenswrapper[29097]: I0312 18:29:20.658188 29097 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 12 18:29:20.664439 master-0 kubenswrapper[29097]: I0312 18:29:20.658220 29097 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 12 18:29:20.664439 master-0 kubenswrapper[29097]: I0312 18:29:20.658790 29097 server.go:1280] "Started kubelet" Mar 12 18:29:20.664439 master-0 kubenswrapper[29097]: I0312 18:29:20.658996 29097 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 18:29:20.664439 master-0 kubenswrapper[29097]: I0312 18:29:20.659616 29097 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 18:29:20.664439 master-0 kubenswrapper[29097]: I0312 18:29:20.659907 29097 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 12 18:29:20.664439 master-0 kubenswrapper[29097]: I0312 18:29:20.662997 29097 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 18:29:20.660436 master-0 systemd[1]: Started Kubernetes Kubelet. 
Mar 12 18:29:20.669031 master-0 kubenswrapper[29097]: I0312 18:29:20.668951 29097 server.go:449] "Adding debug handlers to kubelet server" Mar 12 18:29:20.671056 master-0 kubenswrapper[29097]: I0312 18:29:20.670075 29097 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 12 18:29:20.672105 master-0 kubenswrapper[29097]: I0312 18:29:20.671892 29097 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 12 18:29:20.678231 master-0 kubenswrapper[29097]: E0312 18:29:20.678187 29097 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 12 18:29:20.679416 master-0 kubenswrapper[29097]: I0312 18:29:20.679376 29097 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 12 18:29:20.679541 master-0 kubenswrapper[29097]: I0312 18:29:20.679422 29097 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 18:29:20.679541 master-0 kubenswrapper[29097]: I0312 18:29:20.679471 29097 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-13 18:02:20 +0000 UTC, rotation deadline is 2026-03-13 11:33:42.27258916 +0000 UTC Mar 12 18:29:20.679541 master-0 kubenswrapper[29097]: I0312 18:29:20.679528 29097 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h4m21.593064049s for next certificate rotation Mar 12 18:29:20.679667 master-0 kubenswrapper[29097]: I0312 18:29:20.679650 29097 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 12 18:29:20.679667 master-0 kubenswrapper[29097]: I0312 18:29:20.679662 29097 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 12 18:29:20.679804 master-0 kubenswrapper[29097]: I0312 18:29:20.679773 29097 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 12 18:29:20.680625 
master-0 kubenswrapper[29097]: I0312 18:29:20.680581 29097 factory.go:153] Registering CRI-O factory Mar 12 18:29:20.680625 master-0 kubenswrapper[29097]: I0312 18:29:20.680621 29097 factory.go:221] Registration of the crio container factory successfully Mar 12 18:29:20.680800 master-0 kubenswrapper[29097]: I0312 18:29:20.680769 29097 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 12 18:29:20.680800 master-0 kubenswrapper[29097]: I0312 18:29:20.680794 29097 factory.go:55] Registering systemd factory Mar 12 18:29:20.680891 master-0 kubenswrapper[29097]: I0312 18:29:20.680807 29097 factory.go:221] Registration of the systemd container factory successfully Mar 12 18:29:20.680891 master-0 kubenswrapper[29097]: I0312 18:29:20.680835 29097 factory.go:103] Registering Raw factory Mar 12 18:29:20.680891 master-0 kubenswrapper[29097]: I0312 18:29:20.680860 29097 manager.go:1196] Started watching for new ooms in manager Mar 12 18:29:20.681327 master-0 kubenswrapper[29097]: I0312 18:29:20.681289 29097 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 12 18:29:20.681617 master-0 kubenswrapper[29097]: I0312 18:29:20.681580 29097 manager.go:319] Starting recovery of all containers Mar 12 18:29:20.692320 master-0 kubenswrapper[29097]: I0312 18:29:20.692222 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f1f60fa-d79d-4f31-b5bf-2ad333151537" volumeName="kubernetes.io/projected/9f1f60fa-d79d-4f31-b5bf-2ad333151537-kube-api-access-hqlfx" seLinuxMountContext="" Mar 12 18:29:20.692320 master-0 kubenswrapper[29097]: I0312 18:29:20.692297 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb529297-b3de-4167-a91e-0a63725b3b0f" 
volumeName="kubernetes.io/secret/fb529297-b3de-4167-a91e-0a63725b3b0f-etcd-client" seLinuxMountContext="" Mar 12 18:29:20.692320 master-0 kubenswrapper[29097]: I0312 18:29:20.692314 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0fb78c61-2051-42e2-8668-fa7404ccac43" volumeName="kubernetes.io/projected/0fb78c61-2051-42e2-8668-fa7404ccac43-kube-api-access-zsdjs" seLinuxMountContext="" Mar 12 18:29:20.692320 master-0 kubenswrapper[29097]: I0312 18:29:20.692326 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327" volumeName="kubernetes.io/secret/25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327-webhook-certs" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692338 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b2c6cd11-b1ed-4fed-a4ce-4eee0af20868" volumeName="kubernetes.io/projected/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-kube-api-access-md9dt" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692349 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e5fb0152-3efd-4000-bce3-fa90b75316ae" volumeName="kubernetes.io/configmap/e5fb0152-3efd-4000-bce3-fa90b75316ae-config" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692361 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64" volumeName="kubernetes.io/configmap/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-trusted-ca" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692372 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f1f60fa-d79d-4f31-b5bf-2ad333151537" 
volumeName="kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-client-certs" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692386 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab926874-9722-4e65-9084-27b2f9915450" volumeName="kubernetes.io/configmap/ab926874-9722-4e65-9084-27b2f9915450-config" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692400 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adb0dbbf-458d-46f5-b236-d4904e125418" volumeName="kubernetes.io/empty-dir/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-textfile" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692414 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1287cbb9-c9f6-48d2-9fda-f4464074e41b" volumeName="kubernetes.io/secret/1287cbb9-c9f6-48d2-9fda-f4464074e41b-cluster-storage-operator-serving-cert" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692424 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74eb1407-de29-42e5-9e6c-ce1bec3a9d80" volumeName="kubernetes.io/secret/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692436 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b41258c-ac1d-4e00-ac5e-732d85441f12" volumeName="kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-config" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692453 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e697746f-fb9e-4d10-ab61-33c68e62cc0d" volumeName="kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-config" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692464 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee4c1949-96b4-4444-9675-9df1d46f681e" volumeName="kubernetes.io/secret/ee4c1949-96b4-4444-9675-9df1d46f681e-cloud-controller-manager-operator-tls" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692494 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="51eb717b-d11f-4bc3-8df6-deb51d5889f3" volumeName="kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692506 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b2c6cd11-b1ed-4fed-a4ce-4eee0af20868" volumeName="kubernetes.io/configmap/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-cco-trusted-ca" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692533 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5e09875-4445-4584-94f0-243148307bb0" volumeName="kubernetes.io/empty-dir/f5e09875-4445-4584-94f0-243148307bb0-snapshots" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692557 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c016b1e-d47c-47d4-a15f-4160e7731c82" volumeName="kubernetes.io/secret/1c016b1e-d47c-47d4-a15f-4160e7731c82-serving-cert" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692569 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="455f0aad-add2-49d0-995c-f92467bce2d6" volumeName="kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-whereabouts-configmap" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692581 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652" volumeName="kubernetes.io/projected/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-kube-api-access-k59mb" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692624 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="062f1b21-2ffc-47da-8334-427c3b2a1a90" volumeName="kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-config" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692645 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c016b1e-d47c-47d4-a15f-4160e7731c82" volumeName="kubernetes.io/projected/1c016b1e-d47c-47d4-a15f-4160e7731c82-kube-api-access-clz8x" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692657 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64" volumeName="kubernetes.io/projected/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-kube-api-access-fdlxn" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692668 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="604044f4-9b0b-4747-827d-843f3cfa7077" volumeName="kubernetes.io/configmap/604044f4-9b0b-4747-827d-843f3cfa7077-auth-proxy-config" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692707 29097 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="223a548b-a3ad-40dd-82de-e3dbb7f3e4fa" volumeName="kubernetes.io/configmap/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-config" seLinuxMountContext="" Mar 12 18:29:20.692703 master-0 kubenswrapper[29097]: I0312 18:29:20.692722 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ad05507-e242-4ff8-ae80-c16ff9ee68e2" volumeName="kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.692733 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8dd13a7-10e5-431b-8d30-405dcfea02f5" volumeName="kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovnkube-config" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.692744 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e697746f-fb9e-4d10-ab61-33c68e62cc0d" volumeName="kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-service-ca" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.692754 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adb0dbbf-458d-46f5-b236-d4904e125418" volumeName="kubernetes.io/secret/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.692784 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be2da107-a419-423f-a657-44d681291f28" volumeName="kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-client-ca" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.692801 29097 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="47850839-bb4b-41e9-ac31-f1cabbb4926d" volumeName="kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.692813 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78c13011-7a79-445f-807c-4f5e75643549" volumeName="kubernetes.io/configmap/78c13011-7a79-445f-807c-4f5e75643549-metrics-client-ca" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.692823 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9717d467-af1a-4de0-88e0-c47ec4d12d6e" volumeName="kubernetes.io/projected/9717d467-af1a-4de0-88e0-c47ec4d12d6e-kube-api-access-kbzcs" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.692835 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f1f60fa-d79d-4f31-b5bf-2ad333151537" volumeName="kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-metrics-server-audit-profiles" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.692848 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652" volumeName="kubernetes.io/projected/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-ca-certs" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.692859 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3e5b8c8-a100-4880-a0b9-9c3989d4e739" volumeName="kubernetes.io/empty-dir/d3e5b8c8-a100-4880-a0b9-9c3989d4e739-utilities" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.692872 29097 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="3d77a98a-0176-4924-81d3-8e9890852b38" volumeName="kubernetes.io/configmap/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.692883 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d77a98a-0176-4924-81d3-8e9890852b38" volumeName="kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.692895 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="41c1bd85-369e-4341-9e80-8b4b248b5572" volumeName="kubernetes.io/configmap/41c1bd85-369e-4341-9e80-8b4b248b5572-metrics-client-ca" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.692905 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="41c1bd85-369e-4341-9e80-8b4b248b5572" volumeName="kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-tls" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.692917 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="518ffff8-8119-41be-8b76-ce49d5751254" volumeName="kubernetes.io/secret/518ffff8-8119-41be-8b76-ce49d5751254-stats-auth" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.692930 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e697746f-fb9e-4d10-ab61-33c68e62cc0d" volumeName="kubernetes.io/secret/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-client" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.692941 29097 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e94d098b-fbcc-4e85-b8ad-42f3a21c822c" volumeName="kubernetes.io/projected/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-kube-api-access-bttzm" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.692952 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0cc54e47-af53-448a-b1c9-043710890a32" volumeName="kubernetes.io/empty-dir/0cc54e47-af53-448a-b1c9-043710890a32-utilities" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.692964 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0cc54e47-af53-448a-b1c9-043710890a32" volumeName="kubernetes.io/empty-dir/0cc54e47-af53-448a-b1c9-043710890a32-catalog-content" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.692974 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327" volumeName="kubernetes.io/projected/25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327-kube-api-access-x6595" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.692987 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e22c7035-4b7a-48cb-9abb-db277b387842" volumeName="kubernetes.io/projected/e22c7035-4b7a-48cb-9abb-db277b387842-bound-sa-token" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.692998 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="38a4bf73-479e-4bbf-9aa3-639fc288c8bc" volumeName="kubernetes.io/configmap/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-cni-binary-copy" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693009 
29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4687cf53-55d7-42b7-b24d-e57da3989fd6" volumeName="kubernetes.io/configmap/4687cf53-55d7-42b7-b24d-e57da3989fd6-images" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693020 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="492e9833-4513-4f2f-b865-d05a8973fadc" volumeName="kubernetes.io/projected/492e9833-4513-4f2f-b865-d05a8973fadc-kube-api-access-5kn2k" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693031 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f" volumeName="kubernetes.io/projected/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-kube-api-access-9jgbv" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693047 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b41258c-ac1d-4e00-ac5e-732d85441f12" volumeName="kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-image-import-ca" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693059 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e697746f-fb9e-4d10-ab61-33c68e62cc0d" volumeName="kubernetes.io/projected/e697746f-fb9e-4d10-ab61-33c68e62cc0d-kube-api-access-vct98" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693071 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e697746f-fb9e-4d10-ab61-33c68e62cc0d" volumeName="kubernetes.io/secret/e697746f-fb9e-4d10-ab61-33c68e62cc0d-serving-cert" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 
18:29:20.693086 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="062f1b21-2ffc-47da-8334-427c3b2a1a90" volumeName="kubernetes.io/projected/062f1b21-2ffc-47da-8334-427c3b2a1a90-kube-api-access-jn9nf" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693100 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="604044f4-9b0b-4747-827d-843f3cfa7077" volumeName="kubernetes.io/configmap/604044f4-9b0b-4747-827d-843f3cfa7077-images" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693113 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1e2340b-ebca-40de-b1e0-8133999cd860" volumeName="kubernetes.io/projected/a1e2340b-ebca-40de-b1e0-8133999cd860-kube-api-access-6vpbp" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693125 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be2da107-a419-423f-a657-44d681291f28" volumeName="kubernetes.io/projected/be2da107-a419-423f-a657-44d681291f28-kube-api-access-jfp84" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693135 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b41258c-ac1d-4e00-ac5e-732d85441f12" volumeName="kubernetes.io/projected/9b41258c-ac1d-4e00-ac5e-732d85441f12-kube-api-access-7lmj2" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693147 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adb0dbbf-458d-46f5-b236-d4904e125418" volumeName="kubernetes.io/configmap/adb0dbbf-458d-46f5-b236-d4904e125418-metrics-client-ca" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 
kubenswrapper[29097]: I0312 18:29:20.693159 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8dd13a7-10e5-431b-8d30-405dcfea02f5" volumeName="kubernetes.io/secret/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovn-node-metrics-cert" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693170 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37cd9c0a-697e-4e67-932b-b331ff77c8c0" volumeName="kubernetes.io/secret/37cd9c0a-697e-4e67-932b-b331ff77c8c0-serving-cert" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693359 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74eb1407-de29-42e5-9e6c-ce1bec3a9d80" volumeName="kubernetes.io/configmap/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-env-overrides" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693373 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78c13011-7a79-445f-807c-4f5e75643549" volumeName="kubernetes.io/secret/78c13011-7a79-445f-807c-4f5e75643549-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693384 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d4ae1240-e04e-48e9-88df-9f1a53508da7" volumeName="kubernetes.io/configmap/d4ae1240-e04e-48e9-88df-9f1a53508da7-config" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693395 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ad05507-e242-4ff8-ae80-c16ff9ee68e2" volumeName="kubernetes.io/projected/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-kube-api-access-th8tc" seLinuxMountContext="" Mar 12 
18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693405 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c241720-7815-40fd-8d4a-1685a43b5893" volumeName="kubernetes.io/projected/8c241720-7815-40fd-8d4a-1685a43b5893-kube-api-access-l8qw4" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693417 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="455f0aad-add2-49d0-995c-f92467bce2d6" volumeName="kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-cni-sysctl-allowlist" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693431 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45aa4887-c913-4ece-ae34-fcde33832621" volumeName="kubernetes.io/projected/45aa4887-c913-4ece-ae34-fcde33832621-kube-api-access-4vr66" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693454 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bce831df-c604-4608-a24e-b14d62c5287a" volumeName="kubernetes.io/projected/bce831df-c604-4608-a24e-b14d62c5287a-kube-api-access-wfjj6" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693471 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee4c1949-96b4-4444-9675-9df1d46f681e" volumeName="kubernetes.io/configmap/ee4c1949-96b4-4444-9675-9df1d46f681e-images" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693481 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="38a4bf73-479e-4bbf-9aa3-639fc288c8bc" volumeName="kubernetes.io/projected/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-kube-api-access-2pn9h" 
seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693492 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4519000b-e475-4c26-a1c0-bf05cd9c242b" volumeName="kubernetes.io/projected/4519000b-e475-4c26-a1c0-bf05cd9c242b-kube-api-access-5x57x" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693503 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4687cf53-55d7-42b7-b24d-e57da3989fd6" volumeName="kubernetes.io/projected/4687cf53-55d7-42b7-b24d-e57da3989fd6-kube-api-access-68xhl" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693534 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe" volumeName="kubernetes.io/configmap/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-env-overrides" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693558 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="223a548b-a3ad-40dd-82de-e3dbb7f3e4fa" volumeName="kubernetes.io/secret/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-serving-cert" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693569 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be2da107-a419-423f-a657-44d681291f28" volumeName="kubernetes.io/secret/be2da107-a419-423f-a657-44d681291f28-serving-cert" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693580 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3a2cda2-b70f-4128-a1be-48503f5aad6d" 
volumeName="kubernetes.io/projected/f3a2cda2-b70f-4128-a1be-48503f5aad6d-kube-api-access-tcvfv" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693591 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="236f2886-bb69-49a7-9471-36454fd1cbd3" volumeName="kubernetes.io/secret/236f2886-bb69-49a7-9471-36454fd1cbd3-serving-cert" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693602 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe" volumeName="kubernetes.io/secret/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-webhook-cert" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693613 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b41258c-ac1d-4e00-ac5e-732d85441f12" volumeName="kubernetes.io/secret/9b41258c-ac1d-4e00-ac5e-732d85441f12-etcd-client" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693624 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb529297-b3de-4167-a91e-0a63725b3b0f" volumeName="kubernetes.io/projected/fb529297-b3de-4167-a91e-0a63725b3b0f-kube-api-access-tmzf4" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693636 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e5fb0152-3efd-4000-bce3-fa90b75316ae" volumeName="kubernetes.io/secret/e5fb0152-3efd-4000-bce3-fa90b75316ae-cert" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693648 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34cbf061-4c76-476e-bed9-0a133c744862" 
volumeName="kubernetes.io/secret/34cbf061-4c76-476e-bed9-0a133c744862-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693658 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37cd9c0a-697e-4e67-932b-b331ff77c8c0" volumeName="kubernetes.io/projected/37cd9c0a-697e-4e67-932b-b331ff77c8c0-kube-api-access-pfpb9" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693670 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b41258c-ac1d-4e00-ac5e-732d85441f12" volumeName="kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-etcd-serving-ca" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693682 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652" volumeName="kubernetes.io/empty-dir/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-cache" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693693 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b38e7fcd-8f7a-4d4f-8702-7ef205261054" volumeName="kubernetes.io/secret/b38e7fcd-8f7a-4d4f-8702-7ef205261054-webhook-cert" seLinuxMountContext="" Mar 12 18:29:20.694009 master-0 kubenswrapper[29097]: I0312 18:29:20.693704 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8dd13a7-10e5-431b-8d30-405dcfea02f5" volumeName="kubernetes.io/projected/b8dd13a7-10e5-431b-8d30-405dcfea02f5-kube-api-access-7rhmv" seLinuxMountContext="" Mar 12 18:29:20.697564 master-0 kubenswrapper[29097]: I0312 18:29:20.693715 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="be2da107-a419-423f-a657-44d681291f28" volumeName="kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-config" seLinuxMountContext="" Mar 12 18:29:20.697645 master-0 kubenswrapper[29097]: I0312 18:29:20.697581 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5e09875-4445-4584-94f0-243148307bb0" volumeName="kubernetes.io/secret/f5e09875-4445-4584-94f0-243148307bb0-serving-cert" seLinuxMountContext="" Mar 12 18:29:20.697645 master-0 kubenswrapper[29097]: I0312 18:29:20.697622 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="266b9f4f-3fb4-474d-84df-0a6c687c7e9a" volumeName="kubernetes.io/secret/266b9f4f-3fb4-474d-84df-0a6c687c7e9a-metrics-tls" seLinuxMountContext="" Mar 12 18:29:20.697768 master-0 kubenswrapper[29097]: I0312 18:29:20.697646 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="38a4bf73-479e-4bbf-9aa3-639fc288c8bc" volumeName="kubernetes.io/configmap/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-daemon-config" seLinuxMountContext="" Mar 12 18:29:20.697768 master-0 kubenswrapper[29097]: I0312 18:29:20.697682 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8fe3d699-023e-4de7-8d42-6c9d8a5e68f3" volumeName="kubernetes.io/configmap/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3-signing-cabundle" seLinuxMountContext="" Mar 12 18:29:20.697768 master-0 kubenswrapper[29097]: I0312 18:29:20.697705 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="999f02f6-e9b8-4d4b-ac35-b8b43a931cfc" volumeName="kubernetes.io/empty-dir/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-tmp" seLinuxMountContext="" Mar 12 18:29:20.697768 master-0 kubenswrapper[29097]: I0312 18:29:20.697729 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c" volumeName="kubernetes.io/empty-dir/b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c-catalog-content" seLinuxMountContext="" Mar 12 18:29:20.697768 master-0 kubenswrapper[29097]: I0312 18:29:20.697764 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d1b3859c-20a1-4a1c-8508-86ed843768f5" volumeName="kubernetes.io/secret/d1b3859c-20a1-4a1c-8508-86ed843768f5-catalogserver-certs" seLinuxMountContext="" Mar 12 18:29:20.698022 master-0 kubenswrapper[29097]: I0312 18:29:20.697799 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d94dc349-c5cb-4f12-8e48-867030af4981" volumeName="kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls" seLinuxMountContext="" Mar 12 18:29:20.698022 master-0 kubenswrapper[29097]: I0312 18:29:20.697971 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b38e7fcd-8f7a-4d4f-8702-7ef205261054" volumeName="kubernetes.io/projected/b38e7fcd-8f7a-4d4f-8702-7ef205261054-kube-api-access-zp5gk" seLinuxMountContext="" Mar 12 18:29:20.698022 master-0 kubenswrapper[29097]: I0312 18:29:20.698004 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4519000b-e475-4c26-a1c0-bf05cd9c242b" volumeName="kubernetes.io/empty-dir/4519000b-e475-4c26-a1c0-bf05cd9c242b-catalog-content" seLinuxMountContext="" Mar 12 18:29:20.698022 master-0 kubenswrapper[29097]: I0312 18:29:20.698024 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee55b576-6b8d-4217-b5a7-93b023a1e885" volumeName="kubernetes.io/configmap/ee55b576-6b8d-4217-b5a7-93b023a1e885-mcc-auth-proxy-config" seLinuxMountContext="" Mar 12 18:29:20.698205 master-0 kubenswrapper[29097]: I0312 18:29:20.698045 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="030160af-c915-4f00-903a-1c4b5c2b719a" volumeName="kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-config" seLinuxMountContext="" Mar 12 18:29:20.698205 master-0 kubenswrapper[29097]: I0312 18:29:20.698059 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="604044f4-9b0b-4747-827d-843f3cfa7077" volumeName="kubernetes.io/secret/604044f4-9b0b-4747-827d-843f3cfa7077-proxy-tls" seLinuxMountContext="" Mar 12 18:29:20.698205 master-0 kubenswrapper[29097]: I0312 18:29:20.698087 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d1b3859c-20a1-4a1c-8508-86ed843768f5" volumeName="kubernetes.io/empty-dir/d1b3859c-20a1-4a1c-8508-86ed843768f5-cache" seLinuxMountContext="" Mar 12 18:29:20.698205 master-0 kubenswrapper[29097]: I0312 18:29:20.698130 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0cc54e47-af53-448a-b1c9-043710890a32" volumeName="kubernetes.io/projected/0cc54e47-af53-448a-b1c9-043710890a32-kube-api-access-bdc26" seLinuxMountContext="" Mar 12 18:29:20.698205 master-0 kubenswrapper[29097]: I0312 18:29:20.698152 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="223a548b-a3ad-40dd-82de-e3dbb7f3e4fa" volumeName="kubernetes.io/projected/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-kube-api-access-s55hv" seLinuxMountContext="" Mar 12 18:29:20.698205 master-0 kubenswrapper[29097]: I0312 18:29:20.698175 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4048e453-a983-4708-89b6-a81af0067e29" volumeName="kubernetes.io/projected/4048e453-a983-4708-89b6-a81af0067e29-kube-api-access" seLinuxMountContext="" Mar 12 18:29:20.698205 master-0 kubenswrapper[29097]: I0312 18:29:20.698194 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4687cf53-55d7-42b7-b24d-e57da3989fd6" volumeName="kubernetes.io/secret/4687cf53-55d7-42b7-b24d-e57da3989fd6-machine-api-operator-tls" seLinuxMountContext="" Mar 12 18:29:20.698459 master-0 kubenswrapper[29097]: I0312 18:29:20.698219 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64" volumeName="kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics" seLinuxMountContext="" Mar 12 18:29:20.698459 master-0 kubenswrapper[29097]: I0312 18:29:20.698236 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78c13011-7a79-445f-807c-4f5e75643549" volumeName="kubernetes.io/projected/78c13011-7a79-445f-807c-4f5e75643549-kube-api-access-bmntw" seLinuxMountContext="" Mar 12 18:29:20.698459 master-0 kubenswrapper[29097]: I0312 18:29:20.698256 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b41258c-ac1d-4e00-ac5e-732d85441f12" volumeName="kubernetes.io/secret/9b41258c-ac1d-4e00-ac5e-732d85441f12-encryption-config" seLinuxMountContext="" Mar 12 18:29:20.698459 master-0 kubenswrapper[29097]: I0312 18:29:20.698271 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="41c1bd85-369e-4341-9e80-8b4b248b5572" volumeName="kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Mar 12 18:29:20.698459 master-0 kubenswrapper[29097]: I0312 18:29:20.698293 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="518ffff8-8119-41be-8b76-ce49d5751254" volumeName="kubernetes.io/secret/518ffff8-8119-41be-8b76-ce49d5751254-metrics-certs" seLinuxMountContext="" Mar 12 18:29:20.698459 master-0 kubenswrapper[29097]: I0312 18:29:20.698331 29097 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="74eb1407-de29-42e5-9e6c-ce1bec3a9d80" volumeName="kubernetes.io/configmap/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-ovnkube-config" seLinuxMountContext="" Mar 12 18:29:20.698459 master-0 kubenswrapper[29097]: I0312 18:29:20.698350 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f1f60fa-d79d-4f31-b5bf-2ad333151537" volumeName="kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-server-tls" seLinuxMountContext="" Mar 12 18:29:20.698459 master-0 kubenswrapper[29097]: I0312 18:29:20.698363 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aee40f88-83e4-45c8-8331-969943f9f9aa" volumeName="kubernetes.io/configmap/aee40f88-83e4-45c8-8331-969943f9f9aa-auth-proxy-config" seLinuxMountContext="" Mar 12 18:29:20.698459 master-0 kubenswrapper[29097]: I0312 18:29:20.698377 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4048e453-a983-4708-89b6-a81af0067e29" volumeName="kubernetes.io/secret/4048e453-a983-4708-89b6-a81af0067e29-serving-cert" seLinuxMountContext="" Mar 12 18:29:20.698459 master-0 kubenswrapper[29097]: I0312 18:29:20.698394 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="518ffff8-8119-41be-8b76-ce49d5751254" volumeName="kubernetes.io/secret/518ffff8-8119-41be-8b76-ce49d5751254-default-certificate" seLinuxMountContext="" Mar 12 18:29:20.698459 master-0 kubenswrapper[29097]: I0312 18:29:20.698408 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27" volumeName="kubernetes.io/projected/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-kube-api-access-ggsdx" seLinuxMountContext="" Mar 12 18:29:20.698459 master-0 kubenswrapper[29097]: I0312 18:29:20.698424 29097 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="fb529297-b3de-4167-a91e-0a63725b3b0f" volumeName="kubernetes.io/configmap/fb529297-b3de-4167-a91e-0a63725b3b0f-etcd-serving-ca" seLinuxMountContext="" Mar 12 18:29:20.698459 master-0 kubenswrapper[29097]: I0312 18:29:20.698437 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb529297-b3de-4167-a91e-0a63725b3b0f" volumeName="kubernetes.io/secret/fb529297-b3de-4167-a91e-0a63725b3b0f-serving-cert" seLinuxMountContext="" Mar 12 18:29:20.698459 master-0 kubenswrapper[29097]: I0312 18:29:20.698449 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="236f2886-bb69-49a7-9471-36454fd1cbd3" volumeName="kubernetes.io/configmap/236f2886-bb69-49a7-9471-36454fd1cbd3-config" seLinuxMountContext="" Mar 12 18:29:20.698459 master-0 kubenswrapper[29097]: I0312 18:29:20.698465 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab926874-9722-4e65-9084-27b2f9915450" volumeName="kubernetes.io/secret/ab926874-9722-4e65-9084-27b2f9915450-serving-cert" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698477 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab926874-9722-4e65-9084-27b2f9915450" volumeName="kubernetes.io/projected/ab926874-9722-4e65-9084-27b2f9915450-kube-api-access" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698494 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3a2cda2-b70f-4128-a1be-48503f5aad6d" volumeName="kubernetes.io/empty-dir/f3a2cda2-b70f-4128-a1be-48503f5aad6d-operand-assets" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698506 29097 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="062f1b21-2ffc-47da-8334-427c3b2a1a90" volumeName="kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-service-ca-bundle" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698551 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="455f0aad-add2-49d0-995c-f92467bce2d6" volumeName="kubernetes.io/projected/455f0aad-add2-49d0-995c-f92467bce2d6-kube-api-access-pxsgv" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698569 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e22c7035-4b7a-48cb-9abb-db277b387842" volumeName="kubernetes.io/configmap/e22c7035-4b7a-48cb-9abb-db277b387842-trusted-ca" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698580 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed" volumeName="kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698596 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6" volumeName="kubernetes.io/projected/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-kube-api-access-h65dg" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698613 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b41258c-ac1d-4e00-ac5e-732d85441f12" volumeName="kubernetes.io/secret/9b41258c-ac1d-4e00-ac5e-732d85441f12-serving-cert" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698626 29097 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27" volumeName="kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698644 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5e09875-4445-4584-94f0-243148307bb0" volumeName="kubernetes.io/configmap/f5e09875-4445-4584-94f0-243148307bb0-service-ca-bundle" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698656 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d1b3859c-20a1-4a1c-8508-86ed843768f5" volumeName="kubernetes.io/projected/d1b3859c-20a1-4a1c-8508-86ed843768f5-ca-certs" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698671 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="055f5c67-f512-4510-99c5-e194944b0599" volumeName="kubernetes.io/projected/055f5c67-f512-4510-99c5-e194944b0599-kube-api-access-tkt7d" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698722 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d77a98a-0176-4924-81d3-8e9890852b38" volumeName="kubernetes.io/empty-dir/3d77a98a-0176-4924-81d3-8e9890852b38-volume-directive-shadow" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698736 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52d3dc87-0bf8-4a62-a6fc-0ffa6060c6a8" volumeName="kubernetes.io/secret/52d3dc87-0bf8-4a62-a6fc-0ffa6060c6a8-tls-certificates" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698757 29097 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="266b9f4f-3fb4-474d-84df-0a6c687c7e9a" volumeName="kubernetes.io/configmap/266b9f4f-3fb4-474d-84df-0a6c687c7e9a-config-volume" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698771 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="266b9f4f-3fb4-474d-84df-0a6c687c7e9a" volumeName="kubernetes.io/projected/266b9f4f-3fb4-474d-84df-0a6c687c7e9a-kube-api-access-6tmqs" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698784 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="455f0aad-add2-49d0-995c-f92467bce2d6" volumeName="kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-cni-binary-copy" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698800 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adb0dbbf-458d-46f5-b236-d4904e125418" volumeName="kubernetes.io/projected/adb0dbbf-458d-46f5-b236-d4904e125418-kube-api-access-52svc" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698813 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b38e7fcd-8f7a-4d4f-8702-7ef205261054" volumeName="kubernetes.io/secret/b38e7fcd-8f7a-4d4f-8702-7ef205261054-apiservice-cert" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698832 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d77a98a-0176-4924-81d3-8e9890852b38" volumeName="kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-tls" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698845 29097 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6" volumeName="kubernetes.io/secret/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-certs" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698859 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6" volumeName="kubernetes.io/secret/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-node-bootstrap-token" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698876 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d4ae1240-e04e-48e9-88df-9f1a53508da7" volumeName="kubernetes.io/projected/d4ae1240-e04e-48e9-88df-9f1a53508da7-kube-api-access" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698889 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb529297-b3de-4167-a91e-0a63725b3b0f" volumeName="kubernetes.io/secret/fb529297-b3de-4167-a91e-0a63725b3b0f-encryption-config" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698907 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe" volumeName="kubernetes.io/projected/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-kube-api-access-tdlcw" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698940 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c" volumeName="kubernetes.io/empty-dir/b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c-utilities" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698956 29097 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c" volumeName="kubernetes.io/projected/b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c-kube-api-access-xmvnh" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698974 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="236f2886-bb69-49a7-9471-36454fd1cbd3" volumeName="kubernetes.io/projected/236f2886-bb69-49a7-9471-36454fd1cbd3-kube-api-access-b6ggg" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.698989 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="604044f4-9b0b-4747-827d-843f3cfa7077" volumeName="kubernetes.io/projected/604044f4-9b0b-4747-827d-843f3cfa7077-kube-api-access-fqzmm" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.699030 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e5fb0152-3efd-4000-bce3-fa90b75316ae" volumeName="kubernetes.io/secret/e5fb0152-3efd-4000-bce3-fa90b75316ae-cluster-baremetal-operator-tls" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.699046 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3a2cda2-b70f-4128-a1be-48503f5aad6d" volumeName="kubernetes.io/secret/f3a2cda2-b70f-4128-a1be-48503f5aad6d-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.699059 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="030160af-c915-4f00-903a-1c4b5c2b719a" volumeName="kubernetes.io/secret/030160af-c915-4f00-903a-1c4b5c2b719a-machine-approver-tls" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.699075 29097 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33feec78-4592-4343-965b-aa1b7044fcf3" volumeName="kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.699089 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="492e9833-4513-4f2f-b865-d05a8973fadc" volumeName="kubernetes.io/configmap/492e9833-4513-4f2f-b865-d05a8973fadc-mcd-auth-proxy-config" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.699103 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c016b1e-d47c-47d4-a15f-4160e7731c82" volumeName="kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-client-ca" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.699118 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed" volumeName="kubernetes.io/projected/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-kube-api-access-wlf77" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.699131 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4519000b-e475-4c26-a1c0-bf05cd9c242b" volumeName="kubernetes.io/empty-dir/4519000b-e475-4c26-a1c0-bf05cd9c242b-utilities" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.699147 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3e5b8c8-a100-4880-a0b9-9c3989d4e739" volumeName="kubernetes.io/empty-dir/d3e5b8c8-a100-4880-a0b9-9c3989d4e739-catalog-content" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.699160 
29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3e5b8c8-a100-4880-a0b9-9c3989d4e739" volumeName="kubernetes.io/projected/d3e5b8c8-a100-4880-a0b9-9c3989d4e739-kube-api-access-jrg6p" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.699178 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5e09875-4445-4584-94f0-243148307bb0" volumeName="kubernetes.io/configmap/f5e09875-4445-4584-94f0-243148307bb0-trusted-ca-bundle" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.699224 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed" volumeName="kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert" seLinuxMountContext="" Mar 12 18:29:20.699176 master-0 kubenswrapper[29097]: I0312 18:29:20.699242 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d77a98a-0176-4924-81d3-8e9890852b38" volumeName="kubernetes.io/configmap/3d77a98a-0176-4924-81d3-8e9890852b38-metrics-client-ca" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699262 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="999f02f6-e9b8-4d4b-ac35-b8b43a931cfc" volumeName="kubernetes.io/empty-dir/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-tuned" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699275 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f1f60fa-d79d-4f31-b5bf-2ad333151537" volumeName="kubernetes.io/empty-dir/9f1f60fa-d79d-4f31-b5bf-2ad333151537-audit-log" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699288 29097 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adb0dbbf-458d-46f5-b236-d4904e125418" volumeName="kubernetes.io/secret/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-tls" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699304 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d4ae1240-e04e-48e9-88df-9f1a53508da7" volumeName="kubernetes.io/secret/d4ae1240-e04e-48e9-88df-9f1a53508da7-serving-cert" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699317 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="062f1b21-2ffc-47da-8334-427c3b2a1a90" volumeName="kubernetes.io/secret/062f1b21-2ffc-47da-8334-427c3b2a1a90-serving-cert" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699334 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34cbf061-4c76-476e-bed9-0a133c744862" volumeName="kubernetes.io/projected/34cbf061-4c76-476e-bed9-0a133c744862-kube-api-access-gmsnk" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699368 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b48f8fd-2efe-44e3-a6d7-c71358b83a2f" volumeName="kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699382 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8fe3d699-023e-4de7-8d42-6c9d8a5e68f3" volumeName="kubernetes.io/projected/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3-kube-api-access-xtrvs" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699405 29097 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1e2340b-ebca-40de-b1e0-8133999cd860" volumeName="kubernetes.io/secret/a1e2340b-ebca-40de-b1e0-8133999cd860-serving-cert" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699421 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8dd13a7-10e5-431b-8d30-405dcfea02f5" volumeName="kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-env-overrides" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699440 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0fb78c61-2051-42e2-8668-fa7404ccac43" volumeName="kubernetes.io/secret/0fb78c61-2051-42e2-8668-fa7404ccac43-samples-operator-tls" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699454 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e5fb0152-3efd-4000-bce3-fa90b75316ae" volumeName="kubernetes.io/projected/e5fb0152-3efd-4000-bce3-fa90b75316ae-kube-api-access-pkftr" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699467 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="030160af-c915-4f00-903a-1c4b5c2b719a" volumeName="kubernetes.io/projected/030160af-c915-4f00-903a-1c4b5c2b719a-kube-api-access-9p4dz" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699489 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="055f5c67-f512-4510-99c5-e194944b0599" volumeName="kubernetes.io/configmap/055f5c67-f512-4510-99c5-e194944b0599-config" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699502 29097 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37cd9c0a-697e-4e67-932b-b331ff77c8c0" volumeName="kubernetes.io/empty-dir/37cd9c0a-697e-4e67-932b-b331ff77c8c0-available-featuregates" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699588 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b38e7fcd-8f7a-4d4f-8702-7ef205261054" volumeName="kubernetes.io/empty-dir/b38e7fcd-8f7a-4d4f-8702-7ef205261054-tmpfs" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699636 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8dd13a7-10e5-431b-8d30-405dcfea02f5" volumeName="kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovnkube-script-lib" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699653 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee55b576-6b8d-4217-b5a7-93b023a1e885" volumeName="kubernetes.io/secret/ee55b576-6b8d-4217-b5a7-93b023a1e885-proxy-tls" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699717 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78c13011-7a79-445f-807c-4f5e75643549" volumeName="kubernetes.io/secret/78c13011-7a79-445f-807c-4f5e75643549-openshift-state-metrics-tls" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699735 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b648b6de-59a6-42da-84e2-77ea0264ae25" volumeName="kubernetes.io/projected/b648b6de-59a6-42da-84e2-77ea0264ae25-kube-api-access-7n4d5" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699793 
29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f1f60fa-d79d-4f31-b5bf-2ad333151537" volumeName="kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-client-ca-bundle" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699830 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e720e1d0-5a6d-4b76-8b25-5963e24950f5" volumeName="kubernetes.io/configmap/e720e1d0-5a6d-4b76-8b25-5963e24950f5-config" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699868 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4048e453-a983-4708-89b6-a81af0067e29" volumeName="kubernetes.io/configmap/4048e453-a983-4708-89b6-a81af0067e29-service-ca" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699892 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b41258c-ac1d-4e00-ac5e-732d85441f12" volumeName="kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-audit" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699905 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aee40f88-83e4-45c8-8331-969943f9f9aa" volumeName="kubernetes.io/projected/aee40f88-83e4-45c8-8331-969943f9f9aa-kube-api-access-th72r" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699917 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d92dddc8-a810-43f5-8beb-32d1c8ad8381" volumeName="kubernetes.io/projected/d92dddc8-a810-43f5-8beb-32d1c8ad8381-kube-api-access-l22gw" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699962 29097 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="030160af-c915-4f00-903a-1c4b5c2b719a" volumeName="kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-auth-proxy-config" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699975 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1287cbb9-c9f6-48d2-9fda-f4464074e41b" volumeName="kubernetes.io/projected/1287cbb9-c9f6-48d2-9fda-f4464074e41b-kube-api-access-hjz8k" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.699994 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed" volumeName="kubernetes.io/configmap/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-trusted-ca" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700034 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d77a98a-0176-4924-81d3-8e9890852b38" volumeName="kubernetes.io/projected/3d77a98a-0176-4924-81d3-8e9890852b38-kube-api-access-f72ng" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700052 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="518ffff8-8119-41be-8b76-ce49d5751254" volumeName="kubernetes.io/configmap/518ffff8-8119-41be-8b76-ce49d5751254-service-ca-bundle" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700071 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aee40f88-83e4-45c8-8331-969943f9f9aa" volumeName="kubernetes.io/secret/aee40f88-83e4-45c8-8331-969943f9f9aa-cert" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700111 29097 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6d288e3-8e73-44d2-874d-64c6c98dd991" volumeName="kubernetes.io/secret/b6d288e3-8e73-44d2-874d-64c6c98dd991-metrics-tls" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700133 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e22c7035-4b7a-48cb-9abb-db277b387842" volumeName="kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700152 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c016b1e-d47c-47d4-a15f-4160e7731c82" volumeName="kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-proxy-ca-bundles" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700166 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="41c1bd85-369e-4341-9e80-8b4b248b5572" volumeName="kubernetes.io/projected/41c1bd85-369e-4341-9e80-8b4b248b5572-kube-api-access-q7pjn" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700210 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d92dddc8-a810-43f5-8beb-32d1c8ad8381" volumeName="kubernetes.io/configmap/d92dddc8-a810-43f5-8beb-32d1c8ad8381-iptables-alerter-script" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700224 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e22c7035-4b7a-48cb-9abb-db277b387842" volumeName="kubernetes.io/projected/e22c7035-4b7a-48cb-9abb-db277b387842-kube-api-access-pmxc2" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 
18:29:20.700241 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4687cf53-55d7-42b7-b24d-e57da3989fd6" volumeName="kubernetes.io/configmap/4687cf53-55d7-42b7-b24d-e57da3989fd6-config" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700277 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74eb1407-de29-42e5-9e6c-ce1bec3a9d80" volumeName="kubernetes.io/projected/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-kube-api-access-b6ggc" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700291 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="492e9833-4513-4f2f-b865-d05a8973fadc" volumeName="kubernetes.io/secret/492e9833-4513-4f2f-b865-d05a8973fadc-proxy-tls" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700307 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d94dc349-c5cb-4f12-8e48-867030af4981" volumeName="kubernetes.io/projected/d94dc349-c5cb-4f12-8e48-867030af4981-kube-api-access-zjmcv" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700360 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e94d098b-fbcc-4e85-b8ad-42f3a21c822c" volumeName="kubernetes.io/configmap/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-telemetry-config" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700378 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee4c1949-96b4-4444-9675-9df1d46f681e" volumeName="kubernetes.io/configmap/ee4c1949-96b4-4444-9675-9df1d46f681e-auth-proxy-config" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 
18:29:20.700398 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb529297-b3de-4167-a91e-0a63725b3b0f" volumeName="kubernetes.io/configmap/fb529297-b3de-4167-a91e-0a63725b3b0f-audit-policies" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700419 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f1f60fa-d79d-4f31-b5bf-2ad333151537" volumeName="kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700436 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee4c1949-96b4-4444-9675-9df1d46f681e" volumeName="kubernetes.io/projected/ee4c1949-96b4-4444-9675-9df1d46f681e-kube-api-access-x4wsx" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700455 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="062f1b21-2ffc-47da-8334-427c3b2a1a90" volumeName="kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-trusted-ca-bundle" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700475 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe" volumeName="kubernetes.io/configmap/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-ovnkube-identity-cm" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700491 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1e2340b-ebca-40de-b1e0-8133999cd860" volumeName="kubernetes.io/configmap/a1e2340b-ebca-40de-b1e0-8133999cd860-config" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 
kubenswrapper[29097]: I0312 18:29:20.700524 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e720e1d0-5a6d-4b76-8b25-5963e24950f5" volumeName="kubernetes.io/projected/e720e1d0-5a6d-4b76-8b25-5963e24950f5-kube-api-access" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700544 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="055f5c67-f512-4510-99c5-e194944b0599" volumeName="kubernetes.io/secret/055f5c67-f512-4510-99c5-e194944b0599-serving-cert" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700558 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="51eb717b-d11f-4bc3-8df6-deb51d5889f3" volumeName="kubernetes.io/projected/51eb717b-d11f-4bc3-8df6-deb51d5889f3-kube-api-access-gbnx8" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700577 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b2c6cd11-b1ed-4fed-a4ce-4eee0af20868" volumeName="kubernetes.io/secret/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-cloud-credential-operator-serving-cert" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700596 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d94dc349-c5cb-4f12-8e48-867030af4981" volumeName="kubernetes.io/projected/d94dc349-c5cb-4f12-8e48-867030af4981-bound-sa-token" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700609 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5e09875-4445-4584-94f0-243148307bb0" volumeName="kubernetes.io/projected/f5e09875-4445-4584-94f0-243148307bb0-kube-api-access-clsd9" seLinuxMountContext="" Mar 12 
18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700706 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6d288e3-8e73-44d2-874d-64c6c98dd991" volumeName="kubernetes.io/projected/b6d288e3-8e73-44d2-874d-64c6c98dd991-kube-api-access-vdb9w" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700726 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb529297-b3de-4167-a91e-0a63725b3b0f" volumeName="kubernetes.io/configmap/fb529297-b3de-4167-a91e-0a63725b3b0f-trusted-ca-bundle" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700751 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="47850839-bb4b-41e9-ac31-f1cabbb4926d" volumeName="kubernetes.io/projected/47850839-bb4b-41e9-ac31-f1cabbb4926d-kube-api-access-krrkl" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700771 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8fe3d699-023e-4de7-8d42-6c9d8a5e68f3" volumeName="kubernetes.io/secret/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3-signing-key" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700787 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d1b3859c-20a1-4a1c-8508-86ed843768f5" volumeName="kubernetes.io/projected/d1b3859c-20a1-4a1c-8508-86ed843768f5-kube-api-access-gw4m5" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700807 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d94dc349-c5cb-4f12-8e48-867030af4981" volumeName="kubernetes.io/configmap/d94dc349-c5cb-4f12-8e48-867030af4981-trusted-ca" seLinuxMountContext="" 
Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700822 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e5fb0152-3efd-4000-bce3-fa90b75316ae" volumeName="kubernetes.io/configmap/e5fb0152-3efd-4000-bce3-fa90b75316ae-images" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700843 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e697746f-fb9e-4d10-ab61-33c68e62cc0d" volumeName="kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-ca" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700857 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e94d098b-fbcc-4e85-b8ad-42f3a21c822c" volumeName="kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700875 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c016b1e-d47c-47d4-a15f-4160e7731c82" volumeName="kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-config" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700888 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="999f02f6-e9b8-4d4b-ac35-b8b43a931cfc" volumeName="kubernetes.io/projected/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-kube-api-access-2lj7z" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700903 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b41258c-ac1d-4e00-ac5e-732d85441f12" volumeName="kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-trusted-ca-bundle" seLinuxMountContext="" Mar 12 
18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700921 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e720e1d0-5a6d-4b76-8b25-5963e24950f5" volumeName="kubernetes.io/secret/e720e1d0-5a6d-4b76-8b25-5963e24950f5-serving-cert" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700934 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee55b576-6b8d-4217-b5a7-93b023a1e885" volumeName="kubernetes.io/projected/ee55b576-6b8d-4217-b5a7-93b023a1e885-kube-api-access-j5lf8" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700950 29097 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="518ffff8-8119-41be-8b76-ce49d5751254" volumeName="kubernetes.io/projected/518ffff8-8119-41be-8b76-ce49d5751254-kube-api-access-4glbr" seLinuxMountContext="" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700962 29097 reconstruct.go:97] "Volume reconstruction finished" Mar 12 18:29:20.704111 master-0 kubenswrapper[29097]: I0312 18:29:20.700971 29097 reconciler.go:26] "Reconciler: start to sync state" Mar 12 18:29:20.708867 master-0 kubenswrapper[29097]: I0312 18:29:20.705990 29097 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 12 18:29:20.718124 master-0 kubenswrapper[29097]: I0312 18:29:20.718044 29097 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 12 18:29:20.719459 master-0 kubenswrapper[29097]: I0312 18:29:20.719430 29097 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 12 18:29:20.719511 master-0 kubenswrapper[29097]: I0312 18:29:20.719473 29097 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 12 18:29:20.719511 master-0 kubenswrapper[29097]: I0312 18:29:20.719493 29097 kubelet.go:2335] "Starting kubelet main sync loop" Mar 12 18:29:20.719591 master-0 kubenswrapper[29097]: E0312 18:29:20.719552 29097 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 18:29:20.722417 master-0 kubenswrapper[29097]: I0312 18:29:20.722322 29097 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 12 18:29:20.732538 master-0 kubenswrapper[29097]: I0312 18:29:20.732472 29097 generic.go:334] "Generic (PLEG): container finished" podID="ab926874-9722-4e65-9084-27b2f9915450" containerID="f47fabdc4bdd8a3562bf6c4bb328b7b2603314ba7c3e007528769af4852f929f" exitCode=0 Mar 12 18:29:20.733709 master-0 kubenswrapper[29097]: E0312 18:29:20.733671 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547 Mar 12 18:29:20.734191 master-0 kubenswrapper[29097]: I0312 18:29:20.734160 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-4w5z7_030160af-c915-4f00-903a-1c4b5c2b719a/machine-approver-controller/0.log" Mar 12 18:29:20.734503 master-0 kubenswrapper[29097]: I0312 18:29:20.734467 29097 generic.go:334] "Generic (PLEG): container finished" podID="030160af-c915-4f00-903a-1c4b5c2b719a" 
containerID="896aa2273ca1ba7df7cc5c10fd0e284e882d24c2714f3848133288e9eccfa795" exitCode=255 Mar 12 18:29:20.737976 master-0 kubenswrapper[29097]: I0312 18:29:20.737950 29097 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="7543b93babb8e5c9d5cf6e5b32750ff43fa63df2a49a76caac539aefeccb417e" exitCode=0 Mar 12 18:29:20.738029 master-0 kubenswrapper[29097]: I0312 18:29:20.737977 29097 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="a4e1a60fb2a5676e0c3a007005c7ba4c139f5bc8097de545710cc25465fe8dd1" exitCode=0 Mar 12 18:29:20.738029 master-0 kubenswrapper[29097]: I0312 18:29:20.737990 29097 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="84b05cdad590c2078d906c0b5bbb00f860e5030460386d4b22d12520cb006e5f" exitCode=0 Mar 12 18:29:20.741384 master-0 kubenswrapper[29097]: I0312 18:29:20.741357 29097 generic.go:334] "Generic (PLEG): container finished" podID="d3e5b8c8-a100-4880-a0b9-9c3989d4e739" containerID="a27087137da15f60f89c47c0f62e286ef1e4ec7252189f88f560af42271ffe59" exitCode=0 Mar 12 18:29:20.741441 master-0 kubenswrapper[29097]: I0312 18:29:20.741383 29097 generic.go:334] "Generic (PLEG): container finished" podID="d3e5b8c8-a100-4880-a0b9-9c3989d4e739" containerID="67222c5dd6dc84922f3b31521e73c46da015094341322a76fa955a30881504a6" exitCode=0 Mar 12 18:29:20.746171 master-0 kubenswrapper[29097]: I0312 18:29:20.746135 29097 generic.go:334] "Generic (PLEG): container finished" podID="1453f6461bf5d599ad65a4656343ee91" containerID="43fec13eaecff4e5dfee1960d9d80a34d149510a17fc33563f826b5c69991892" exitCode=0 Mar 12 18:29:20.747838 master-0 kubenswrapper[29097]: I0312 18:29:20.747805 29097 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-m6hsp_223a548b-a3ad-40dd-82de-e3dbb7f3e4fa/openshift-controller-manager-operator/1.log" Mar 12 18:29:20.747893 master-0 kubenswrapper[29097]: I0312 18:29:20.747850 29097 generic.go:334] "Generic (PLEG): container finished" podID="223a548b-a3ad-40dd-82de-e3dbb7f3e4fa" containerID="3e81068034bf9c9fbfc0dcacd5d8ed6f99d4b966db54edfeaa5ae37af6e0a1a5" exitCode=255 Mar 12 18:29:20.749488 master-0 kubenswrapper[29097]: I0312 18:29:20.749459 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-tjp2j_37cd9c0a-697e-4e67-932b-b331ff77c8c0/openshift-config-operator/2.log" Mar 12 18:29:20.749858 master-0 kubenswrapper[29097]: I0312 18:29:20.749832 29097 generic.go:334] "Generic (PLEG): container finished" podID="37cd9c0a-697e-4e67-932b-b331ff77c8c0" containerID="efd0f8bb0ebca8e4985d1f13c9ba9d356c0dcc91d47381baf756c886f3430aea" exitCode=255 Mar 12 18:29:20.749858 master-0 kubenswrapper[29097]: I0312 18:29:20.749855 29097 generic.go:334] "Generic (PLEG): container finished" podID="37cd9c0a-697e-4e67-932b-b331ff77c8c0" containerID="a644d995b7e4a613ffad672523990100f60140e220dd50976725d9008b099a3d" exitCode=0 Mar 12 18:29:20.753315 master-0 kubenswrapper[29097]: I0312 18:29:20.753261 29097 generic.go:334] "Generic (PLEG): container finished" podID="518ffff8-8119-41be-8b76-ce49d5751254" containerID="769cd1e8b5824a316a70fa02fbd72a61b282feb96440b00bc150d9a2b430b6d3" exitCode=0 Mar 12 18:29:20.757627 master-0 kubenswrapper[29097]: I0312 18:29:20.757579 29097 generic.go:334] "Generic (PLEG): container finished" podID="8fe3d699-023e-4de7-8d42-6c9d8a5e68f3" containerID="7007abd6bd87f278095a5c5bea805876ca0e2532537842c0b1266ddd70ce3cd3" exitCode=0 Mar 12 18:29:20.759782 master-0 kubenswrapper[29097]: I0312 18:29:20.759742 29097 generic.go:334] "Generic (PLEG): container finished" 
podID="adb0dbbf-458d-46f5-b236-d4904e125418" containerID="5f50f1609b5be6488fded6489aafefb9e65bb7d7fb19c94cf58285ac54471228" exitCode=0 Mar 12 18:29:20.761144 master-0 kubenswrapper[29097]: I0312 18:29:20.761114 29097 generic.go:334] "Generic (PLEG): container finished" podID="30102cc9-45f8-46f8-bb34-eec48fdb297d" containerID="15f58aad78a995767697fd5b4bbf18700052b6bed0c718d5b6a5383ae0c8a9a8" exitCode=0 Mar 12 18:29:20.773193 master-0 kubenswrapper[29097]: I0312 18:29:20.773147 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-lqpbp_306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed/cluster-node-tuning-operator/0.log" Mar 12 18:29:20.773289 master-0 kubenswrapper[29097]: I0312 18:29:20.773194 29097 generic.go:334] "Generic (PLEG): container finished" podID="306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed" containerID="bf6736acde3a261d2e1c8eec8be75f38ab871967029a8a7ea9d5bc1635fc75f5" exitCode=1 Mar 12 18:29:20.775891 master-0 kubenswrapper[29097]: I0312 18:29:20.775862 29097 generic.go:334] "Generic (PLEG): container finished" podID="39c441a05d91070efc538925475b0a44" containerID="feb7a0602e16521ca8f037d98e053563e5dfd7b3fed109ded127b4e56a4c158c" exitCode=2 Mar 12 18:29:20.775891 master-0 kubenswrapper[29097]: I0312 18:29:20.775887 29097 generic.go:334] "Generic (PLEG): container finished" podID="39c441a05d91070efc538925475b0a44" containerID="6013ae8778b6f3db082ecdee07bf998643391f13699e5ddf7a85c9b9ddf833c3" exitCode=0 Mar 12 18:29:20.775958 master-0 kubenswrapper[29097]: I0312 18:29:20.775895 29097 generic.go:334] "Generic (PLEG): container finished" podID="39c441a05d91070efc538925475b0a44" containerID="56c803b302b6c89542dd77ed04fecb43a59a8287926d38c4629dc8bd033d7a46" exitCode=0 Mar 12 18:29:20.778940 master-0 kubenswrapper[29097]: I0312 18:29:20.778892 29097 generic.go:334] "Generic (PLEG): container finished" podID="4519000b-e475-4c26-a1c0-bf05cd9c242b" 
containerID="41524a86d9adbef4539f0c75d4ef6de3fe400da88552adc9ce67756160dee015" exitCode=0 Mar 12 18:29:20.778940 master-0 kubenswrapper[29097]: I0312 18:29:20.778934 29097 generic.go:334] "Generic (PLEG): container finished" podID="4519000b-e475-4c26-a1c0-bf05cd9c242b" containerID="4283fbeacecb2226a32d29824505e5362a9bc10b995fb624a688b75d67e46563" exitCode=0 Mar 12 18:29:20.780942 master-0 kubenswrapper[29097]: I0312 18:29:20.780911 29097 generic.go:334] "Generic (PLEG): container finished" podID="e22c7035-4b7a-48cb-9abb-db277b387842" containerID="a987d23905b82090084aa8d4e8ab172632e1e1833011544d548639c8ff18c467" exitCode=0 Mar 12 18:29:20.784110 master-0 kubenswrapper[29097]: I0312 18:29:20.784088 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-2ltx9_bce831df-c604-4608-a24e-b14d62c5287a/snapshot-controller/3.log" Mar 12 18:29:20.784169 master-0 kubenswrapper[29097]: I0312 18:29:20.784116 29097 generic.go:334] "Generic (PLEG): container finished" podID="bce831df-c604-4608-a24e-b14d62c5287a" containerID="3ee6889d81d43029dd6623714d782675e77b0ac4d47d44b5e698b3218f31c69c" exitCode=1 Mar 12 18:29:20.787945 master-0 kubenswrapper[29097]: I0312 18:29:20.787911 29097 generic.go:334] "Generic (PLEG): container finished" podID="b8dd13a7-10e5-431b-8d30-405dcfea02f5" containerID="69a2563b13bb321b549ca470bba68e3784ff4506218240cbeb3734f424459804" exitCode=0 Mar 12 18:29:20.792078 master-0 kubenswrapper[29097]: I0312 18:29:20.792028 29097 generic.go:334] "Generic (PLEG): container finished" podID="e720e1d0-5a6d-4b76-8b25-5963e24950f5" containerID="6cbf8532a0aab6166e00e40dafe24b7c97f2d79bb9206285a901edb45142b490" exitCode=0 Mar 12 18:29:20.799502 master-0 kubenswrapper[29097]: I0312 18:29:20.799429 29097 generic.go:334] "Generic (PLEG): container finished" podID="236f2886-bb69-49a7-9471-36454fd1cbd3" containerID="6ae7a934b8aa2f254b8b82bbc367d7391db11d303ac3c55852c1da10c3f95301" exitCode=0 Mar 12 
18:29:20.800914 master-0 kubenswrapper[29097]: I0312 18:29:20.800867 29097 generic.go:334] "Generic (PLEG): container finished" podID="50322fdb-6d3f-4237-92d2-a170e2071de5" containerID="6ad45a6ad32331be3b965a6445d2f9fd3e17e7a370bbd99490dcb4dc21bb6f9f" exitCode=0 Mar 12 18:29:20.804328 master-0 kubenswrapper[29097]: I0312 18:29:20.804273 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ljw8b_062f1b21-2ffc-47da-8334-427c3b2a1a90/authentication-operator/4.log" Mar 12 18:29:20.804409 master-0 kubenswrapper[29097]: I0312 18:29:20.804333 29097 generic.go:334] "Generic (PLEG): container finished" podID="062f1b21-2ffc-47da-8334-427c3b2a1a90" containerID="0947e2aa600f02809bc0e80e74513829af30ca5ce847f92836ff0aa87ebf24ca" exitCode=255 Mar 12 18:29:20.806846 master-0 kubenswrapper[29097]: I0312 18:29:20.806799 29097 generic.go:334] "Generic (PLEG): container finished" podID="f3a2cda2-b70f-4128-a1be-48503f5aad6d" containerID="14be846643126f0f684988fbee828e3ae28a2a3ed42495436ab25923fcd90c1e" exitCode=0 Mar 12 18:29:20.806846 master-0 kubenswrapper[29097]: I0312 18:29:20.806832 29097 generic.go:334] "Generic (PLEG): container finished" podID="f3a2cda2-b70f-4128-a1be-48503f5aad6d" containerID="fdad7d8f064f703bfbf88c807f07178484bba5e53ad147ecc9d74969fce8c221" exitCode=0 Mar 12 18:29:20.806846 master-0 kubenswrapper[29097]: I0312 18:29:20.806844 29097 generic.go:334] "Generic (PLEG): container finished" podID="f3a2cda2-b70f-4128-a1be-48503f5aad6d" containerID="452ac1e185a248cfac36d370d36916ef0c27910988d43a12329a25e9765f77ac" exitCode=0 Mar 12 18:29:20.813820 master-0 kubenswrapper[29097]: I0312 18:29:20.813791 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-mb6tc_d1b3859c-20a1-4a1c-8508-86ed843768f5/manager/1.log" Mar 12 18:29:20.815576 master-0 kubenswrapper[29097]: I0312 18:29:20.815533 29097 generic.go:334] "Generic 
(PLEG): container finished" podID="d1b3859c-20a1-4a1c-8508-86ed843768f5" containerID="96aef0daff5b4b065b760a53564e1e05a4751150ad79dc2f9bc551c5dafe3e48" exitCode=1 Mar 12 18:29:20.819017 master-0 kubenswrapper[29097]: I0312 18:29:20.818985 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_99f63924-b198-4954-ba14-5c48e8830ec0/installer/0.log" Mar 12 18:29:20.819078 master-0 kubenswrapper[29097]: I0312 18:29:20.819026 29097 generic.go:334] "Generic (PLEG): container finished" podID="99f63924-b198-4954-ba14-5c48e8830ec0" containerID="bcd9c4470387a4f73246459472597ab7bf839663226c4513e3b54a4697a699f9" exitCode=1 Mar 12 18:29:20.819649 master-0 kubenswrapper[29097]: E0312 18:29:20.819611 29097 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 12 18:29:20.821846 master-0 kubenswrapper[29097]: I0312 18:29:20.821809 29097 generic.go:334] "Generic (PLEG): container finished" podID="0a18e6ce-2fed-4e81-9191-45c1e5d3a090" containerID="660d6cb7ac45d8c8e280bd8037da6efe2ef8548c41dcd02f688edd458d998314" exitCode=0 Mar 12 18:29:20.824059 master-0 kubenswrapper[29097]: I0312 18:29:20.824024 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_7542f3f1-23fe-41df-99b9-4324c75d35b7/installer/0.log" Mar 12 18:29:20.824139 master-0 kubenswrapper[29097]: I0312 18:29:20.824070 29097 generic.go:334] "Generic (PLEG): container finished" podID="7542f3f1-23fe-41df-99b9-4324c75d35b7" containerID="e3aea0a79706e5d2ced89ea30c6dab8e3469fe22291b915ce855f44fa68a87b6" exitCode=1 Mar 12 18:29:20.826378 master-0 kubenswrapper[29097]: I0312 18:29:20.826346 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-kwv7s_51eb717b-d11f-4bc3-8df6-deb51d5889f3/package-server-manager/0.log" Mar 12 18:29:20.826769 master-0 kubenswrapper[29097]: I0312 
18:29:20.826736 29097 generic.go:334] "Generic (PLEG): container finished" podID="51eb717b-d11f-4bc3-8df6-deb51d5889f3" containerID="33470f162304f6a1c732da622d08f9a2cb10dfebe7eb3e1cc79d0a55f3c66c95" exitCode=1 Mar 12 18:29:20.836345 master-0 kubenswrapper[29097]: I0312 18:29:20.836300 29097 generic.go:334] "Generic (PLEG): container finished" podID="e697746f-fb9e-4d10-ab61-33c68e62cc0d" containerID="4322cdc97f321d2418571282b2d0a02572a0fe1f4c6c9ffe9fbcda76c46d48dc" exitCode=0 Mar 12 18:29:20.841854 master-0 kubenswrapper[29097]: I0312 18:29:20.840642 29097 generic.go:334] "Generic (PLEG): container finished" podID="4048e453-a983-4708-89b6-a81af0067e29" containerID="570936a0a36edb0fda6b55c99e7f566dfd145b7b28da0dcae1b91148af7c1a36" exitCode=0 Mar 12 18:29:20.849248 master-0 kubenswrapper[29097]: I0312 18:29:20.849204 29097 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="577df93b02b1bb8864460514509f0380f90a4c7a14a9f501133a0fe6e0eb58f9" exitCode=0 Mar 12 18:29:20.854303 master-0 kubenswrapper[29097]: I0312 18:29:20.854272 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-z9srl_ee4c1949-96b4-4444-9675-9df1d46f681e/config-sync-controllers/0.log" Mar 12 18:29:20.854816 master-0 kubenswrapper[29097]: I0312 18:29:20.854789 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-z9srl_ee4c1949-96b4-4444-9675-9df1d46f681e/cluster-cloud-controller-manager/0.log" Mar 12 18:29:20.854883 master-0 kubenswrapper[29097]: I0312 18:29:20.854827 29097 generic.go:334] "Generic (PLEG): container finished" podID="ee4c1949-96b4-4444-9675-9df1d46f681e" containerID="488293f6a0a5ffc939b73e8e291035b18dd6b6d9c6030cee524df83362585aa5" exitCode=1 Mar 12 18:29:20.854883 master-0 kubenswrapper[29097]: I0312 18:29:20.854864 
29097 generic.go:334] "Generic (PLEG): container finished" podID="ee4c1949-96b4-4444-9675-9df1d46f681e" containerID="55f44f89a0ddfa17022efb42d5b69490ffb4f27463e27a43d9ad2629d1fed3e4" exitCode=1 Mar 12 18:29:20.857947 master-0 kubenswrapper[29097]: I0312 18:29:20.857911 29097 generic.go:334] "Generic (PLEG): container finished" podID="0cc54e47-af53-448a-b1c9-043710890a32" containerID="64b8db76d38d762e3433321ecb6cf6a40c39a4859996726a4dbe65ebe8ab152e" exitCode=0 Mar 12 18:29:20.858042 master-0 kubenswrapper[29097]: I0312 18:29:20.857951 29097 generic.go:334] "Generic (PLEG): container finished" podID="0cc54e47-af53-448a-b1c9-043710890a32" containerID="d444bb83001cd903efee9e4b70e81f0883fb0a84f83f9034f7633dc5339f7ac1" exitCode=0 Mar 12 18:29:20.859851 master-0 kubenswrapper[29097]: I0312 18:29:20.859821 29097 generic.go:334] "Generic (PLEG): container finished" podID="604044f4-9b0b-4747-827d-843f3cfa7077" containerID="6b224901428e2ddbe12d7888c29aa663990f99e54eaab842f708f9d3489fa570" exitCode=0 Mar 12 18:29:20.864113 master-0 kubenswrapper[29097]: I0312 18:29:20.864025 29097 generic.go:334] "Generic (PLEG): container finished" podID="455f0aad-add2-49d0-995c-f92467bce2d6" containerID="c629217e0646c42efab7b6831a82c134d4897e205bc3cb7b99ec2b82209a7725" exitCode=0 Mar 12 18:29:20.864113 master-0 kubenswrapper[29097]: I0312 18:29:20.864106 29097 generic.go:334] "Generic (PLEG): container finished" podID="455f0aad-add2-49d0-995c-f92467bce2d6" containerID="0c7342da7ff90812cfb607510698f6f5025811001aa1d822318142b6a574472a" exitCode=0 Mar 12 18:29:20.864239 master-0 kubenswrapper[29097]: I0312 18:29:20.864118 29097 generic.go:334] "Generic (PLEG): container finished" podID="455f0aad-add2-49d0-995c-f92467bce2d6" containerID="47d87208497022a24111ccaca14cfa76489b3e3c8d2e4baeec44eed1ec3639c0" exitCode=0 Mar 12 18:29:20.864239 master-0 kubenswrapper[29097]: I0312 18:29:20.864128 29097 generic.go:334] "Generic (PLEG): container finished" podID="455f0aad-add2-49d0-995c-f92467bce2d6" 
containerID="6f8ff60199929b0a4e5f12c0833311ee92d8752831cac14e7f6e3610c7c482cd" exitCode=0 Mar 12 18:29:20.864239 master-0 kubenswrapper[29097]: I0312 18:29:20.864136 29097 generic.go:334] "Generic (PLEG): container finished" podID="455f0aad-add2-49d0-995c-f92467bce2d6" containerID="a06bfc83f83320e9affd2425dbf28da14fdf99e08ecffd8df981975c0ab701b1" exitCode=0 Mar 12 18:29:20.864239 master-0 kubenswrapper[29097]: I0312 18:29:20.864144 29097 generic.go:334] "Generic (PLEG): container finished" podID="455f0aad-add2-49d0-995c-f92467bce2d6" containerID="f1a76c40be6adf4508f866e6663729add45233d4cc201334c0c921cf2c117caa" exitCode=0 Mar 12 18:29:20.868325 master-0 kubenswrapper[29097]: I0312 18:29:20.868296 29097 generic.go:334] "Generic (PLEG): container finished" podID="b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c" containerID="ccd65fc043716d0888637293f37bd7ecd513c907229d11d94e53b330518e6446" exitCode=0 Mar 12 18:29:20.868325 master-0 kubenswrapper[29097]: I0312 18:29:20.868325 29097 generic.go:334] "Generic (PLEG): container finished" podID="b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c" containerID="5202e2258391267eb3b45c1a1e1d347281dc176d85f3c415101ad891dd72d792" exitCode=0 Mar 12 18:29:20.871432 master-0 kubenswrapper[29097]: I0312 18:29:20.871319 29097 generic.go:334] "Generic (PLEG): container finished" podID="d4ae1240-e04e-48e9-88df-9f1a53508da7" containerID="3bfdf8caec49323e35f87883171b05e3d1f44df1c027fc9a9977c37c9de794d7" exitCode=0 Mar 12 18:29:20.885692 master-0 kubenswrapper[29097]: I0312 18:29:20.885641 29097 generic.go:334] "Generic (PLEG): container finished" podID="a1e2340b-ebca-40de-b1e0-8133999cd860" containerID="9de5a3b93eb3f1136dd34751bc0d652341fdfc646209d52ecaff219c3bdfc30b" exitCode=0 Mar 12 18:29:20.890769 master-0 kubenswrapper[29097]: I0312 18:29:20.890692 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4527l_d94dc349-c5cb-4f12-8e48-867030af4981/ingress-operator/4.log" Mar 12 18:29:20.891070 
master-0 kubenswrapper[29097]: I0312 18:29:20.891020 29097 generic.go:334] "Generic (PLEG): container finished" podID="d94dc349-c5cb-4f12-8e48-867030af4981" containerID="37c9fcab8917043972cb8da48f9b3a66fa98e29cb384d4ab82bdb89b8dd2d452" exitCode=1 Mar 12 18:29:20.895621 master-0 kubenswrapper[29097]: I0312 18:29:20.895576 29097 generic.go:334] "Generic (PLEG): container finished" podID="4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64" containerID="7ae4d8d774c3ae2fa8787557fe823a911ba5793827a61120b252083df1bc5f38" exitCode=0 Mar 12 18:29:20.899731 master-0 kubenswrapper[29097]: I0312 18:29:20.899694 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-2psgb_e5fb0152-3efd-4000-bce3-fa90b75316ae/cluster-baremetal-operator/2.log" Mar 12 18:29:20.900394 master-0 kubenswrapper[29097]: I0312 18:29:20.900353 29097 generic.go:334] "Generic (PLEG): container finished" podID="e5fb0152-3efd-4000-bce3-fa90b75316ae" containerID="a2b26c62b9f6c98c92beecd149d44e6763377e53417bf7236ed48cc7741bf7a7" exitCode=1 Mar 12 18:29:20.903069 master-0 kubenswrapper[29097]: I0312 18:29:20.903028 29097 generic.go:334] "Generic (PLEG): container finished" podID="2a4a981c-9454-4e1f-951e-1a62737659cc" containerID="95a463de33fcdba00f135dbdd2f42b2c5b30584ee4c54c59c7552f930a4442bf" exitCode=0 Mar 12 18:29:20.907064 master-0 kubenswrapper[29097]: I0312 18:29:20.907034 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/3.log" Mar 12 18:29:20.907511 master-0 kubenswrapper[29097]: I0312 18:29:20.907469 29097 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="f4576abbe7a90b1b20fca15403fa958348685c536308da122d58a74078454e59" exitCode=1 Mar 12 18:29:20.907511 master-0 kubenswrapper[29097]: I0312 18:29:20.907499 29097 generic.go:334] "Generic (PLEG): container 
finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="cfb72c7fed7776f25cec78c2b1f068a79f1aca1d681c7f12e196e29d22a04486" exitCode=0
Mar 12 18:29:20.909469 master-0 kubenswrapper[29097]: I0312 18:29:20.909430 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-hqrqt_8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe/approver/1.log"
Mar 12 18:29:20.909769 master-0 kubenswrapper[29097]: I0312 18:29:20.909738 29097 generic.go:334] "Generic (PLEG): container finished" podID="8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe" containerID="f3f978addd81177f408450b5c8b37d7927d5e40c6e8538e905d2bc327fb8a086" exitCode=1
Mar 12 18:29:20.911845 master-0 kubenswrapper[29097]: I0312 18:29:20.911814 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-zd9gm_34cbf061-4c76-476e-bed9-0a133c744862/control-plane-machine-set-operator/0.log"
Mar 12 18:29:20.911845 master-0 kubenswrapper[29097]: I0312 18:29:20.911843 29097 generic.go:334] "Generic (PLEG): container finished" podID="34cbf061-4c76-476e-bed9-0a133c744862" containerID="d72afaed4f952dfc1603764d86ef509711bb42af6ee8dbbfe68a46a833266739" exitCode=1
Mar 12 18:29:20.915858 master-0 kubenswrapper[29097]: I0312 18:29:20.915815 29097 generic.go:334] "Generic (PLEG): container finished" podID="9b41258c-ac1d-4e00-ac5e-732d85441f12" containerID="3afc57dd06460be0cc0e28f1088f020bc3b1fb80b27fe8bfdb49d253e732e561" exitCode=0
Mar 12 18:29:20.920011 master-0 kubenswrapper[29097]: I0312 18:29:20.919971 29097 generic.go:334] "Generic (PLEG): container finished" podID="fb529297-b3de-4167-a91e-0a63725b3b0f" containerID="f77291c1df7f378588657b046f600e7b89800e859e666660a704c3c70a31f3c7" exitCode=0
Mar 12 18:29:20.923240 master-0 kubenswrapper[29097]: I0312 18:29:20.923211 29097 generic.go:334] "Generic (PLEG): container finished" podID="055f5c67-f512-4510-99c5-e194944b0599" containerID="fce4a972222f063110d34772de7116adb2483b3e9c195060fc1414ecf2cd9f6c" exitCode=0
Mar 12 18:29:20.925837 master-0 kubenswrapper[29097]: I0312 18:29:20.925806 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-9nzsn_b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652/manager/1.log"
Mar 12 18:29:20.926395 master-0 kubenswrapper[29097]: I0312 18:29:20.926357 29097 generic.go:334] "Generic (PLEG): container finished" podID="b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652" containerID="68e99d6ea6e8d10062ce5f4ba00f8d6b01cd9c70d46c0f8c6da206028e5ae034" exitCode=1
Mar 12 18:29:20.931646 master-0 kubenswrapper[29097]: I0312 18:29:20.931617 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_38785e6e-3052-405c-8874-4f295985def5/installer/0.log"
Mar 12 18:29:20.932037 master-0 kubenswrapper[29097]: I0312 18:29:20.931654 29097 generic.go:334] "Generic (PLEG): container finished" podID="38785e6e-3052-405c-8874-4f295985def5" containerID="ad09860af65a7f4806ecc5c16545e1e14574d76310388c1e9bda798b177013f0" exitCode=1
Mar 12 18:29:20.934240 master-0 kubenswrapper[29097]: I0312 18:29:20.934203 29097 generic.go:334] "Generic (PLEG): container finished" podID="e418d797-2c31-404b-9dc3-251399e42542" containerID="6b7528f0c5da1778fadc0415752a37a2983c5adfa27ce67313a93246b6745480" exitCode=0
Mar 12 18:29:20.939304 master-0 kubenswrapper[29097]: I0312 18:29:20.939269 29097 generic.go:334] "Generic (PLEG): container finished" podID="b6d288e3-8e73-44d2-874d-64c6c98dd991" containerID="61391e64ce8e20710a16e47ab514517643e782dc9c713a84a5cefd62cff8c6ad" exitCode=0
Mar 12 18:29:20.940604 master-0 kubenswrapper[29097]: I0312 18:29:20.940572 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_e2bbd04e-d147-4343-9e5d-300e42de9dbb/installer/0.log"
Mar 12 18:29:20.940680 master-0 kubenswrapper[29097]: I0312 18:29:20.940610 29097 generic.go:334] "Generic (PLEG): container finished" podID="e2bbd04e-d147-4343-9e5d-300e42de9dbb" containerID="6ad211f881c1b186d3265c89c0f87451f6acb70c49f4334e2e7867092be6c91a" exitCode=1
Mar 12 18:29:20.941951 master-0 kubenswrapper[29097]: I0312 18:29:20.941908 29097 generic.go:334] "Generic (PLEG): container finished" podID="45aa4887-c913-4ece-ae34-fcde33832621" containerID="713977d47dfecb905c7cc3c14de2a72254744fe363e6f7198ff24aaf349daf7b" exitCode=0
Mar 12 18:29:20.945568 master-0 kubenswrapper[29097]: I0312 18:29:20.945539 29097 generic.go:334] "Generic (PLEG): container finished" podID="74eb1407-de29-42e5-9e6c-ce1bec3a9d80" containerID="cf7ec04355ab534ccfb643cab9c3d22d23f3ccdda0dc0dcaa6f049053cf3267f" exitCode=0
Mar 12 18:29:20.953677 master-0 kubenswrapper[29097]: I0312 18:29:20.953638 29097 generic.go:334] "Generic (PLEG): container finished" podID="ec8121ea-f6e9-4232-9837-78b278a8cf54" containerID="858ee31a04ea10059b361ef351f3695c906bcd6e4d8c64728b6201ca11a0a592" exitCode=0
Mar 12 18:29:20.955155 master-0 kubenswrapper[29097]: I0312 18:29:20.955088 29097 generic.go:334] "Generic (PLEG): container finished" podID="4cb73c69-af16-4565-bdb5-aeae9dcfb423" containerID="4d14cf356a45b87bedba837114945ff27dddf151bc1c718cb0f056aecd18d911" exitCode=0
Mar 12 18:29:20.956676 master-0 kubenswrapper[29097]: I0312 18:29:20.956650 29097 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="91fc9e27f58a493917f258512c2dfe1c4bf9d4efc52492f0f4d3e21237d1136f" exitCode=0
Mar 12 18:29:21.019724 master-0 kubenswrapper[29097]: E0312 18:29:21.019661 29097 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 12 18:29:21.097313 master-0 kubenswrapper[29097]: I0312 18:29:21.097196 29097 manager.go:324] Recovery completed
Mar 12 18:29:21.161614 master-0 kubenswrapper[29097]: E0312 18:29:21.161556 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547
Mar 12 18:29:21.162354 master-0 kubenswrapper[29097]: W0312 18:29:21.162035 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547 WatchSource:0}: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547
Mar 12 18:29:21.180720 master-0 kubenswrapper[29097]: I0312 18:29:21.180680 29097 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 12 18:29:21.180720 master-0 kubenswrapper[29097]: I0312 18:29:21.180707 29097 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 12 18:29:21.180720 master-0 kubenswrapper[29097]: I0312 18:29:21.180725 29097 state_mem.go:36] "Initialized new in-memory state store"
Mar 12 18:29:21.180995 master-0 kubenswrapper[29097]: I0312 18:29:21.180965 29097 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 12 18:29:21.180995 master-0 kubenswrapper[29097]: I0312 18:29:21.180984 29097 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 12 18:29:21.181086 master-0 kubenswrapper[29097]: I0312 18:29:21.181004 29097 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Mar 12 18:29:21.181086 master-0 kubenswrapper[29097]: I0312 18:29:21.181011 29097 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Mar 12 18:29:21.181086 master-0 kubenswrapper[29097]: I0312 18:29:21.181018 29097 policy_none.go:49] "None policy: Start"
Mar 12 18:29:21.184250 master-0 kubenswrapper[29097]: I0312 18:29:21.184207 29097 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 12 18:29:21.184342 master-0 kubenswrapper[29097]: I0312 18:29:21.184263 29097 state_mem.go:35] "Initializing new in-memory state store"
Mar 12 18:29:21.184500 master-0 kubenswrapper[29097]: I0312 18:29:21.184474 29097 state_mem.go:75] "Updated machine memory state"
Mar 12 18:29:21.184500 master-0 kubenswrapper[29097]: I0312 18:29:21.184489 29097 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Mar 12 18:29:21.197057 master-0 kubenswrapper[29097]: I0312 18:29:21.197022 29097 manager.go:334] "Starting Device Plugin manager"
Mar 12 18:29:21.197057 master-0 kubenswrapper[29097]: I0312 18:29:21.197061 29097 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 12 18:29:21.197250 master-0 kubenswrapper[29097]: I0312 18:29:21.197072 29097 server.go:79] "Starting device plugin registration server"
Mar 12 18:29:21.197427 master-0 kubenswrapper[29097]: I0312 18:29:21.197402 29097 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 12 18:29:21.197479 master-0 kubenswrapper[29097]: I0312 18:29:21.197418 29097 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 12 18:29:21.197913 master-0 kubenswrapper[29097]: I0312 18:29:21.197886 29097 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 12 18:29:21.197963 master-0 kubenswrapper[29097]: I0312 18:29:21.197954 29097 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 12 18:29:21.197963 master-0 kubenswrapper[29097]: I0312 18:29:21.197961 29097 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 12 18:29:21.298273 master-0 kubenswrapper[29097]: I0312 18:29:21.297648 29097 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:29:21.305834 master-0 kubenswrapper[29097]: I0312 18:29:21.305765 29097 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:29:21.305834 master-0 kubenswrapper[29097]: I0312 18:29:21.305839 29097 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:29:21.306031 master-0 kubenswrapper[29097]: I0312 18:29:21.305850 29097 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:29:21.306031 master-0 kubenswrapper[29097]: I0312 18:29:21.305932 29097 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 18:29:21.308558 master-0 kubenswrapper[29097]: E0312 18:29:21.308534 29097 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0"
Mar 12 18:29:21.420255 master-0 kubenswrapper[29097]: I0312 18:29:21.420082 29097 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 12 18:29:21.421099 master-0 kubenswrapper[29097]: I0312 18:29:21.421021 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"ce272961794705fb82908bdcd8d6c34bb765939cf5756a00b858a75975bfb3ec"}
Mar 12 18:29:21.421182 master-0 kubenswrapper[29097]: I0312 18:29:21.421101 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"79b096245a2c0849e1eb752ac1cc93728bb4af45cc62d990fa23de0f9691630c"}
Mar 12 18:29:21.421182 master-0 kubenswrapper[29097]: I0312 18:29:21.421119 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"57a2e66698dd7b2085962d87cba4600292cd8ca3813c497d51b91333d17177c0"}
Mar 12 18:29:21.421182 master-0 kubenswrapper[29097]: I0312 18:29:21.421132 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"b639416601ad7bd99978e18cd7f92b4a72dd1d06449a777a4af899e4c60f21ba"}
Mar 12 18:29:21.421182 master-0 kubenswrapper[29097]: I0312 18:29:21.421146 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"b7750a2045b8f5019e76849e41c480529ce009b9060d700116c60f3a22cfe61b"}
Mar 12 18:29:21.421182 master-0 kubenswrapper[29097]: I0312 18:29:21.421156 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"7543b93babb8e5c9d5cf6e5b32750ff43fa63df2a49a76caac539aefeccb417e"}
Mar 12 18:29:21.421182 master-0 kubenswrapper[29097]: I0312 18:29:21.421169 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"a4e1a60fb2a5676e0c3a007005c7ba4c139f5bc8097de545710cc25465fe8dd1"}
Mar 12 18:29:21.421182 master-0 kubenswrapper[29097]: I0312 18:29:21.421180 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"84b05cdad590c2078d906c0b5bbb00f860e5030460386d4b22d12520cb006e5f"}
Mar 12 18:29:21.421592 master-0 kubenswrapper[29097]: I0312 18:29:21.421192 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"a27b7d74527b56755a6c2c471b3ca3c73b2cfc54277efe40b5551df95fef2671"}
Mar 12 18:29:21.421592 master-0 kubenswrapper[29097]: I0312 18:29:21.421216 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"9398dfdf2392e3a31a313f647f8c7402bae4efc9bf142697e60f18f631f4f9da"}
Mar 12 18:29:21.421592 master-0 kubenswrapper[29097]: I0312 18:29:21.421231 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"2a1fd65e7a43d1b5dcde44cfab17aaf88383a780e7e8fdd45682ee7f8eb7ffc3"}
Mar 12 18:29:21.421592 master-0 kubenswrapper[29097]: I0312 18:29:21.421242 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"c24a11c569ed28517312963d8cc79cf04602675bd0a036245bd76c38b6e0f58e"}
Mar 12 18:29:21.421592 master-0 kubenswrapper[29097]: I0312 18:29:21.421252 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerDied","Data":"43fec13eaecff4e5dfee1960d9d80a34d149510a17fc33563f826b5c69991892"}
Mar 12 18:29:21.421592 master-0 kubenswrapper[29097]: I0312 18:29:21.421263 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"9c555e5bffa63ad24656c5dfa5ef32654f3cce81a377d07d84caf4aca5f33e3f"}
Mar 12 18:29:21.421592 master-0 kubenswrapper[29097]: I0312 18:29:21.421308 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0fb27537deeb6ada4bd6bd0dc8f77614abe096d108a40275f771f7f507fd43a"
Mar 12 18:29:21.421592 master-0 kubenswrapper[29097]: I0312 18:29:21.421325 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"49835aec35bdc5feca0d7cf24779b8da","Type":"ContainerStarted","Data":"65937ab55749d18637d8330aa44ceee2c94d4c78aeac6055c20ae0425fb42bf6"}
Mar 12 18:29:21.421592 master-0 kubenswrapper[29097]: I0312 18:29:21.421337 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"49835aec35bdc5feca0d7cf24779b8da","Type":"ContainerStarted","Data":"69f074324c83bb75cf9edb644e26f8a566f617056c611c320fb03fd80290ef36"}
Mar 12 18:29:21.421592 master-0 kubenswrapper[29097]: I0312 18:29:21.421348 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"49835aec35bdc5feca0d7cf24779b8da","Type":"ContainerStarted","Data":"f62d8ace6b78a3d4700c1f018543131bd0581db10be6e4a0ffe1a906b4efcd0a"}
Mar 12 18:29:21.421592 master-0 kubenswrapper[29097]: I0312 18:29:21.421358 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"49835aec35bdc5feca0d7cf24779b8da","Type":"ContainerStarted","Data":"996ba8fb061459a072fc0dac62d85e8970305954c92459dbfc764353eca2dc98"}
Mar 12 18:29:21.421592 master-0 kubenswrapper[29097]: I0312 18:29:21.421369 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"49835aec35bdc5feca0d7cf24779b8da","Type":"ContainerStarted","Data":"677d598751a9389168de4b8d58e7ebd447bfc781ea7149cdcbbd5de656faaac5"}
Mar 12 18:29:21.421592 master-0 kubenswrapper[29097]: I0312 18:29:21.421392 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd22c21b01ab8567576e92f9b78bcc2934cfd08f8466cc304cfffee656791ad7"
Mar 12 18:29:21.421592 master-0 kubenswrapper[29097]: I0312 18:29:21.421458 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b35a0992276a5a57f41cc10a07b53753228078f2cd4dbc9d5d05061bb670327"
Mar 12 18:29:21.421592 master-0 kubenswrapper[29097]: I0312 18:29:21.421497 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2aaa458623aaac6f6d633ebf1840bf60a09bfe111c0fe4eefba933212240641"
Mar 12 18:29:21.421592 master-0 kubenswrapper[29097]: I0312 18:29:21.421530 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce6917041a8bfd1810138da6e9f362c03a9897208ddc7625a2d63afd22b8d0a8"
Mar 12 18:29:21.421592 master-0 kubenswrapper[29097]: I0312 18:29:21.421542 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3d2e6992e795fa35374f60292962d9511ac22996078698ddd0c5f16bcc8772c"
Mar 12 18:29:21.421592 master-0 kubenswrapper[29097]: I0312 18:29:21.421579 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"01eeb3272c240b68931788fdbe353a89d0b5911f3532e9e6d3891f108d5a47fc"}
Mar 12 18:29:21.421592 master-0 kubenswrapper[29097]: I0312 18:29:21.421595 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"df94ca5ecee83a72fb47501f1f919eaf4028159e1d146ee2a47f3391c16e15fc"}
Mar 12 18:29:21.421592 master-0 kubenswrapper[29097]: I0312 18:29:21.421609 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"0e37d83cdb0f462d8ef6bf2539d2a10c7d84ad143eb688508398e11bcd665ee1"}
Mar 12 18:29:21.421592 master-0 kubenswrapper[29097]: I0312 18:29:21.421620 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"1f3cc25e5fa149ff612b530c2777559e47a1f93ddab8c630d27f6a1e07dff2d3"}
Mar 12 18:29:21.422855 master-0 kubenswrapper[29097]: I0312 18:29:21.421631 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"f0795f7815622793e2682cda3a2b19590293281fa98d6a395d40fd76c0da7491"}
Mar 12 18:29:21.422855 master-0 kubenswrapper[29097]: I0312 18:29:21.421642 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerDied","Data":"577df93b02b1bb8864460514509f0380f90a4c7a14a9f501133a0fe6e0eb58f9"}
Mar 12 18:29:21.422855 master-0 kubenswrapper[29097]: I0312 18:29:21.421656 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"94fe08ab3fbb45153add02b6a25a4870b04bc9f5d9d03ddb9283e70a2fe32299"}
Mar 12 18:29:21.422855 master-0 kubenswrapper[29097]: I0312 18:29:21.421740 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"899242a15b2bdf3b4a04fb323647ca94","Type":"ContainerStarted","Data":"47171d91400de4e00e465f217262a5cfbabe28599c08b7a76e6b01d33016a909"}
Mar 12 18:29:21.422855 master-0 kubenswrapper[29097]: I0312 18:29:21.421755 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"899242a15b2bdf3b4a04fb323647ca94","Type":"ContainerStarted","Data":"9efb49a7f1b6a902873e9b844b8f9a0a68e95cea55ba6d37aefc0f305d7e46f9"}
Mar 12 18:29:21.422855 master-0 kubenswrapper[29097]: I0312 18:29:21.421800 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ac2e44511feb89f0eb8641549dc02db57c6e394fd3b40bd40da8f07b13abdb2"
Mar 12 18:29:21.422855 master-0 kubenswrapper[29097]: I0312 18:29:21.421811 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"93aef700d51857dffd379b4fa5c63e9358523baf23deed5a0f436de9a4c7c7b1"}
Mar 12 18:29:21.422855 master-0 kubenswrapper[29097]: I0312 18:29:21.421822 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"f4576abbe7a90b1b20fca15403fa958348685c536308da122d58a74078454e59"}
Mar 12 18:29:21.422855 master-0 kubenswrapper[29097]: I0312 18:29:21.421833 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"cfb72c7fed7776f25cec78c2b1f068a79f1aca1d681c7f12e196e29d22a04486"}
Mar 12 18:29:21.422855 master-0 kubenswrapper[29097]: I0312 18:29:21.421860 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"427d9fb4da53770ccc67c12bc3aaae3e507cd75207365e40c1611eeaaf274b84"}
Mar 12 18:29:21.422855 master-0 kubenswrapper[29097]: I0312 18:29:21.421911 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2719f778fae8a1410c74e71ed0769412f583c56fba7c4dc342221e161dce0bd"
Mar 12 18:29:21.422855 master-0 kubenswrapper[29097]: I0312 18:29:21.421922 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e39d12d9165077c1566bf86fdaa9d42c6abb87768cbd70c00423b7ab08d3f0d6"
Mar 12 18:29:21.422855 master-0 kubenswrapper[29097]: I0312 18:29:21.421942 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2bc7ecb16572f7c27823d06dc4505f361b41e0cd30f5c3e98a48ae3773ffae2"
Mar 12 18:29:21.422855 master-0 kubenswrapper[29097]: I0312 18:29:21.421969 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fbd8581b67e0a5e29b36d7c0987774ae0aa02a95a0bdf7e572b9e31a319d172"
Mar 12 18:29:21.422855 master-0 kubenswrapper[29097]: I0312 18:29:21.421980 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eef1cf57e8276fdad086e78802215bf998ecd43c19a3a34c77847d52949c2696"
Mar 12 18:29:21.422855 master-0 kubenswrapper[29097]: I0312 18:29:21.421990 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91bbe32b85272f0f9f735ba2a67b1085ea37b3016231eb6d6938a08eed1a3b9d"
Mar 12 18:29:21.509065 master-0 kubenswrapper[29097]: I0312 18:29:21.509009 29097 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:29:21.509576 master-0 kubenswrapper[29097]: I0312 18:29:21.509124 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 18:29:21.509576 master-0 kubenswrapper[29097]: I0312 18:29:21.509164 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 18:29:21.509576 master-0 kubenswrapper[29097]: I0312 18:29:21.509187 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 18:29:21.509576 master-0 kubenswrapper[29097]: I0312 18:29:21.509460 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 18:29:21.509576 master-0 kubenswrapper[29097]: I0312 18:29:21.509557 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 18:29:21.510056 master-0 kubenswrapper[29097]: I0312 18:29:21.509585 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:21.510056 master-0 kubenswrapper[29097]: I0312 18:29:21.509612 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/49835aec35bdc5feca0d7cf24779b8da-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"49835aec35bdc5feca0d7cf24779b8da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 18:29:21.510056 master-0 kubenswrapper[29097]: I0312 18:29:21.509635 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:29:21.510056 master-0 kubenswrapper[29097]: I0312 18:29:21.509659 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:29:21.510056 master-0 kubenswrapper[29097]: I0312 18:29:21.509734 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:21.510056 master-0 kubenswrapper[29097]: I0312 18:29:21.509787 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:21.510056 master-0 kubenswrapper[29097]: I0312 18:29:21.509810 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 18:29:21.510056 master-0 kubenswrapper[29097]: I0312 18:29:21.509831 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/49835aec35bdc5feca0d7cf24779b8da-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"49835aec35bdc5feca0d7cf24779b8da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 18:29:21.510056 master-0 kubenswrapper[29097]: I0312 18:29:21.509902 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 18:29:21.510056 master-0 kubenswrapper[29097]: I0312 18:29:21.509921 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 18:29:21.510056 master-0 kubenswrapper[29097]: I0312 18:29:21.509938 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 18:29:21.510056 master-0 kubenswrapper[29097]: I0312 18:29:21.509962 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:29:21.510056 master-0 kubenswrapper[29097]: I0312 18:29:21.509980 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:21.510056 master-0 kubenswrapper[29097]: I0312 18:29:21.510000 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:21.510056 master-0 kubenswrapper[29097]: I0312 18:29:21.510018 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 18:29:21.511951 master-0 kubenswrapper[29097]: I0312 18:29:21.511891 29097 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:29:21.512045 master-0 kubenswrapper[29097]: I0312 18:29:21.511961 29097 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:29:21.512045 master-0 kubenswrapper[29097]: I0312 18:29:21.511980 29097 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:29:21.512263 master-0 kubenswrapper[29097]: I0312 18:29:21.512144 29097 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 18:29:21.520901 master-0 kubenswrapper[29097]: E0312 18:29:21.520842 29097 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0"
Mar 12 18:29:21.591649 master-0 kubenswrapper[29097]: E0312 18:29:21.591597 29097 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0"
Mar 12 18:29:21.592145 master-0 kubenswrapper[29097]: E0312 18:29:21.591795 29097 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 18:29:21.592145 master-0 kubenswrapper[29097]: E0312 18:29:21.591801 29097 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 18:29:21.610308 master-0 kubenswrapper[29097]: I0312 18:29:21.610248 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 18:29:21.610499 master-0 kubenswrapper[29097]: I0312 18:29:21.610381 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 18:29:21.610499 master-0 kubenswrapper[29097]: I0312 18:29:21.610458 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:21.610499 master-0 kubenswrapper[29097]: I0312 18:29:21.610489 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/49835aec35bdc5feca0d7cf24779b8da-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"49835aec35bdc5feca0d7cf24779b8da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 18:29:21.610616 master-0 kubenswrapper[29097]: I0312 18:29:21.610526 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 18:29:21.610616 master-0 kubenswrapper[29097]: I0312 18:29:21.610548 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 18:29:21.610616 master-0 kubenswrapper[29097]: I0312 18:29:21.610570 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 18:29:21.610616 master-0 kubenswrapper[29097]: I0312 18:29:21.610588 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 18:29:21.610735 master-0 kubenswrapper[29097]: I0312 18:29:21.610620 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 18:29:21.610735 master-0 kubenswrapper[29097]: I0312 18:29:21.610638 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:29:21.610735 master-0 kubenswrapper[29097]: I0312 18:29:21.610657 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:29:21.610735 master-0 kubenswrapper[29097]: I0312 18:29:21.610678 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:21.610735 master-0 kubenswrapper[29097]: I0312 18:29:21.610700 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:21.610735 master-0 kubenswrapper[29097]: I0312 18:29:21.610717 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/49835aec35bdc5feca0d7cf24779b8da-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"49835aec35bdc5feca0d7cf24779b8da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 18:29:21.610735 master-0 kubenswrapper[29097]: I0312 18:29:21.610736 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:29:21.610946 master-0 
kubenswrapper[29097]: I0312 18:29:21.610755 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:29:21.610946 master-0 kubenswrapper[29097]: I0312 18:29:21.610774 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 18:29:21.610946 master-0 kubenswrapper[29097]: I0312 18:29:21.610794 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 18:29:21.610946 master-0 kubenswrapper[29097]: I0312 18:29:21.610811 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:29:21.610946 master-0 kubenswrapper[29097]: I0312 18:29:21.610831 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:29:21.610946 master-0 kubenswrapper[29097]: 
I0312 18:29:21.610848 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:29:21.610946 master-0 kubenswrapper[29097]: I0312 18:29:21.610879 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:29:21.610946 master-0 kubenswrapper[29097]: I0312 18:29:21.610909 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:29:21.610946 master-0 kubenswrapper[29097]: I0312 18:29:21.610936 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/49835aec35bdc5feca0d7cf24779b8da-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"49835aec35bdc5feca0d7cf24779b8da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:29:21.611197 master-0 kubenswrapper[29097]: I0312 18:29:21.610961 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 18:29:21.611197 master-0 
kubenswrapper[29097]: I0312 18:29:21.610986 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 18:29:21.611197 master-0 kubenswrapper[29097]: I0312 18:29:21.611011 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:29:21.611197 master-0 kubenswrapper[29097]: I0312 18:29:21.611037 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:29:21.611197 master-0 kubenswrapper[29097]: I0312 18:29:21.611061 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:29:21.611197 master-0 kubenswrapper[29097]: I0312 18:29:21.611086 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:29:21.611197 master-0 kubenswrapper[29097]: I0312 18:29:21.611110 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:29:21.611197 master-0 kubenswrapper[29097]: I0312 18:29:21.611139 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:29:21.611197 master-0 kubenswrapper[29097]: I0312 18:29:21.611169 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:29:21.611197 master-0 kubenswrapper[29097]: I0312 18:29:21.611194 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/49835aec35bdc5feca0d7cf24779b8da-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"49835aec35bdc5feca0d7cf24779b8da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:29:21.611490 master-0 kubenswrapper[29097]: I0312 18:29:21.611217 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:29:21.611490 master-0 kubenswrapper[29097]: I0312 18:29:21.611242 29097 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:29:21.611490 master-0 kubenswrapper[29097]: I0312 18:29:21.611268 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 18:29:21.611490 master-0 kubenswrapper[29097]: I0312 18:29:21.611293 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 18:29:21.611490 master-0 kubenswrapper[29097]: I0312 18:29:21.611327 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:29:21.611490 master-0 kubenswrapper[29097]: I0312 18:29:21.611352 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 18:29:21.656903 master-0 kubenswrapper[29097]: I0312 18:29:21.656856 29097 apiserver.go:52] "Watching apiserver" Mar 12 18:29:21.676722 
master-0 kubenswrapper[29097]: I0312 18:29:21.676611 29097 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 12 18:29:21.678247 master-0 kubenswrapper[29097]: I0312 18:29:21.678201 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4","openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm","openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-88gzm","openshift-ingress-operator/ingress-operator-677db989d6-4527l","openshift-kube-controller-manager/installer-2-master-0","openshift-kube-scheduler/installer-5-retry-1-master-0","openshift-machine-config-operator/machine-config-daemon-mfv5x","openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg","openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c","openshift-network-diagnostics/network-check-target-cpthp","openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg","openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl","openshift-marketplace/redhat-operators-d5tcw","openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn","openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9","openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k","openshift-dns/node-resolver-7lzgx","openshift-etcd/installer-2-master-0","openshift-kube-scheduler/installer-4-master-0","openshift-kube-storage-version-migrator/migrator-57ccdf9b5-w72wh","openshift-machine-config-operator/machine-config-server-2jzxq","openshift-apiserver/apiserver-5786c989f8-f6jgb","openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b","openshift-kube-apiserver/kube-apiserver-master-0","openshift-monitoring/metrics-server-5784dff469-l5d64","openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9","openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s","openshift-service-ca/service-ca
-84bfdbbb7f-769nb","openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz","openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc","openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr","openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9","openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq","openshift-marketplace/redhat-marketplace-ggkqg","openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl","openshift-ingress/router-default-79f8cd6fdd-79bhf","openshift-kube-scheduler/installer-5-master-0","openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-monitoring/node-exporter-6v462","openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx","openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-tzgs9","openshift-network-diagnostics/network-check-source-7c67b67d47-g4dkj","openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-649db","openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp","openshift-kube-apiserver/installer-3-master-0","openshift-kube-controller-manager/installer-3-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-cluster-node-tuning-operator/tuned-c6qmx","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98","openshift-marketplace/marketplace-operator-64bf9778cb-clkx5","openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-8762n","openshift-controller-manager/controller-manager-7cd74f9776-2rmc9","openshift-dns/dns-default-6h5tt","openshift-insights/insights-operator-8f89dfddd-m6z6d","openshift-kube
-apiserver/installer-3-retry-1-master-0","openshift-marketplace/certified-operators-6jhwp","openshift-multus/multus-admission-controller-7769569c45-dq2gs","openshift-ovn-kubernetes/ovnkube-node-hx8q8","openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs","openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j","openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd","openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq","openshift-network-node-identity/network-node-identity-hqrqt","openshift-network-operator/network-operator-7c649bf6d4-vksss","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp","openshift-etcd/installer-1-master-0","openshift-kube-apiserver/installer-1-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k","openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7","openshift-dns-operator/dns-operator-589895fbb7-jqj5k","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq","openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv","openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b","openshift-etcd/etcd-master-0","openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp","openshift-multus/multus-656l8","openshift-multus/multus-additional-cni-plugins-lv8hk","openshift-network-operator/iptables-alerter-4k8wm","openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7","assisted-installer/assisted-installer-controller-g257x","openshift-marketplace/community-operators-nmmwm","openshift-multus/network-metrics-daemon-z4sc9"] Mar 12 18:29:21.678538 master-0 kubenswrapper[29097]: I0312 18:29:21.678486 29097 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-g257x" Mar 12 18:29:21.696346 master-0 kubenswrapper[29097]: I0312 18:29:21.693690 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 12 18:29:21.696346 master-0 kubenswrapper[29097]: I0312 18:29:21.693774 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 12 18:29:21.696346 master-0 kubenswrapper[29097]: I0312 18:29:21.695394 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 12 18:29:21.696346 master-0 kubenswrapper[29097]: I0312 18:29:21.695639 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 12 18:29:21.696346 master-0 kubenswrapper[29097]: I0312 18:29:21.695856 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 12 18:29:21.696346 master-0 kubenswrapper[29097]: I0312 18:29:21.695907 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 12 18:29:21.696346 master-0 kubenswrapper[29097]: I0312 18:29:21.695984 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 12 18:29:21.696346 master-0 kubenswrapper[29097]: I0312 18:29:21.696010 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.698048 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 12 
18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.698810 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.698827 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.698940 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.698966 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.698941 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.698994 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.699122 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.699161 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.699204 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.699230 29097 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.699298 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.699318 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.699338 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.699374 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.699423 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.699562 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.699571 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.699608 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.699695 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.699718 29097 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.699747 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.699566 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.699829 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.699884 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.699902 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.699994 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.700029 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.700117 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.700132 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.700194 29097 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.700301 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.700537 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.700570 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.699301 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 12 18:29:21.700791 master-0 kubenswrapper[29097]: I0312 18:29:21.700684 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 12 18:29:21.703818 master-0 kubenswrapper[29097]: I0312 18:29:21.701857 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 12 18:29:21.703818 master-0 kubenswrapper[29097]: I0312 18:29:21.701987 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 12 18:29:21.703818 master-0 kubenswrapper[29097]: I0312 18:29:21.701908 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 12 18:29:21.703818 master-0 kubenswrapper[29097]: I0312 18:29:21.702163 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 12 18:29:21.703818 master-0 
kubenswrapper[29097]: I0312 18:29:21.702212 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 12 18:29:21.703818 master-0 kubenswrapper[29097]: I0312 18:29:21.702218 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 12 18:29:21.703818 master-0 kubenswrapper[29097]: I0312 18:29:21.702284 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 12 18:29:21.703818 master-0 kubenswrapper[29097]: I0312 18:29:21.702344 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 12 18:29:21.703818 master-0 kubenswrapper[29097]: I0312 18:29:21.703288 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 18:29:21.704198 master-0 kubenswrapper[29097]: I0312 18:29:21.703952 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 18:29:21.709933 master-0 kubenswrapper[29097]: I0312 18:29:21.705763 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 12 18:29:21.709933 master-0 kubenswrapper[29097]: I0312 18:29:21.705799 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 12 18:29:21.709933 master-0 kubenswrapper[29097]: I0312 18:29:21.705886 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 12 18:29:21.709933 master-0 kubenswrapper[29097]: I0312 18:29:21.705964 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 12 18:29:21.710260 master-0 kubenswrapper[29097]: I0312 18:29:21.710117 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 12 18:29:21.710442 master-0 kubenswrapper[29097]: I0312 18:29:21.710423 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 12 18:29:21.710628 master-0 kubenswrapper[29097]: I0312 18:29:21.710582 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 18:29:21.710628 master-0 kubenswrapper[29097]: I0312 18:29:21.710625 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 12 18:29:21.710749 master-0 kubenswrapper[29097]: I0312 18:29:21.710699 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 12 18:29:21.710817 master-0 kubenswrapper[29097]: I0312 18:29:21.710763 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 12 18:29:21.710883 master-0 kubenswrapper[29097]: I0312 18:29:21.710823 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 18:29:21.710927 master-0 kubenswrapper[29097]: I0312 18:29:21.710893 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 12 18:29:21.710982 master-0 kubenswrapper[29097]: I0312 18:29:21.710968 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 12 18:29:21.711193 master-0 kubenswrapper[29097]: I0312 18:29:21.711177 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 18:29:21.711640 master-0 kubenswrapper[29097]: I0312 18:29:21.711611 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggsdx\" (UniqueName: \"kubernetes.io/projected/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-kube-api-access-ggsdx\") pod \"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" Mar 12 18:29:21.711741 master-0 kubenswrapper[29097]: I0312 18:29:21.711650 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:29:21.711741 master-0 kubenswrapper[29097]: I0312 18:29:21.711676 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" Mar 12 18:29:21.711741 master-0 kubenswrapper[29097]: I0312 18:29:21.711694 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbnx8\" (UniqueName: \"kubernetes.io/projected/51eb717b-d11f-4bc3-8df6-deb51d5889f3-kube-api-access-gbnx8\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:29:21.711741 master-0 kubenswrapper[29097]: I0312 18:29:21.711733 29097 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fdlxn\" (UniqueName: \"kubernetes.io/projected/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-kube-api-access-fdlxn\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:29:21.711930 master-0 kubenswrapper[29097]: I0312 18:29:21.711739 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 12 18:29:21.711930 master-0 kubenswrapper[29097]: I0312 18:29:21.711751 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1e2340b-ebca-40de-b1e0-8133999cd860-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-2xr98\" (UID: \"a1e2340b-ebca-40de-b1e0-8133999cd860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" Mar 12 18:29:21.711930 master-0 kubenswrapper[29097]: I0312 18:29:21.711742 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 12 18:29:21.711930 master-0 kubenswrapper[29097]: I0312 18:29:21.711894 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:29:21.711930 master-0 kubenswrapper[29097]: I0312 18:29:21.711911 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 18:29:21.712168 master-0 kubenswrapper[29097]: I0312 18:29:21.712078 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 12 18:29:21.712296 master-0 kubenswrapper[29097]: I0312 18:29:21.712245 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 12 18:29:21.713704 master-0 kubenswrapper[29097]: I0312 18:29:21.712976 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krrkl\" (UniqueName: \"kubernetes.io/projected/47850839-bb4b-41e9-ac31-f1cabbb4926d-kube-api-access-krrkl\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:29:21.713704 master-0 kubenswrapper[29097]: I0312 18:29:21.713079 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1e2340b-ebca-40de-b1e0-8133999cd860-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-2xr98\" (UID: \"a1e2340b-ebca-40de-b1e0-8133999cd860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" Mar 12 18:29:21.713704 master-0 kubenswrapper[29097]: I0312 18:29:21.713281 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert\") pod \"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" Mar 12 18:29:21.713704 master-0 kubenswrapper[29097]: I0312 18:29:21.713389 29097 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:29:21.713704 master-0 kubenswrapper[29097]: I0312 18:29:21.713422 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d94dc349-c5cb-4f12-8e48-867030af4981-bound-sa-token\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" Mar 12 18:29:21.713704 master-0 kubenswrapper[29097]: I0312 18:29:21.713444 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vpbp\" (UniqueName: \"kubernetes.io/projected/a1e2340b-ebca-40de-b1e0-8133999cd860-kube-api-access-6vpbp\") pod \"kube-storage-version-migrator-operator-7f65c457f5-2xr98\" (UID: \"a1e2340b-ebca-40de-b1e0-8133999cd860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" Mar 12 18:29:21.713704 master-0 kubenswrapper[29097]: I0312 18:29:21.713465 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkt7d\" (UniqueName: \"kubernetes.io/projected/055f5c67-f512-4510-99c5-e194944b0599-kube-api-access-tkt7d\") pod \"service-ca-operator-69b6fc6b88-9xlgl\" (UID: \"055f5c67-f512-4510-99c5-e194944b0599\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl" Mar 12 18:29:21.713704 master-0 kubenswrapper[29097]: I0312 18:29:21.713489 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjmcv\" (UniqueName: \"kubernetes.io/projected/d94dc349-c5cb-4f12-8e48-867030af4981-kube-api-access-zjmcv\") 
pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" Mar 12 18:29:21.713704 master-0 kubenswrapper[29097]: I0312 18:29:21.713530 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:29:21.713704 master-0 kubenswrapper[29097]: I0312 18:29:21.713554 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/055f5c67-f512-4510-99c5-e194944b0599-serving-cert\") pod \"service-ca-operator-69b6fc6b88-9xlgl\" (UID: \"055f5c67-f512-4510-99c5-e194944b0599\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl" Mar 12 18:29:21.713704 master-0 kubenswrapper[29097]: I0312 18:29:21.713612 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/055f5c67-f512-4510-99c5-e194944b0599-config\") pod \"service-ca-operator-69b6fc6b88-9xlgl\" (UID: \"055f5c67-f512-4510-99c5-e194944b0599\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl" Mar 12 18:29:21.714294 master-0 kubenswrapper[29097]: I0312 18:29:21.713638 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d94dc349-c5cb-4f12-8e48-867030af4981-trusted-ca\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" Mar 12 18:29:21.714294 master-0 kubenswrapper[29097]: I0312 18:29:21.714150 29097 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d94dc349-c5cb-4f12-8e48-867030af4981-metrics-tls\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" Mar 12 18:29:21.714294 master-0 kubenswrapper[29097]: I0312 18:29:21.714234 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:29:21.714419 master-0 kubenswrapper[29097]: I0312 18:29:21.714386 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47850839-bb4b-41e9-ac31-f1cabbb4926d-srv-cert\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:29:21.716350 master-0 kubenswrapper[29097]: I0312 18:29:21.716261 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1e2340b-ebca-40de-b1e0-8133999cd860-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-2xr98\" (UID: \"a1e2340b-ebca-40de-b1e0-8133999cd860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" Mar 12 18:29:21.716444 master-0 kubenswrapper[29097]: I0312 18:29:21.714406 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51eb717b-d11f-4bc3-8df6-deb51d5889f3-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: 
\"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:29:21.716444 master-0 kubenswrapper[29097]: I0312 18:29:21.716422 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1e2340b-ebca-40de-b1e0-8133999cd860-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-2xr98\" (UID: \"a1e2340b-ebca-40de-b1e0-8133999cd860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98" Mar 12 18:29:21.716591 master-0 kubenswrapper[29097]: I0312 18:29:21.716561 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/055f5c67-f512-4510-99c5-e194944b0599-config\") pod \"service-ca-operator-69b6fc6b88-9xlgl\" (UID: \"055f5c67-f512-4510-99c5-e194944b0599\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl" Mar 12 18:29:21.716947 master-0 kubenswrapper[29097]: I0312 18:29:21.716760 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/055f5c67-f512-4510-99c5-e194944b0599-serving-cert\") pod \"service-ca-operator-69b6fc6b88-9xlgl\" (UID: \"055f5c67-f512-4510-99c5-e194944b0599\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl" Mar 12 18:29:21.717015 master-0 kubenswrapper[29097]: I0312 18:29:21.716987 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 12 18:29:21.717372 master-0 kubenswrapper[29097]: I0312 18:29:21.717337 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 12 18:29:21.717897 master-0 kubenswrapper[29097]: I0312 18:29:21.717874 29097 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 12 18:29:21.717990 master-0 kubenswrapper[29097]: I0312 18:29:21.717959 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 12 18:29:21.718265 master-0 kubenswrapper[29097]: I0312 18:29:21.718229 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 12 18:29:21.718341 master-0 kubenswrapper[29097]: I0312 18:29:21.718321 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 12 18:29:21.718545 master-0 kubenswrapper[29097]: I0312 18:29:21.718359 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 12 18:29:21.718759 master-0 kubenswrapper[29097]: I0312 18:29:21.718736 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 12 18:29:21.719091 master-0 kubenswrapper[29097]: I0312 18:29:21.719061 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-srv-cert\") pod \"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r" Mar 12 18:29:21.719968 master-0 kubenswrapper[29097]: I0312 18:29:21.719582 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 12 18:29:21.719968 master-0 kubenswrapper[29097]: I0312 18:29:21.719696 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 12 18:29:21.719968 master-0 kubenswrapper[29097]: I0312 18:29:21.719932 29097 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 12 18:29:21.722333 master-0 kubenswrapper[29097]: I0312 18:29:21.720008 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 12 18:29:21.723674 master-0 kubenswrapper[29097]: I0312 18:29:21.723552 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 12 18:29:21.723674 master-0 kubenswrapper[29097]: I0312 18:29:21.723623 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 12 18:29:21.727854 master-0 kubenswrapper[29097]: I0312 18:29:21.727813 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 12 18:29:21.729355 master-0 kubenswrapper[29097]: I0312 18:29:21.729306 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 12 18:29:21.739341 master-0 kubenswrapper[29097]: I0312 18:29:21.733022 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 12 18:29:21.739341 master-0 kubenswrapper[29097]: I0312 18:29:21.734910 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 12 18:29:21.739341 master-0 kubenswrapper[29097]: I0312 18:29:21.736563 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 12 18:29:21.739341 master-0 kubenswrapper[29097]: I0312 18:29:21.737060 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 12 18:29:21.739341 master-0 kubenswrapper[29097]: I0312 18:29:21.737239 29097 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 12 18:29:21.739341 master-0 kubenswrapper[29097]: I0312 18:29:21.737590 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 12 18:29:21.739341 master-0 kubenswrapper[29097]: I0312 18:29:21.737676 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 12 18:29:21.739341 master-0 kubenswrapper[29097]: I0312 18:29:21.738362 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 12 18:29:21.739341 master-0 kubenswrapper[29097]: I0312 18:29:21.738579 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 12 18:29:21.739341 master-0 kubenswrapper[29097]: I0312 18:29:21.738750 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 12 18:29:21.739341 master-0 kubenswrapper[29097]: I0312 18:29:21.738884 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 12 18:29:21.739341 master-0 kubenswrapper[29097]: I0312 18:29:21.738981 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 12 18:29:21.739341 master-0 kubenswrapper[29097]: I0312 18:29:21.739129 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 12 18:29:21.739341 master-0 kubenswrapper[29097]: I0312 18:29:21.739175 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 12 18:29:21.739341 master-0 kubenswrapper[29097]: I0312 18:29:21.739259 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 12 
18:29:21.740050 master-0 kubenswrapper[29097]: I0312 18:29:21.739558 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 12 18:29:21.740050 master-0 kubenswrapper[29097]: I0312 18:29:21.739831 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 12 18:29:21.740614 master-0 kubenswrapper[29097]: I0312 18:29:21.740182 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 12 18:29:21.740614 master-0 kubenswrapper[29097]: I0312 18:29:21.740435 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 12 18:29:21.740614 master-0 kubenswrapper[29097]: I0312 18:29:21.740538 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 12 18:29:21.741081 master-0 kubenswrapper[29097]: I0312 18:29:21.740972 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 12 18:29:21.742881 master-0 kubenswrapper[29097]: I0312 18:29:21.742722 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 12 18:29:21.742881 master-0 kubenswrapper[29097]: I0312 18:29:21.742794 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 12 18:29:21.745322 master-0 kubenswrapper[29097]: I0312 18:29:21.745295 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d94dc349-c5cb-4f12-8e48-867030af4981-trusted-ca\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l" Mar 12 18:29:21.752613 master-0 
kubenswrapper[29097]: I0312 18:29:21.752580 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:29:21.764440 master-0 kubenswrapper[29097]: I0312 18:29:21.760583 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 12 18:29:21.781357 master-0 kubenswrapper[29097]: I0312 18:29:21.781303 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 12 18:29:21.793819 master-0 kubenswrapper[29097]: I0312 18:29:21.793764 29097 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Mar 12 18:29:21.801535 master-0 kubenswrapper[29097]: I0312 18:29:21.801470 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.814492 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-config\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.814815 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-config\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:29:21.817108 
master-0 kubenswrapper[29097]: I0312 18:29:21.814880 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.814923 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.814949 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e720e1d0-5a6d-4b76-8b25-5963e24950f5-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-w7xvp\" (UID: \"e720e1d0-5a6d-4b76-8b25-5963e24950f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.814981 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815006 29097 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfp84\" (UniqueName: \"kubernetes.io/projected/be2da107-a419-423f-a657-44d681291f28-kube-api-access-jfp84\") pod \"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815030 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d77a98a-0176-4924-81d3-8e9890852b38-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815058 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-systemd\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815080 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d1b3859c-20a1-4a1c-8508-86ed843768f5-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815115 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-config\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: 
\"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815141 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b38e7fcd-8f7a-4d4f-8702-7ef205261054-apiservice-cert\") pod \"packageserver-694648486f-f89lc\" (UID: \"b38e7fcd-8f7a-4d4f-8702-7ef205261054\") " pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815165 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815187 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815210 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pn9h\" (UniqueName: \"kubernetes.io/projected/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-kube-api-access-2pn9h\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815232 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-config\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815253 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab926874-9722-4e65-9084-27b2f9915450-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-ddsbn\" (UID: \"ab926874-9722-4e65-9084-27b2f9915450\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815285 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb529297-b3de-4167-a91e-0a63725b3b0f-trusted-ca-bundle\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815305 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovnkube-config\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815326 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/236f2886-bb69-49a7-9471-36454fd1cbd3-config\") pod \"openshift-apiserver-operator-799b6db4d7-6qlzz\" (UID: \"236f2886-bb69-49a7-9471-36454fd1cbd3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" Mar 12 
18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815349 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptrtx\" (UniqueName: \"kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx\") pod \"network-check-target-cpthp\" (UID: \"33feec78-4592-4343-965b-aa1b7044fcf3\") " pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815372 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-metrics-server-audit-profiles\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815393 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-etcd-serving-ca\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815414 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-webhook-cert\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815437 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vr66\" (UniqueName: \"kubernetes.io/projected/45aa4887-c913-4ece-ae34-fcde33832621-kube-api-access-4vr66\") pod 
\"csi-snapshot-controller-operator-5685fbc7d-649db\" (UID: \"45aa4887-c913-4ece-ae34-fcde33832621\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-649db" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815473 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815496 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-system-cni-dir\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815543 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815570 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/d1b3859c-20a1-4a1c-8508-86ed843768f5-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 
18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815594 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68xhl\" (UniqueName: \"kubernetes.io/projected/4687cf53-55d7-42b7-b24d-e57da3989fd6-kube-api-access-68xhl\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815620 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-os-release\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815661 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/604044f4-9b0b-4747-827d-843f3cfa7077-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: \"604044f4-9b0b-4747-827d-843f3cfa7077\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815684 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n4d5\" (UniqueName: \"kubernetes.io/projected/b648b6de-59a6-42da-84e2-77ea0264ae25-kube-api-access-7n4d5\") pod \"network-check-source-7c67b67d47-g4dkj\" (UID: \"b648b6de-59a6-42da-84e2-77ea0264ae25\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-g4dkj" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815708 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fb529297-b3de-4167-a91e-0a63725b3b0f-encryption-config\") pod 
\"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815731 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f72ng\" (UniqueName: \"kubernetes.io/projected/3d77a98a-0176-4924-81d3-8e9890852b38-kube-api-access-f72ng\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815752 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-cni-binary-copy\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815774 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3-signing-cabundle\") pod \"service-ca-84bfdbbb7f-769nb\" (UID: \"8fe3d699-023e-4de7-8d42-6c9d8a5e68f3\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-769nb" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815796 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d1b3859c-20a1-4a1c-8508-86ed843768f5-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815817 29097 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-hqlfx\" (UniqueName: \"kubernetes.io/projected/9f1f60fa-d79d-4f31-b5bf-2ad333151537-kube-api-access-hqlfx\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815839 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4ae1240-e04e-48e9-88df-9f1a53508da7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpb6k\" (UID: \"d4ae1240-e04e-48e9-88df-9f1a53508da7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815863 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4048e453-a983-4708-89b6-a81af0067e29-service-ca\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815885 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmntw\" (UniqueName: \"kubernetes.io/projected/78c13011-7a79-445f-807c-4f5e75643549-kube-api-access-bmntw\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815907 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6595\" (UniqueName: \"kubernetes.io/projected/25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327-kube-api-access-x6595\") pod \"multus-admission-controller-7769569c45-dq2gs\" (UID: 
\"25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327\") " pod="openshift-multus/multus-admission-controller-7769569c45-dq2gs" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815930 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0fb78c61-2051-42e2-8668-fa7404ccac43-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-8762n\" (UID: \"0fb78c61-2051-42e2-8668-fa7404ccac43\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-8762n" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815952 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-node-bootstrap-token\") pod \"machine-config-server-2jzxq\" (UID: \"4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6\") " pod="openshift-machine-config-operator/machine-config-server-2jzxq" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.815980 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-cni-multus\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816003 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3e5b8c8-a100-4880-a0b9-9c3989d4e739-utilities\") pod \"redhat-operators-d5tcw\" (UID: \"d3e5b8c8-a100-4880-a0b9-9c3989d4e739\") " pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816024 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816048 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4glbr\" (UniqueName: \"kubernetes.io/projected/518ffff8-8119-41be-8b76-ce49d5751254-kube-api-access-4glbr\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816076 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d1b3859c-20a1-4a1c-8508-86ed843768f5-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816097 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816121 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th72r\" (UniqueName: \"kubernetes.io/projected/aee40f88-83e4-45c8-8331-969943f9f9aa-kube-api-access-th72r\") pod \"cluster-autoscaler-operator-69576476f7-hkfnq\" (UID: \"aee40f88-83e4-45c8-8331-969943f9f9aa\") " 
pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816146 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-vk7lr\" (UID: \"b2c6cd11-b1ed-4fed-a4ce-4eee0af20868\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816169 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8qw4\" (UniqueName: \"kubernetes.io/projected/8c241720-7815-40fd-8d4a-1685a43b5893-kube-api-access-l8qw4\") pod \"migrator-57ccdf9b5-w72wh\" (UID: \"8c241720-7815-40fd-8d4a-1685a43b5893\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-w72wh" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816192 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdc26\" (UniqueName: \"kubernetes.io/projected/0cc54e47-af53-448a-b1c9-043710890a32-kube-api-access-bdc26\") pod \"redhat-marketplace-ggkqg\" (UID: \"0cc54e47-af53-448a-b1c9-043710890a32\") " pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816216 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb529297-b3de-4167-a91e-0a63725b3b0f-audit-dir\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816239 29097 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-systemd-units\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816263 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816287 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1287cbb9-c9f6-48d2-9fda-f4464074e41b-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-88gzm\" (UID: \"1287cbb9-c9f6-48d2-9fda-f4464074e41b\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-88gzm" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816312 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md9dt\" (UniqueName: \"kubernetes.io/projected/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-kube-api-access-md9dt\") pod \"cloud-credential-operator-55d85b7b47-vk7lr\" (UID: \"b2c6cd11-b1ed-4fed-a4ce-4eee0af20868\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816335 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: 
\"kubernetes.io/host-path/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-wtmp\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816388 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/266b9f4f-3fb4-474d-84df-0a6c687c7e9a-config-volume\") pod \"dns-default-6h5tt\" (UID: \"266b9f4f-3fb4-474d-84df-0a6c687c7e9a\") " pod="openshift-dns/dns-default-6h5tt" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816412 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tmqs\" (UniqueName: \"kubernetes.io/projected/266b9f4f-3fb4-474d-84df-0a6c687c7e9a-kube-api-access-6tmqs\") pod \"dns-default-6h5tt\" (UID: \"266b9f4f-3fb4-474d-84df-0a6c687c7e9a\") " pod="openshift-dns/dns-default-6h5tt" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816436 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/236f2886-bb69-49a7-9471-36454fd1cbd3-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-6qlzz\" (UID: \"236f2886-bb69-49a7-9471-36454fd1cbd3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816460 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/37cd9c0a-697e-4e67-932b-b331ff77c8c0-available-featuregates\") pod \"openshift-config-operator-64488f9d78-tjp2j\" (UID: \"37cd9c0a-697e-4e67-932b-b331ff77c8c0\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816484 29097 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4wsx\" (UniqueName: \"kubernetes.io/projected/ee4c1949-96b4-4444-9675-9df1d46f681e-kube-api-access-x4wsx\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816506 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/adb0dbbf-458d-46f5-b236-d4904e125418-metrics-client-ca\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816546 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-vk7lr\" (UID: \"b2c6cd11-b1ed-4fed-a4ce-4eee0af20868\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816571 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vct98\" (UniqueName: \"kubernetes.io/projected/e697746f-fb9e-4d10-ab61-33c68e62cc0d-kube-api-access-vct98\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816595 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-run-netns\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816618 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3a2cda2-b70f-4128-a1be-48503f5aad6d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-6nvn4\" (UID: \"f3a2cda2-b70f-4128-a1be-48503f5aad6d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816642 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmvnh\" (UniqueName: \"kubernetes.io/projected/b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c-kube-api-access-xmvnh\") pod \"certified-operators-6jhwp\" (UID: \"b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c\") " pod="openshift-marketplace/certified-operators-6jhwp" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816664 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-audit\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816687 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-ovnkube-identity-cm\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 
18:29:21.816709 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aee40f88-83e4-45c8-8331-969943f9f9aa-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-hkfnq\" (UID: \"aee40f88-83e4-45c8-8331-969943f9f9aa\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816731 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-client\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816753 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab926874-9722-4e65-9084-27b2f9915450-serving-cert\") pod \"kube-apiserver-operator-68bd585b-ddsbn\" (UID: \"ab926874-9722-4e65-9084-27b2f9915450\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816776 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rhmv\" (UniqueName: \"kubernetes.io/projected/b8dd13a7-10e5-431b-8d30-405dcfea02f5-kube-api-access-7rhmv\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816800 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmsnk\" (UniqueName: \"kubernetes.io/projected/34cbf061-4c76-476e-bed9-0a133c744862-kube-api-access-gmsnk\") pod 
\"control-plane-machine-set-operator-6686554ddc-zd9gm\" (UID: \"34cbf061-4c76-476e-bed9-0a133c744862\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816823 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9b41258c-ac1d-4e00-ac5e-732d85441f12-encryption-config\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816848 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdlcw\" (UniqueName: \"kubernetes.io/projected/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-kube-api-access-tdlcw\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816872 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-sysctl-conf\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816894 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9717d467-af1a-4de0-88e0-c47ec4d12d6e-hosts-file\") pod \"node-resolver-7lzgx\" (UID: \"9717d467-af1a-4de0-88e0-c47ec4d12d6e\") " pod="openshift-dns/node-resolver-7lzgx" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816918 29097 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-os-release\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816945 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-client-certs\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.816972 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.817003 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-sys\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.817025 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-cnibin\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " 
pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.817066 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cc54e47-af53-448a-b1c9-043710890a32-catalog-content\") pod \"redhat-marketplace-ggkqg\" (UID: \"0cc54e47-af53-448a-b1c9-043710890a32\") " pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.817090 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-cnibin\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.817113 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ee4c1949-96b4-4444-9675-9df1d46f681e-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.817136 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-hostroot\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.817158 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-etc-kubernetes\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.817182 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fb529297-b3de-4167-a91e-0a63725b3b0f-etcd-client\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg" Mar 12 18:29:21.817108 master-0 kubenswrapper[29097]: I0312 18:29:21.817208 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th8tc\" (UniqueName: \"kubernetes.io/projected/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-kube-api-access-th8tc\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: \"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817229 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-env-overrides\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817253 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4048e453-a983-4708-89b6-a81af0067e29-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817276 29097 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-config\") pod \"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817296 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-etc-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817318 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-certs\") pod \"machine-config-server-2jzxq\" (UID: \"4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6\") " pod="openshift-machine-config-operator/machine-config-server-2jzxq" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817340 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817359 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-system-cni-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " 
pod="openshift-multus/multus-656l8" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817381 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-kubelet\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817406 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-multus-certs\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817433 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kn2k\" (UniqueName: \"kubernetes.io/projected/492e9833-4513-4f2f-b865-d05a8973fadc-kube-api-access-5kn2k\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817458 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/9f1f60fa-d79d-4f31-b5bf-2ad333151537-audit-log\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817483 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee55b576-6b8d-4217-b5a7-93b023a1e885-proxy-tls\") pod 
\"machine-config-controller-ff46b7bdf-k5hsv\" (UID: \"ee55b576-6b8d-4217-b5a7-93b023a1e885\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817523 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817550 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ae1240-e04e-48e9-88df-9f1a53508da7-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpb6k\" (UID: \"d4ae1240-e04e-48e9-88df-9f1a53508da7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817574 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-run\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817601 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4048e453-a983-4708-89b6-a81af0067e29-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:29:21.826203 master-0 
kubenswrapper[29097]: I0312 18:29:21.817625 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4048e453-a983-4708-89b6-a81af0067e29-serving-cert\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817653 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aee40f88-83e4-45c8-8331-969943f9f9aa-cert\") pod \"cluster-autoscaler-operator-69576476f7-hkfnq\" (UID: \"aee40f88-83e4-45c8-8331-969943f9f9aa\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817677 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/492e9833-4513-4f2f-b865-d05a8973fadc-rootfs\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817707 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcvfv\" (UniqueName: \"kubernetes.io/projected/f3a2cda2-b70f-4128-a1be-48503f5aad6d-kube-api-access-tcvfv\") pod \"cluster-olm-operator-77899cf6d-6nvn4\" (UID: \"f3a2cda2-b70f-4128-a1be-48503f5aad6d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817749 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7pjn\" (UniqueName: \"kubernetes.io/projected/41c1bd85-369e-4341-9e80-8b4b248b5572-kube-api-access-q7pjn\") pod 
\"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817786 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/34cbf061-4c76-476e-bed9-0a133c744862-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-zd9gm\" (UID: \"34cbf061-4c76-476e-bed9-0a133c744862\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817811 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c-catalog-content\") pod \"certified-operators-6jhwp\" (UID: \"b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c\") " pod="openshift-marketplace/certified-operators-6jhwp" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817837 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e5fb0152-3efd-4000-bce3-fa90b75316ae-images\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817861 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-systemd\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817887 29097 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-run-ovn-kubernetes\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817915 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmxc2\" (UniqueName: \"kubernetes.io/projected/e22c7035-4b7a-48cb-9abb-db277b387842-kube-api-access-pmxc2\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817940 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817972 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.817998 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4048e453-a983-4708-89b6-a81af0067e29-kube-api-access\") pod 
\"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818025 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be2da107-a419-423f-a657-44d681291f28-serving-cert\") pod \"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818050 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-daemon-config\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818077 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/3d77a98a-0176-4924-81d3-8e9890852b38-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818108 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-sysctl-d\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818136 29097 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-textfile\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818164 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c-utilities\") pod \"certified-operators-6jhwp\" (UID: \"b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c\") " pod="openshift-marketplace/certified-operators-6jhwp" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818192 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfpb9\" (UniqueName: \"kubernetes.io/projected/37cd9c0a-697e-4e67-932b-b331ff77c8c0-kube-api-access-pfpb9\") pod \"openshift-config-operator-64488f9d78-tjp2j\" (UID: \"37cd9c0a-697e-4e67-932b-b331ff77c8c0\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818219 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818245 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3e5b8c8-a100-4880-a0b9-9c3989d4e739-catalog-content\") pod \"redhat-operators-d5tcw\" (UID: \"d3e5b8c8-a100-4880-a0b9-9c3989d4e739\") " 
pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818270 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-sysconfig\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818282 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/37cd9c0a-697e-4e67-932b-b331ff77c8c0-available-featuregates\") pod \"openshift-config-operator-64488f9d78-tjp2j\" (UID: \"37cd9c0a-697e-4e67-932b-b331ff77c8c0\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818296 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3-signing-key\") pod \"service-ca-84bfdbbb7f-769nb\" (UID: \"8fe3d699-023e-4de7-8d42-6c9d8a5e68f3\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-769nb" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818352 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-log-socket\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818397 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-serving-cert\") 
pod \"openshift-controller-manager-operator-8565d84698-m6hsp\" (UID: \"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818425 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818452 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818481 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-tls\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818527 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l22gw\" (UniqueName: \"kubernetes.io/projected/d92dddc8-a810-43f5-8beb-32d1c8ad8381-kube-api-access-l22gw\") pod \"iptables-alerter-4k8wm\" (UID: \"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818554 29097 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-config\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818581 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-conf-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818603 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-lib-modules\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818624 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k59mb\" (UniqueName: \"kubernetes.io/projected/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-kube-api-access-k59mb\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818650 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqzmm\" (UniqueName: \"kubernetes.io/projected/604044f4-9b0b-4747-827d-843f3cfa7077-kube-api-access-fqzmm\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: \"604044f4-9b0b-4747-827d-843f3cfa7077\") " 
pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818677 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/b6d288e3-8e73-44d2-874d-64c6c98dd991-host-etc-kube\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818885 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3-signing-cabundle\") pod \"service-ca-84bfdbbb7f-769nb\" (UID: \"8fe3d699-023e-4de7-8d42-6c9d8a5e68f3\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-769nb" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818942 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818946 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.818992 29097 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-config\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.819023 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/062f1b21-2ffc-47da-8334-427c3b2a1a90-serving-cert\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.819047 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.819257 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3a2cda2-b70f-4128-a1be-48503f5aad6d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-6nvn4\" (UID: \"f3a2cda2-b70f-4128-a1be-48503f5aad6d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.819473 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab926874-9722-4e65-9084-27b2f9915450-serving-cert\") pod \"kube-apiserver-operator-68bd585b-ddsbn\" (UID: \"ab926874-9722-4e65-9084-27b2f9915450\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.819620 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cc54e47-af53-448a-b1c9-043710890a32-catalog-content\") pod \"redhat-marketplace-ggkqg\" (UID: \"0cc54e47-af53-448a-b1c9-043710890a32\") " pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.819645 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovnkube-config\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.819758 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e697746f-fb9e-4d10-ab61-33c68e62cc0d-serving-cert\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.819928 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.819976 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d1b3859c-20a1-4a1c-8508-86ed843768f5-cache\") pod 
\"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.820089 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.820137 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c-catalog-content\") pod \"certified-operators-6jhwp\" (UID: \"b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c\") " pod="openshift-marketplace/certified-operators-6jhwp" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.820329 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-m6hsp\" (UID: \"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.820482 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/236f2886-bb69-49a7-9471-36454fd1cbd3-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-6qlzz\" (UID: \"236f2886-bb69-49a7-9471-36454fd1cbd3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.820506 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/236f2886-bb69-49a7-9471-36454fd1cbd3-config\") pod \"openshift-apiserver-operator-799b6db4d7-6qlzz\" (UID: 
\"236f2886-bb69-49a7-9471-36454fd1cbd3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.820721 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.820864 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/9f1f60fa-d79d-4f31-b5bf-2ad333151537-audit-log\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.821031 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3e5b8c8-a100-4880-a0b9-9c3989d4e739-utilities\") pod \"redhat-operators-d5tcw\" (UID: \"d3e5b8c8-a100-4880-a0b9-9c3989d4e739\") " pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.821212 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-ovnkube-identity-cm\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.821225 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c-utilities\") pod \"certified-operators-6jhwp\" (UID: \"b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c\") " pod="openshift-marketplace/certified-operators-6jhwp" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.821227 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.821288 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/3d77a98a-0176-4924-81d3-8e9890852b38-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.821335 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-textfile\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.821353 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-metrics-certs\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.821423 29097 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-client\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.821495 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-env-overrides\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.821629 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/030160af-c915-4f00-903a-1c4b5c2b719a-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.821666 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-netns\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.821693 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5fb0152-3efd-4000-bce3-fa90b75316ae-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " 
pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.821712 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.821726 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/492e9833-4513-4f2f-b865-d05a8973fadc-mcd-auth-proxy-config\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.821763 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbzcs\" (UniqueName: \"kubernetes.io/projected/9717d467-af1a-4de0-88e0-c47ec4d12d6e-kube-api-access-kbzcs\") pod \"node-resolver-7lzgx\" (UID: \"9717d467-af1a-4de0-88e0-c47ec4d12d6e\") " pod="openshift-dns/node-resolver-7lzgx" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.821792 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c016b1e-d47c-47d4-a15f-4160e7731c82-serving-cert\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.821820 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.821827 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9b41258c-ac1d-4e00-ac5e-732d85441f12-encryption-config\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.821849 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-trusted-ca-bundle\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.821913 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.821968 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3e5b8c8-a100-4880-a0b9-9c3989d4e739-catalog-content\") pod \"redhat-operators-d5tcw\" (UID: \"d3e5b8c8-a100-4880-a0b9-9c3989d4e739\") " pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822028 29097 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822053 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822065 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/52d3dc87-0bf8-4a62-a6fc-0ffa6060c6a8-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-tzgs9\" (UID: \"52d3dc87-0bf8-4a62-a6fc-0ffa6060c6a8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-tzgs9" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822091 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/062f1b21-2ffc-47da-8334-427c3b2a1a90-serving-cert\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.821819 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-daemon-config\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822329 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmzf4\" (UniqueName: \"kubernetes.io/projected/fb529297-b3de-4167-a91e-0a63725b3b0f-kube-api-access-tmzf4\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822335 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ae1240-e04e-48e9-88df-9f1a53508da7-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpb6k\" (UID: \"d4ae1240-e04e-48e9-88df-9f1a53508da7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822393 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e697746f-fb9e-4d10-ab61-33c68e62cc0d-serving-cert\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822449 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-host\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822476 29097 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab926874-9722-4e65-9084-27b2f9915450-config\") pod \"kube-apiserver-operator-68bd585b-ddsbn\" (UID: \"ab926874-9722-4e65-9084-27b2f9915450\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822543 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtrvs\" (UniqueName: \"kubernetes.io/projected/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3-kube-api-access-xtrvs\") pod \"service-ca-84bfdbbb7f-769nb\" (UID: \"8fe3d699-023e-4de7-8d42-6c9d8a5e68f3\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-769nb" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822398 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-cni-binary-copy\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822582 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e22c7035-4b7a-48cb-9abb-db277b387842-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822585 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/266b9f4f-3fb4-474d-84df-0a6c687c7e9a-metrics-tls\") pod \"dns-default-6h5tt\" (UID: \"266b9f4f-3fb4-474d-84df-0a6c687c7e9a\") " 
pod="openshift-dns/dns-default-6h5tt" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822609 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3-signing-key\") pod \"service-ca-84bfdbbb7f-769nb\" (UID: \"8fe3d699-023e-4de7-8d42-6c9d8a5e68f3\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-769nb" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822644 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab926874-9722-4e65-9084-27b2f9915450-config\") pod \"kube-apiserver-operator-68bd585b-ddsbn\" (UID: \"ab926874-9722-4e65-9084-27b2f9915450\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822662 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfjj6\" (UniqueName: \"kubernetes.io/projected/bce831df-c604-4608-a24e-b14d62c5287a-kube-api-access-wfjj6\") pod \"csi-snapshot-controller-7577d6f48-2ltx9\" (UID: \"bce831df-c604-4608-a24e-b14d62c5287a\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822705 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-client-ca\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822723 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d92dddc8-a810-43f5-8beb-32d1c8ad8381-host-slash\") pod \"iptables-alerter-4k8wm\" (UID: \"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822772 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6ggc\" (UniqueName: \"kubernetes.io/projected/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-kube-api-access-b6ggc\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822797 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822816 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822863 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: 
\"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822893 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e5fb0152-3efd-4000-bce3-fa90b75316ae-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822925 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/adb0dbbf-458d-46f5-b236-d4904e125418-sys\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.822989 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw4m5\" (UniqueName: \"kubernetes.io/projected/d1b3859c-20a1-4a1c-8508-86ed843768f5-kube-api-access-gw4m5\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.823020 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37cd9c0a-697e-4e67-932b-b331ff77c8c0-serving-cert\") pod \"openshift-config-operator-64488f9d78-tjp2j\" (UID: \"37cd9c0a-697e-4e67-932b-b331ff77c8c0\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.823039 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5e09875-4445-4584-94f0-243148307bb0-service-ca-bundle\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.823082 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5fb0152-3efd-4000-bce3-fa90b75316ae-config\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.823101 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.823121 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.823150 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.823167 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x57x\" (UniqueName: \"kubernetes.io/projected/4519000b-e475-4c26-a1c0-bf05cd9c242b-kube-api-access-5x57x\") pod \"community-operators-nmmwm\" (UID: \"4519000b-e475-4c26-a1c0-bf05cd9c242b\") " pod="openshift-marketplace/community-operators-nmmwm" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.823198 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4687cf53-55d7-42b7-b24d-e57da3989fd6-images\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" Mar 12 18:29:21.826203 master-0 kubenswrapper[29097]: I0312 18:29:21.823215 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrg6p\" (UniqueName: \"kubernetes.io/projected/d3e5b8c8-a100-4880-a0b9-9c3989d4e739-kube-api-access-jrg6p\") pod \"redhat-operators-d5tcw\" (UID: \"d3e5b8c8-a100-4880-a0b9-9c3989d4e739\") " pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823234 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5e09875-4445-4584-94f0-243148307bb0-serving-cert\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823252 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkftr\" (UniqueName: 
\"kubernetes.io/projected/e5fb0152-3efd-4000-bce3-fa90b75316ae-kube-api-access-pkftr\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823322 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-k8s-cni-cncf-io\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823339 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-modprobe-d\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823359 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjz8k\" (UniqueName: \"kubernetes.io/projected/1287cbb9-c9f6-48d2-9fda-f4464074e41b-kube-api-access-hjz8k\") pod \"cluster-storage-operator-6fbfc8dc8f-88gzm\" (UID: \"1287cbb9-c9f6-48d2-9fda-f4464074e41b\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-88gzm" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823361 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37cd9c0a-697e-4e67-932b-b331ff77c8c0-serving-cert\") pod \"openshift-config-operator-64488f9d78-tjp2j\" (UID: \"37cd9c0a-697e-4e67-932b-b331ff77c8c0\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 
12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823379 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/78c13011-7a79-445f-807c-4f5e75643549-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823398 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-server-tls\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823416 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-image-import-ca\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823433 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/f5e09875-4445-4584-94f0-243148307bb0-snapshots\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823453 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-client-ca-bundle\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823472 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e22c7035-4b7a-48cb-9abb-db277b387842-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823491 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d1b3859c-20a1-4a1c-8508-86ed843768f5-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823578 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823607 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lmj2\" (UniqueName: \"kubernetes.io/projected/9b41258c-ac1d-4e00-ac5e-732d85441f12-kube-api-access-7lmj2\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" Mar 12 
18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823636 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b6d288e3-8e73-44d2-874d-64c6c98dd991-metrics-tls\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823653 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-kubernetes\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823671 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovn-node-metrics-cert\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823689 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fb529297-b3de-4167-a91e-0a63725b3b0f-audit-policies\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823707 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-tuned\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " 
pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823725 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/492e9833-4513-4f2f-b865-d05a8973fadc-proxy-tls\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823747 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cc54e47-af53-448a-b1c9-043710890a32-utilities\") pod \"redhat-marketplace-ggkqg\" (UID: \"0cc54e47-af53-448a-b1c9-043710890a32\") " pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823764 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e720e1d0-5a6d-4b76-8b25-5963e24950f5-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-w7xvp\" (UID: \"e720e1d0-5a6d-4b76-8b25-5963e24950f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823782 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9b41258c-ac1d-4e00-ac5e-732d85441f12-etcd-client\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823800 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-config\") pod 
\"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823901 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9b41258c-ac1d-4e00-ac5e-732d85441f12-node-pullsecrets\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823920 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ae1240-e04e-48e9-88df-9f1a53508da7-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpb6k\" (UID: \"d4ae1240-e04e-48e9-88df-9f1a53508da7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.823941 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxsgv\" (UniqueName: \"kubernetes.io/projected/455f0aad-add2-49d0-995c-f92467bce2d6-kube-api-access-pxsgv\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.824020 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-tuned\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.824039 29097 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/f5e09875-4445-4584-94f0-243148307bb0-snapshots\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.824078 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-kubelet\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.824100 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/adb0dbbf-458d-46f5-b236-d4904e125418-root\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.824160 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ae1240-e04e-48e9-88df-9f1a53508da7-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpb6k\" (UID: \"d4ae1240-e04e-48e9-88df-9f1a53508da7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.824167 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cc54e47-af53-448a-b1c9-043710890a32-utilities\") pod \"redhat-marketplace-ggkqg\" (UID: \"0cc54e47-af53-448a-b1c9-043710890a32\") " pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.824202 
29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e720e1d0-5a6d-4b76-8b25-5963e24950f5-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-w7xvp\" (UID: \"e720e1d0-5a6d-4b76-8b25-5963e24950f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.824317 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovn-node-metrics-cert\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.824589 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-cni-netd\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.824626 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52svc\" (UniqueName: \"kubernetes.io/projected/adb0dbbf-458d-46f5-b236-d4904e125418-kube-api-access-52svc\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.824648 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/41c1bd85-369e-4341-9e80-8b4b248b5572-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " 
pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.824675 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-proxy-ca-bundles\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.824800 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-cni-binary-copy\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.824819 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b6d288e3-8e73-44d2-874d-64c6c98dd991-metrics-tls\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.824932 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-tmp\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.824986 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-node-log\") pod \"ovnkube-node-hx8q8\" (UID: 
\"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.824992 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-tmp\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.825040 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b38e7fcd-8f7a-4d4f-8702-7ef205261054-tmpfs\") pod \"packageserver-694648486f-f89lc\" (UID: \"b38e7fcd-8f7a-4d4f-8702-7ef205261054\") " pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.825079 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6ggg\" (UniqueName: \"kubernetes.io/projected/236f2886-bb69-49a7-9471-36454fd1cbd3-kube-api-access-b6ggg\") pod \"openshift-apiserver-operator-799b6db4d7-6qlzz\" (UID: \"236f2886-bb69-49a7-9471-36454fd1cbd3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.825126 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.825132 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-cni-binary-copy\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.825129 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b38e7fcd-8f7a-4d4f-8702-7ef205261054-tmpfs\") pod \"packageserver-694648486f-f89lc\" (UID: \"b38e7fcd-8f7a-4d4f-8702-7ef205261054\") " pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.825162 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327-webhook-certs\") pod \"multus-admission-controller-7769569c45-dq2gs\" (UID: \"25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327\") " pod="openshift-multus/multus-admission-controller-7769569c45-dq2gs" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.825192 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/518ffff8-8119-41be-8b76-ce49d5751254-stats-auth\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.825248 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e22c7035-4b7a-48cb-9abb-db277b387842-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.825294 29097 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/518ffff8-8119-41be-8b76-ce49d5751254-default-certificate\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.825399 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4687cf53-55d7-42b7-b24d-e57da3989fd6-config\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.825437 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9b41258c-ac1d-4e00-ac5e-732d85441f12-etcd-client\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.825456 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee55b576-6b8d-4217-b5a7-93b023a1e885-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-k5hsv\" (UID: \"ee55b576-6b8d-4217-b5a7-93b023a1e885\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.825547 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlf77\" (UniqueName: \"kubernetes.io/projected/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-kube-api-access-wlf77\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.825583 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-cni-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.825663 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-client-ca\") pod \"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.825713 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d92dddc8-a810-43f5-8beb-32d1c8ad8381-iptables-alerter-script\") pod \"iptables-alerter-4k8wm\" (UID: \"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.825746 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b41258c-ac1d-4e00-ac5e-732d85441f12-serving-cert\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.825805 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/e22c7035-4b7a-48cb-9abb-db277b387842-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.825852 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h65dg\" (UniqueName: \"kubernetes.io/projected/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-kube-api-access-h65dg\") pod \"machine-config-server-2jzxq\" (UID: \"4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6\") " pod="openshift-machine-config-operator/machine-config-server-2jzxq" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.825880 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d92dddc8-a810-43f5-8beb-32d1c8ad8381-iptables-alerter-script\") pod \"iptables-alerter-4k8wm\" (UID: \"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.825972 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b41258c-ac1d-4e00-ac5e-732d85441f12-serving-cert\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826015 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-socket-dir-parent\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 
18:29:21.826165 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/518ffff8-8119-41be-8b76-ce49d5751254-service-ca-bundle\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826214 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/604044f4-9b0b-4747-827d-843f3cfa7077-images\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: \"604044f4-9b0b-4747-827d-843f3cfa7077\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826238 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4687cf53-55d7-42b7-b24d-e57da3989fd6-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826266 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826343 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn9nf\" (UniqueName: 
\"kubernetes.io/projected/062f1b21-2ffc-47da-8334-427c3b2a1a90-kube-api-access-jn9nf\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826363 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lj7z\" (UniqueName: \"kubernetes.io/projected/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-kube-api-access-2lj7z\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826387 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/604044f4-9b0b-4747-827d-843f3cfa7077-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: \"604044f4-9b0b-4747-827d-843f3cfa7077\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826406 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-var-lib-kubelet\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826424 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5e09875-4445-4584-94f0-243148307bb0-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 
18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826442 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clsd9\" (UniqueName: \"kubernetes.io/projected/f5e09875-4445-4584-94f0-243148307bb0-kube-api-access-clsd9\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826459 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-cni-bin\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826480 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee4c1949-96b4-4444-9675-9df1d46f681e-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826572 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826596 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/78c13011-7a79-445f-807c-4f5e75643549-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826615 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jgbv\" (UniqueName: \"kubernetes.io/projected/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-kube-api-access-9jgbv\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826632 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-ovn\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826649 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-slash\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826666 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-env-overrides\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826694 29097 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826719 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fb529297-b3de-4167-a91e-0a63725b3b0f-etcd-serving-ca\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826747 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826804 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826882 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-env-overrides\") 
pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.826911 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e720e1d0-5a6d-4b76-8b25-5963e24950f5-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-w7xvp\" (UID: \"e720e1d0-5a6d-4b76-8b25-5963e24950f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827011 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/062f1b21-2ffc-47da-8334-427c3b2a1a90-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: \"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827036 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e720e1d0-5a6d-4b76-8b25-5963e24950f5-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-w7xvp\" (UID: \"e720e1d0-5a6d-4b76-8b25-5963e24950f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827044 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5lf8\" (UniqueName: \"kubernetes.io/projected/ee55b576-6b8d-4217-b5a7-93b023a1e885-kube-api-access-j5lf8\") pod \"machine-config-controller-ff46b7bdf-k5hsv\" (UID: \"ee55b576-6b8d-4217-b5a7-93b023a1e885\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv" Mar 12 18:29:21.832784 master-0 
kubenswrapper[29097]: I0312 18:29:21.827080 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827101 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/78c13011-7a79-445f-807c-4f5e75643549-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827143 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovnkube-script-lib\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827165 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ee4c1949-96b4-4444-9675-9df1d46f681e-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827364 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/b8dd13a7-10e5-431b-8d30-405dcfea02f5-ovnkube-script-lib\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827420 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb529297-b3de-4167-a91e-0a63725b3b0f-serving-cert\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827450 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4519000b-e475-4c26-a1c0-bf05cd9c242b-catalog-content\") pod \"community-operators-nmmwm\" (UID: \"4519000b-e475-4c26-a1c0-bf05cd9c242b\") " pod="openshift-marketplace/community-operators-nmmwm"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827469 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/f3a2cda2-b70f-4128-a1be-48503f5aad6d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-6nvn4\" (UID: \"f3a2cda2-b70f-4128-a1be-48503f5aad6d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827503 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-whereabouts-configmap\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827537 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4519000b-e475-4c26-a1c0-bf05cd9c242b-utilities\") pod \"community-operators-nmmwm\" (UID: \"4519000b-e475-4c26-a1c0-bf05cd9c242b\") " pod="openshift-marketplace/community-operators-nmmwm"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827554 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-var-lib-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827598 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: \"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827654 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/f3a2cda2-b70f-4128-a1be-48503f5aad6d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-6nvn4\" (UID: \"f3a2cda2-b70f-4128-a1be-48503f5aad6d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827686 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4519000b-e475-4c26-a1c0-bf05cd9c242b-catalog-content\") pod \"community-operators-nmmwm\" (UID: \"4519000b-e475-4c26-a1c0-bf05cd9c242b\") " pod="openshift-marketplace/community-operators-nmmwm"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827741 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p4dz\" (UniqueName: \"kubernetes.io/projected/030160af-c915-4f00-903a-1c4b5c2b719a-kube-api-access-9p4dz\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827773 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdb9w\" (UniqueName: \"kubernetes.io/projected/b6d288e3-8e73-44d2-874d-64c6c98dd991-kube-api-access-vdb9w\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " pod="openshift-network-operator/network-operator-7c649bf6d4-vksss"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827803 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-ca\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827832 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-config\") pod \"openshift-controller-manager-operator-8565d84698-m6hsp\" (UID: \"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827860 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s55hv\" (UniqueName: \"kubernetes.io/projected/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-kube-api-access-s55hv\") pod \"openshift-controller-manager-operator-8565d84698-m6hsp\" (UID: \"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827881 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-metrics-tls\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: \"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827893 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsdjs\" (UniqueName: \"kubernetes.io/projected/0fb78c61-2051-42e2-8668-fa7404ccac43-kube-api-access-zsdjs\") pod \"cluster-samples-operator-664cb58b85-8762n\" (UID: \"0fb78c61-2051-42e2-8668-fa7404ccac43\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-8762n"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827949 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp5gk\" (UniqueName: \"kubernetes.io/projected/b38e7fcd-8f7a-4d4f-8702-7ef205261054-kube-api-access-zp5gk\") pod \"packageserver-694648486f-f89lc\" (UID: \"b38e7fcd-8f7a-4d4f-8702-7ef205261054\") " pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.827980 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clz8x\" (UniqueName: \"kubernetes.io/projected/1c016b1e-d47c-47d4-a15f-4160e7731c82-kube-api-access-clz8x\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.828010 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-cni-bin\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.828029 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/455f0aad-add2-49d0-995c-f92467bce2d6-whereabouts-configmap\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.828036 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/518ffff8-8119-41be-8b76-ce49d5751254-metrics-certs\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.828067 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bttzm\" (UniqueName: \"kubernetes.io/projected/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-kube-api-access-bttzm\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.828078 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4519000b-e475-4c26-a1c0-bf05cd9c242b-utilities\") pod \"community-operators-nmmwm\" (UID: \"4519000b-e475-4c26-a1c0-bf05cd9c242b\") " pod="openshift-marketplace/community-operators-nmmwm"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.828100 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9b41258c-ac1d-4e00-ac5e-732d85441f12-audit-dir\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.828132 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b38e7fcd-8f7a-4d4f-8702-7ef205261054-webhook-cert\") pod \"packageserver-694648486f-f89lc\" (UID: \"b38e7fcd-8f7a-4d4f-8702-7ef205261054\") " pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.828161 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.828328 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-config\") pod \"openshift-controller-manager-operator-8565d84698-m6hsp\" (UID: \"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.828335 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee4c1949-96b4-4444-9675-9df1d46f681e-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl"
Mar 12 18:29:21.832784 master-0 kubenswrapper[29097]: I0312 18:29:21.828484 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e697746f-fb9e-4d10-ab61-33c68e62cc0d-etcd-ca\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b"
Mar 12 18:29:21.876259 master-0 kubenswrapper[29097]: I0312 18:29:21.876221 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 12 18:29:21.880556 master-0 kubenswrapper[29097]: I0312 18:29:21.880522 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 12 18:29:21.880865 master-0 kubenswrapper[29097]: I0312 18:29:21.880841 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-audit\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:29:21.883919 master-0 kubenswrapper[29097]: I0312 18:29:21.882760 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 12 18:29:21.884777 master-0 kubenswrapper[29097]: I0312 18:29:21.884749 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-etcd-serving-ca\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:29:21.898537 master-0 kubenswrapper[29097]: I0312 18:29:21.895847 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-trusted-ca-bundle\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:29:21.902026 master-0 kubenswrapper[29097]: I0312 18:29:21.899907 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 12 18:29:21.906818 master-0 kubenswrapper[29097]: I0312 18:29:21.905219 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-image-import-ca\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:29:21.921541 master-0 kubenswrapper[29097]: I0312 18:29:21.920305 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 12 18:29:21.921541 master-0 kubenswrapper[29097]: I0312 18:29:21.921525 29097 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.930286 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-lib-modules\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.930371 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/b6d288e3-8e73-44d2-874d-64c6c98dd991-host-etc-kube\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " pod="openshift-network-operator/network-operator-7c649bf6d4-vksss"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.930415 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-netns\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.930483 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.930538 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-host\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.930592 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d92dddc8-a810-43f5-8beb-32d1c8ad8381-host-slash\") pod \"iptables-alerter-4k8wm\" (UID: \"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.930661 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-netns\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.930714 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d92dddc8-a810-43f5-8beb-32d1c8ad8381-host-slash\") pod \"iptables-alerter-4k8wm\" (UID: \"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.930783 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/b6d288e3-8e73-44d2-874d-64c6c98dd991-host-etc-kube\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " pod="openshift-network-operator/network-operator-7c649bf6d4-vksss"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.930862 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-lib-modules\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.930888 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.930931 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.930968 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-host\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.930981 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/adb0dbbf-458d-46f5-b236-d4904e125418-sys\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.931041 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.931086 29097 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.931108 29097 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.931114 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/adb0dbbf-458d-46f5-b236-d4904e125418-sys\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.931113 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.931122 29097 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.931195 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.931291 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-k8s-cni-cncf-io\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.931327 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-modprobe-d\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.931355 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.931420 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d1b3859c-20a1-4a1c-8508-86ed843768f5-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.931444 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-k8s-cni-cncf-io\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.931468 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-kubernetes\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.931635 29097 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.931790 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d1b3859c-20a1-4a1c-8508-86ed843768f5-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.931881 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9b41258c-ac1d-4e00-ac5e-732d85441f12-node-pullsecrets\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.931925 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-kubelet\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.931950 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/adb0dbbf-458d-46f5-b236-d4904e125418-root\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.931977 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-cni-netd\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.932062 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-node-log\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.932102 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.932197 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-cni-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.932250 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-socket-dir-parent\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.932364 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-var-lib-kubelet\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.932413 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-cni-bin\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.932482 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.932594 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-ovn\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.932619 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-slash\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.932725 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.932740 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-node-log\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.932789 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-cni-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.932919 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-modprobe-d\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.932974 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-socket-dir-parent\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.932990 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-var-lib-kubelet\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.933018 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9b41258c-ac1d-4e00-ac5e-732d85441f12-node-pullsecrets\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.933017 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-kubernetes\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.933038 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-cni-bin\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.933055 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-kubelet\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.933065 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.933075 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/adb0dbbf-458d-46f5-b236-d4904e125418-root\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.933093 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-ovn\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.933116 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-cni-netd\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.933136 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-slash\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.933295 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-var-lib-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.933430 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-cni-bin\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.933485 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9b41258c-ac1d-4e00-ac5e-732d85441f12-audit-dir\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.933542 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.933609 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9b41258c-ac1d-4e00-ac5e-732d85441f12-audit-dir\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.933622 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-var-lib-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.933707 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-systemd\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.933749 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-cni-bin\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.933747 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d1b3859c-20a1-4a1c-8508-86ed843768f5-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.933794 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.933795 29097 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d1b3859c-20a1-4a1c-8508-86ed843768f5-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.933842 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-systemd\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.933996 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-system-cni-dir\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.934038 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-os-release\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.934148 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-cni-multus\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 
18:29:21.934231 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb529297-b3de-4167-a91e-0a63725b3b0f-audit-dir\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.934254 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-systemd-units\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.934303 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-wtmp\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.934376 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-run-netns\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.934444 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-sysctl-conf\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: 
I0312 18:29:21.934467 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9717d467-af1a-4de0-88e0-c47ec4d12d6e-hosts-file\") pod \"node-resolver-7lzgx\" (UID: \"9717d467-af1a-4de0-88e0-c47ec4d12d6e\") " pod="openshift-dns/node-resolver-7lzgx" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.934489 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-os-release\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.934537 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-sys\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.934561 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-cnibin\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.934585 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-cnibin\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.934610 29097 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ee4c1949-96b4-4444-9675-9df1d46f681e-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.934653 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-run-netns\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.934715 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-wtmp\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.934792 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-sysctl-conf\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.934805 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-system-cni-dir\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 
18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.934867 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb529297-b3de-4167-a91e-0a63725b3b0f-audit-dir\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.934922 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-os-release\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.934946 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-cni-multus\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.934980 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9717d467-af1a-4de0-88e0-c47ec4d12d6e-hosts-file\") pod \"node-resolver-7lzgx\" (UID: \"9717d467-af1a-4de0-88e0-c47ec4d12d6e\") " pod="openshift-dns/node-resolver-7lzgx" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.934990 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-systemd-units\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.944787 master-0 
kubenswrapper[29097]: I0312 18:29:21.935026 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-os-release\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935073 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-hostroot\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935078 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ee4c1949-96b4-4444-9675-9df1d46f681e-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935108 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-hostroot\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935118 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/455f0aad-add2-49d0-995c-f92467bce2d6-cnibin\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: 
I0312 18:29:21.935140 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-etc-kubernetes\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935161 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-cnibin\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935203 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4048e453-a983-4708-89b6-a81af0067e29-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935221 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-etc-kubernetes\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935239 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-etc-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935256 29097 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4048e453-a983-4708-89b6-a81af0067e29-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935277 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-system-cni-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935292 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-etc-openvswitch\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935300 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-kubelet\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935330 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-var-lib-kubelet\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935367 29097 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-system-cni-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935399 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-multus-certs\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935459 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-run\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935487 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4048e453-a983-4708-89b6-a81af0067e29-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935489 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-host-run-multus-certs\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935563 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"run\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-run\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935592 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/492e9833-4513-4f2f-b865-d05a8973fadc-rootfs\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935597 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4048e453-a983-4708-89b6-a81af0067e29-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935675 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-systemd\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935699 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-run-ovn-kubernetes\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935712 29097 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/492e9833-4513-4f2f-b865-d05a8973fadc-rootfs\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935786 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-run-systemd\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.935814 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-host-run-ovn-kubernetes\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.936997 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.937050 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-sysctl-d\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.937090 29097 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.937101 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-sysctl-d\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.937138 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-sys\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.937187 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-sysconfig\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.937228 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-log-socket\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.937289 29097 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-etc-sysconfig\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.937310 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-conf-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.937350 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b8dd13a7-10e5-431b-8d30-405dcfea02f5-log-socket\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:21.944787 master-0 kubenswrapper[29097]: I0312 18:29:21.937455 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-multus-conf-dir\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8" Mar 12 18:29:21.953056 master-0 kubenswrapper[29097]: I0312 18:29:21.953022 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 12 18:29:21.965558 master-0 kubenswrapper[29097]: I0312 18:29:21.964005 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 12 18:29:21.968981 master-0 kubenswrapper[29097]: I0312 18:29:21.968744 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Mar 12 18:29:21.970233 master-0 kubenswrapper[29097]: I0312 18:29:21.970198 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b41258c-ac1d-4e00-ac5e-732d85441f12-config\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb"
Mar 12 18:29:21.978418 master-0 kubenswrapper[29097]: I0312 18:29:21.978385 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Mar 12 18:29:21.981732 master-0 kubenswrapper[29097]: I0312 18:29:21.981497 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 12 18:29:22.001121 master-0 kubenswrapper[29097]: I0312 18:29:22.001078 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 12 18:29:22.021166 master-0 kubenswrapper[29097]: I0312 18:29:22.021120 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 12 18:29:22.037871 master-0 kubenswrapper[29097]: I0312 18:29:22.037834 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kubelet-dir\") pod \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") "
Mar 12 18:29:22.038073 master-0 kubenswrapper[29097]: I0312 18:29:22.038059 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-var-lock\") pod \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") "
Mar 12 18:29:22.038700 master-0 kubenswrapper[29097]: I0312 18:29:22.037970 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4cb73c69-af16-4565-bdb5-aeae9dcfb423" (UID: "4cb73c69-af16-4565-bdb5-aeae9dcfb423"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:29:22.038779 master-0 kubenswrapper[29097]: I0312 18:29:22.038166 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-var-lock" (OuterVolumeSpecName: "var-lock") pod "4cb73c69-af16-4565-bdb5-aeae9dcfb423" (UID: "4cb73c69-af16-4565-bdb5-aeae9dcfb423"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:29:22.039434 master-0 kubenswrapper[29097]: I0312 18:29:22.039418 29097 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 18:29:22.039536 master-0 kubenswrapper[29097]: I0312 18:29:22.039506 29097 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4cb73c69-af16-4565-bdb5-aeae9dcfb423-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 18:29:22.040709 master-0 kubenswrapper[29097]: I0312 18:29:22.040486 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 12 18:29:22.040981 master-0 kubenswrapper[29097]: I0312 18:29:22.040966 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-webhook-cert\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt"
Mar 12 18:29:22.061279 master-0 kubenswrapper[29097]: I0312 18:29:22.061247 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 12 18:29:22.069001 master-0 kubenswrapper[29097]: I0312 18:29:22.068955 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/518ffff8-8119-41be-8b76-ce49d5751254-metrics-certs\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf"
Mar 12 18:29:22.080174 master-0 kubenswrapper[29097]: I0312 18:29:22.080146 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 12 18:29:22.086075 master-0 kubenswrapper[29097]: I0312 18:29:22.086048 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/518ffff8-8119-41be-8b76-ce49d5751254-stats-auth\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf"
Mar 12 18:29:22.100490 master-0 kubenswrapper[29097]: I0312 18:29:22.100459 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 12 18:29:22.107607 master-0 kubenswrapper[29097]: I0312 18:29:22.107580 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/518ffff8-8119-41be-8b76-ce49d5751254-service-ca-bundle\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf"
Mar 12 18:29:22.121507 master-0 kubenswrapper[29097]: I0312 18:29:22.121470 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 12 18:29:22.139959 master-0 kubenswrapper[29097]: I0312 18:29:22.139904 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 12 18:29:22.160168 master-0 kubenswrapper[29097]: I0312 18:29:22.160129 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 12 18:29:22.170163 master-0 kubenswrapper[29097]: I0312 18:29:22.170111 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/266b9f4f-3fb4-474d-84df-0a6c687c7e9a-config-volume\") pod \"dns-default-6h5tt\" (UID: \"266b9f4f-3fb4-474d-84df-0a6c687c7e9a\") " pod="openshift-dns/dns-default-6h5tt"
Mar 12 18:29:22.180748 master-0 kubenswrapper[29097]: I0312 18:29:22.180705 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 12 18:29:22.182925 master-0 kubenswrapper[29097]: I0312 18:29:22.182900 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/266b9f4f-3fb4-474d-84df-0a6c687c7e9a-metrics-tls\") pod \"dns-default-6h5tt\" (UID: \"266b9f4f-3fb4-474d-84df-0a6c687c7e9a\") " pod="openshift-dns/dns-default-6h5tt"
Mar 12 18:29:22.200522 master-0 kubenswrapper[29097]: I0312 18:29:22.200430 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Mar 12 18:29:22.210309 master-0 kubenswrapper[29097]: I0312 18:29:22.210268 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/d1b3859c-20a1-4a1c-8508-86ed843768f5-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc"
Mar 12 18:29:22.220606 master-0 kubenswrapper[29097]: I0312 18:29:22.220574 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 12 18:29:22.226656 master-0 kubenswrapper[29097]: I0312 18:29:22.226632 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/518ffff8-8119-41be-8b76-ce49d5751254-default-certificate\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf"
Mar 12 18:29:22.246928 master-0 kubenswrapper[29097]: I0312 18:29:22.246891 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 12 18:29:22.261366 master-0 kubenswrapper[29097]: I0312 18:29:22.261333 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Mar 12 18:29:22.271400 master-0 kubenswrapper[29097]: I0312 18:29:22.271373 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d1b3859c-20a1-4a1c-8508-86ed843768f5-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc"
Mar 12 18:29:22.280309 master-0 kubenswrapper[29097]: I0312 18:29:22.280271 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Mar 12 18:29:22.300726 master-0 kubenswrapper[29097]: I0312 18:29:22.300682 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 12 18:29:22.326197 master-0 kubenswrapper[29097]: I0312 18:29:22.326152 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 12 18:29:22.340737 master-0 kubenswrapper[29097]: I0312 18:29:22.340705 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Mar 12 18:29:22.360221 master-0 kubenswrapper[29097]: I0312 18:29:22.360167 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 12 18:29:22.361610 master-0 kubenswrapper[29097]: I0312 18:29:22.361565 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn"
Mar 12 18:29:22.382132 master-0 kubenswrapper[29097]: I0312 18:29:22.382047 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 12 18:29:22.388369 master-0 kubenswrapper[29097]: I0312 18:29:22.388320 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb529297-b3de-4167-a91e-0a63725b3b0f-serving-cert\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:29:22.400622 master-0 kubenswrapper[29097]: I0312 18:29:22.400565 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 12 18:29:22.400941 master-0 kubenswrapper[29097]: I0312 18:29:22.400891 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fb529297-b3de-4167-a91e-0a63725b3b0f-etcd-client\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:29:22.420628 master-0 kubenswrapper[29097]: I0312 18:29:22.420568 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 12 18:29:22.428050 master-0 kubenswrapper[29097]: I0312 18:29:22.427639 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fb529297-b3de-4167-a91e-0a63725b3b0f-etcd-serving-ca\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:29:22.440279 master-0 kubenswrapper[29097]: I0312 18:29:22.440222 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 12 18:29:22.442648 master-0 kubenswrapper[29097]: I0312 18:29:22.442309 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fb529297-b3de-4167-a91e-0a63725b3b0f-encryption-config\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:29:22.461229 master-0 kubenswrapper[29097]: I0312 18:29:22.461079 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 12 18:29:22.465658 master-0 kubenswrapper[29097]: I0312 18:29:22.464648 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fb529297-b3de-4167-a91e-0a63725b3b0f-audit-policies\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:29:22.480154 master-0 kubenswrapper[29097]: I0312 18:29:22.480104 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 12 18:29:22.480531 master-0 kubenswrapper[29097]: I0312 18:29:22.480487 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb529297-b3de-4167-a91e-0a63725b3b0f-trusted-ca-bundle\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg"
Mar 12 18:29:22.500712 master-0 kubenswrapper[29097]: I0312 18:29:22.500671 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 12 18:29:22.521226 master-0 kubenswrapper[29097]: I0312 18:29:22.521128 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 12 18:29:22.540591 master-0 kubenswrapper[29097]: I0312 18:29:22.540551 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 12 18:29:22.542997 master-0 kubenswrapper[29097]: I0312 18:29:22.542952 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4048e453-a983-4708-89b6-a81af0067e29-serving-cert\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k"
Mar 12 18:29:22.560713 master-0 kubenswrapper[29097]: I0312 18:29:22.560645 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 12 18:29:22.561881 master-0 kubenswrapper[29097]: I0312 18:29:22.561841 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4048e453-a983-4708-89b6-a81af0067e29-service-ca\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k"
Mar 12 18:29:22.581208 master-0 kubenswrapper[29097]: I0312 18:29:22.581158 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 12 18:29:22.586212 master-0 kubenswrapper[29097]: E0312 18:29:22.586171 29097 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 18:29:22.600643 master-0 kubenswrapper[29097]: I0312 18:29:22.600502 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 12 18:29:22.603426 master-0 kubenswrapper[29097]: I0312 18:29:22.603385 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/52d3dc87-0bf8-4a62-a6fc-0ffa6060c6a8-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-tzgs9\" (UID: \"52d3dc87-0bf8-4a62-a6fc-0ffa6060c6a8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-tzgs9"
Mar 12 18:29:22.620278 master-0 kubenswrapper[29097]: I0312 18:29:22.620223 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-b88ct"
Mar 12 18:29:22.641680 master-0 kubenswrapper[29097]: I0312 18:29:22.641606 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-7k9rb"
Mar 12 18:29:22.661105 master-0 kubenswrapper[29097]: I0312 18:29:22.661036 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-hhnmb"
Mar 12 18:29:22.680067 master-0 kubenswrapper[29097]: I0312 18:29:22.680009 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-h5f5n"
Mar 12 18:29:22.703480 master-0 kubenswrapper[29097]: I0312 18:29:22.701365 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 12 18:29:22.710479 master-0 kubenswrapper[29097]: I0312 18:29:22.710414 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/34cbf061-4c76-476e-bed9-0a133c744862-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-zd9gm\" (UID: \"34cbf061-4c76-476e-bed9-0a133c744862\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm"
Mar 12 18:29:22.718992 master-0 kubenswrapper[29097]: I0312 18:29:22.718898 29097 request.go:700] Waited for 1.014667138s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Mar 12 18:29:22.721842 master-0 kubenswrapper[29097]: I0312 18:29:22.721754 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 12 18:29:22.746502 master-0 kubenswrapper[29097]: I0312 18:29:22.746309 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 12 18:29:22.765594 master-0 kubenswrapper[29097]: I0312 18:29:22.764944 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-gzn76"
Mar 12 18:29:22.802044 master-0 kubenswrapper[29097]: I0312 18:29:22.801857 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 12 18:29:22.807682 master-0 kubenswrapper[29097]: I0312 18:29:22.805814 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-hsjbb"
Mar 12 18:29:22.823122 master-0 kubenswrapper[29097]: E0312 18:29:22.820655 29097 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.823122 master-0 kubenswrapper[29097]: E0312 18:29:22.820767 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adb0dbbf-458d-46f5-b236-d4904e125418-metrics-client-ca podName:adb0dbbf-458d-46f5-b236-d4904e125418 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.320745013 +0000 UTC m=+2.874725100 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/adb0dbbf-458d-46f5-b236-d4904e125418-metrics-client-ca") pod "node-exporter-6v462" (UID: "adb0dbbf-458d-46f5-b236-d4904e125418") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.823901 master-0 kubenswrapper[29097]: E0312 18:29:22.823840 29097 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.824152 master-0 kubenswrapper[29097]: E0312 18:29:22.824131 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/492e9833-4513-4f2f-b865-d05a8973fadc-mcd-auth-proxy-config podName:492e9833-4513-4f2f-b865-d05a8973fadc nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.324087998 +0000 UTC m=+2.878068115 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/492e9833-4513-4f2f-b865-d05a8973fadc-mcd-auth-proxy-config") pod "machine-config-daemon-mfv5x" (UID: "492e9833-4513-4f2f-b865-d05a8973fadc") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.824322 master-0 kubenswrapper[29097]: E0312 18:29:22.824301 29097 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.824482 master-0 kubenswrapper[29097]: E0312 18:29:22.824464 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-certs podName:4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.324448667 +0000 UTC m=+2.878428774 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-certs") pod "machine-config-server-2jzxq" (UID: "4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6") : failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.824661 master-0 kubenswrapper[29097]: E0312 18:29:22.824638 29097 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.824824 master-0 kubenswrapper[29097]: E0312 18:29:22.824807 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c016b1e-d47c-47d4-a15f-4160e7731c82-serving-cert podName:1c016b1e-d47c-47d4-a15f-4160e7731c82 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.324790965 +0000 UTC m=+2.878771072 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1c016b1e-d47c-47d4-a15f-4160e7731c82-serving-cert") pod "controller-manager-7cd74f9776-2rmc9" (UID: "1c016b1e-d47c-47d4-a15f-4160e7731c82") : failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.829080 master-0 kubenswrapper[29097]: E0312 18:29:22.828623 29097 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.829080 master-0 kubenswrapper[29097]: E0312 18:29:22.828740 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5fb0152-3efd-4000-bce3-fa90b75316ae-cluster-baremetal-operator-tls podName:e5fb0152-3efd-4000-bce3-fa90b75316ae nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.328717794 +0000 UTC m=+2.882697891 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/e5fb0152-3efd-4000-bce3-fa90b75316ae-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5cdb4c5598-2psgb" (UID: "e5fb0152-3efd-4000-bce3-fa90b75316ae") : failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.829080 master-0 kubenswrapper[29097]: E0312 18:29:22.828767 29097 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-7hcn2cdka018u: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.829080 master-0 kubenswrapper[29097]: E0312 18:29:22.828792 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-client-ca-bundle podName:9f1f60fa-d79d-4f31-b5bf-2ad333151537 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.328786246 +0000 UTC m=+2.882766343 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-client-ca-bundle") pod "metrics-server-5784dff469-l5d64" (UID: "9f1f60fa-d79d-4f31-b5bf-2ad333151537") : failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.829080 master-0 kubenswrapper[29097]: E0312 18:29:22.828855 29097 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.829080 master-0 kubenswrapper[29097]: E0312 18:29:22.828878 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41c1bd85-369e-4341-9e80-8b4b248b5572-metrics-client-ca podName:41c1bd85-369e-4341-9e80-8b4b248b5572 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.328872418 +0000 UTC m=+2.882852515 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/41c1bd85-369e-4341-9e80-8b4b248b5572-metrics-client-ca") pod "prometheus-operator-5ff8674d55-qs7tx" (UID: "41c1bd85-369e-4341-9e80-8b4b248b5572") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.829080 master-0 kubenswrapper[29097]: E0312 18:29:22.828891 29097 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.829080 master-0 kubenswrapper[29097]: E0312 18:29:22.828916 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327-webhook-certs podName:25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.328907299 +0000 UTC m=+2.882887396 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327-webhook-certs") pod "multus-admission-controller-7769569c45-dq2gs" (UID: "25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327") : failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.829080 master-0 kubenswrapper[29097]: E0312 18:29:22.828935 29097 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.829080 master-0 kubenswrapper[29097]: E0312 18:29:22.828957 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78c13011-7a79-445f-807c-4f5e75643549-metrics-client-ca podName:78c13011-7a79-445f-807c-4f5e75643549 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.32895144 +0000 UTC m=+2.882931537 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/78c13011-7a79-445f-807c-4f5e75643549-metrics-client-ca") pod "openshift-state-metrics-74cc79fd76-f59x9" (UID: "78c13011-7a79-445f-807c-4f5e75643549") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.829080 master-0 kubenswrapper[29097]: E0312 18:29:22.828969 29097 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.829869 master-0 kubenswrapper[29097]: E0312 18:29:22.828991 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-kube-rbac-proxy-config podName:41c1bd85-369e-4341-9e80-8b4b248b5572 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.328985171 +0000 UTC m=+2.882965268 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-5ff8674d55-qs7tx" (UID: "41c1bd85-369e-4341-9e80-8b4b248b5572") : failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.829869 master-0 kubenswrapper[29097]: E0312 18:29:22.829751 29097 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.829869 master-0 kubenswrapper[29097]: E0312 18:29:22.829786 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee4c1949-96b4-4444-9675-9df1d46f681e-images podName:ee4c1949-96b4-4444-9675-9df1d46f681e nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.329778331 +0000 UTC m=+2.883758428 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/ee4c1949-96b4-4444-9675-9df1d46f681e-images") pod "cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" (UID: "ee4c1949-96b4-4444-9675-9df1d46f681e") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.831869 master-0 kubenswrapper[29097]: E0312 18:29:22.829935 29097 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.831869 master-0 kubenswrapper[29097]: E0312 18:29:22.829962 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-auth-proxy-config podName:030160af-c915-4f00-903a-1c4b5c2b719a nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.329955375 +0000 UTC m=+2.883935472 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-auth-proxy-config") pod "machine-approver-754bdc9f9d-4w5z7" (UID: "030160af-c915-4f00-903a-1c4b5c2b719a") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.831869 master-0 kubenswrapper[29097]: E0312 18:29:22.829975 29097 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.831869 master-0 kubenswrapper[29097]: E0312 18:29:22.829996 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-client-certs podName:9f1f60fa-d79d-4f31-b5bf-2ad333151537 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.329989266 +0000 UTC m=+2.883969363 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-client-certs") pod "metrics-server-5784dff469-l5d64" (UID: "9f1f60fa-d79d-4f31-b5bf-2ad333151537") : failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.831869 master-0 kubenswrapper[29097]: E0312 18:29:22.830019 29097 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.831869 master-0 kubenswrapper[29097]: E0312 18:29:22.830039 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aee40f88-83e4-45c8-8331-969943f9f9aa-auth-proxy-config podName:aee40f88-83e4-45c8-8331-969943f9f9aa nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.330034027 +0000 UTC m=+2.884014124 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/aee40f88-83e4-45c8-8331-969943f9f9aa-auth-proxy-config") pod "cluster-autoscaler-operator-69576476f7-hkfnq" (UID: "aee40f88-83e4-45c8-8331-969943f9f9aa") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.831869 master-0 kubenswrapper[29097]: E0312 18:29:22.830050 29097 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.831869 master-0 kubenswrapper[29097]: E0312 18:29:22.830073 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aee40f88-83e4-45c8-8331-969943f9f9aa-cert podName:aee40f88-83e4-45c8-8331-969943f9f9aa nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.330063568 +0000 UTC m=+2.884043665 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/aee40f88-83e4-45c8-8331-969943f9f9aa-cert") pod "cluster-autoscaler-operator-69576476f7-hkfnq" (UID: "aee40f88-83e4-45c8-8331-969943f9f9aa") : failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.831869 master-0 kubenswrapper[29097]: E0312 18:29:22.830087 29097 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.831869 master-0 kubenswrapper[29097]: E0312 18:29:22.830110 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-node-bootstrap-token podName:4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.330105119 +0000 UTC m=+2.884085216 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-node-bootstrap-token") pod "machine-config-server-2jzxq" (UID: "4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6") : failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.831869 master-0 kubenswrapper[29097]: E0312 18:29:22.830174 29097 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.831869 master-0 kubenswrapper[29097]: E0312 18:29:22.830194 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/030160af-c915-4f00-903a-1c4b5c2b719a-machine-approver-tls podName:030160af-c915-4f00-903a-1c4b5c2b719a nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.330189021 +0000 UTC m=+2.884169118 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/030160af-c915-4f00-903a-1c4b5c2b719a-machine-approver-tls") pod "machine-approver-754bdc9f9d-4w5z7" (UID: "030160af-c915-4f00-903a-1c4b5c2b719a") : failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.834136 master-0 kubenswrapper[29097]: E0312 18:29:22.834062 29097 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.834279 master-0 kubenswrapper[29097]: E0312 18:29:22.834236 29097 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.834349 master-0 kubenswrapper[29097]: E0312 18:29:22.834313 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-kube-rbac-proxy-config podName:adb0dbbf-458d-46f5-b236-d4904e125418 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.334296924 +0000 UTC m=+2.888277031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-kube-rbac-proxy-config") pod "node-exporter-6v462" (UID: "adb0dbbf-458d-46f5-b236-d4904e125418") : failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.834349 master-0 kubenswrapper[29097]: I0312 18:29:22.834078 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 12 18:29:22.834468 master-0 kubenswrapper[29097]: E0312 18:29:22.834144 29097 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.834548 master-0 kubenswrapper[29097]: E0312 18:29:22.834500 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5fb0152-3efd-4000-bce3-fa90b75316ae-images podName:e5fb0152-3efd-4000-bce3-fa90b75316ae nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.334489169 +0000 UTC m=+2.888469466 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/e5fb0152-3efd-4000-bce3-fa90b75316ae-images") pod "cluster-baremetal-operator-5cdb4c5598-2psgb" (UID: "e5fb0152-3efd-4000-bce3-fa90b75316ae") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.834548 master-0 kubenswrapper[29097]: E0312 18:29:22.834168 29097 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.834697 master-0 kubenswrapper[29097]: E0312 18:29:22.834559 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-cco-trusted-ca podName:b2c6cd11-b1ed-4fed-a4ce-4eee0af20868 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.334551481 +0000 UTC m=+2.888531578 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-cco-trusted-ca") pod "cloud-credential-operator-55d85b7b47-vk7lr" (UID: "b2c6cd11-b1ed-4fed-a4ce-4eee0af20868") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.834697 master-0 kubenswrapper[29097]: E0312 18:29:22.834183 29097 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.834697 master-0 kubenswrapper[29097]: E0312 18:29:22.834597 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-cloud-credential-operator-serving-cert podName:b2c6cd11-b1ed-4fed-a4ce-4eee0af20868 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.334588852 +0000 UTC m=+2.888568969 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-55d85b7b47-vk7lr" (UID: "b2c6cd11-b1ed-4fed-a4ce-4eee0af20868") : failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.834697 master-0 kubenswrapper[29097]: E0312 18:29:22.834206 29097 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.834697 master-0 kubenswrapper[29097]: E0312 18:29:22.834632 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-config podName:be2da107-a419-423f-a657-44d681291f28 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.334624953 +0000 UTC m=+2.888605250 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-config") pod "route-controller-manager-7db5456fb7-csszs" (UID: "be2da107-a419-423f-a657-44d681291f28") : failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.834697 master-0 kubenswrapper[29097]: E0312 18:29:22.834225 29097 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.834697 master-0 kubenswrapper[29097]: E0312 18:29:22.834663 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be2da107-a419-423f-a657-44d681291f28-serving-cert podName:be2da107-a419-423f-a657-44d681291f28 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.334656644 +0000 UTC m=+2.888636741 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/be2da107-a419-423f-a657-44d681291f28-serving-cert") pod "route-controller-manager-7db5456fb7-csszs" (UID: "be2da107-a419-423f-a657-44d681291f28") : failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.834697 master-0 kubenswrapper[29097]: E0312 18:29:22.834688 29097 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.835147 master-0 kubenswrapper[29097]: E0312 18:29:22.834716 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee55b576-6b8d-4217-b5a7-93b023a1e885-proxy-tls podName:ee55b576-6b8d-4217-b5a7-93b023a1e885 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.334709295 +0000 UTC m=+2.888689402 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/ee55b576-6b8d-4217-b5a7-93b023a1e885-proxy-tls") pod "machine-config-controller-ff46b7bdf-k5hsv" (UID: "ee55b576-6b8d-4217-b5a7-93b023a1e885") : failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.835147 master-0 kubenswrapper[29097]: E0312 18:29:22.834754 29097 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.835147 master-0 kubenswrapper[29097]: E0312 18:29:22.834797 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-proxy-ca-bundles podName:1c016b1e-d47c-47d4-a15f-4160e7731c82 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.334787367 +0000 UTC m=+2.888767674 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-proxy-ca-bundles") pod "controller-manager-7cd74f9776-2rmc9" (UID: "1c016b1e-d47c-47d4-a15f-4160e7731c82") : failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.835147 master-0 kubenswrapper[29097]: E0312 18:29:22.834851 29097 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.835147 master-0 kubenswrapper[29097]: E0312 18:29:22.834882 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee4c1949-96b4-4444-9675-9df1d46f681e-auth-proxy-config podName:ee4c1949-96b4-4444-9675-9df1d46f681e nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.334875689 +0000 UTC m=+2.888855786 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee4c1949-96b4-4444-9675-9df1d46f681e-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" (UID: "ee4c1949-96b4-4444-9675-9df1d46f681e") : failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.835147 master-0 kubenswrapper[29097]: E0312 18:29:22.834911 29097 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.835147 master-0 kubenswrapper[29097]: E0312 18:29:22.834943 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4687cf53-55d7-42b7-b24d-e57da3989fd6-images podName:4687cf53-55d7-42b7-b24d-e57da3989fd6 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.334934121 +0000 UTC m=+2.888914218 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4687cf53-55d7-42b7-b24d-e57da3989fd6-images") pod "machine-api-operator-84bf6db4f9-gnrzd" (UID: "4687cf53-55d7-42b7-b24d-e57da3989fd6") : failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.835147 master-0 kubenswrapper[29097]: E0312 18:29:22.834962 29097 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.835147 master-0 kubenswrapper[29097]: E0312 18:29:22.834988 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5e09875-4445-4584-94f0-243148307bb0-serving-cert podName:f5e09875-4445-4584-94f0-243148307bb0 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.334980242 +0000 UTC m=+2.888960569 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f5e09875-4445-4584-94f0-243148307bb0-serving-cert") pod "insights-operator-8f89dfddd-m6z6d" (UID: "f5e09875-4445-4584-94f0-243148307bb0") : failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.835147 master-0 kubenswrapper[29097]: E0312 18:29:22.835010 29097 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.835147 master-0 kubenswrapper[29097]: E0312 18:29:22.835040 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/492e9833-4513-4f2f-b865-d05a8973fadc-proxy-tls podName:492e9833-4513-4f2f-b865-d05a8973fadc nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.335031673 +0000 UTC m=+2.889011980 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/492e9833-4513-4f2f-b865-d05a8973fadc-proxy-tls") pod "machine-config-daemon-mfv5x" (UID: "492e9833-4513-4f2f-b865-d05a8973fadc") : failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.835147 master-0 kubenswrapper[29097]: E0312 18:29:22.835057 29097 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.835147 master-0 kubenswrapper[29097]: E0312 18:29:22.835086 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-server-tls podName:9f1f60fa-d79d-4f31-b5bf-2ad333151537 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.335077664 +0000 UTC m=+2.889057771 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-server-tls") pod "metrics-server-5784dff469-l5d64" (UID: "9f1f60fa-d79d-4f31-b5bf-2ad333151537") : failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.835147 master-0 kubenswrapper[29097]: E0312 18:29:22.835114 29097 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.835147 master-0 kubenswrapper[29097]: E0312 18:29:22.835148 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee55b576-6b8d-4217-b5a7-93b023a1e885-mcc-auth-proxy-config podName:ee55b576-6b8d-4217-b5a7-93b023a1e885 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.335139786 +0000 UTC m=+2.889120083 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee55b576-6b8d-4217-b5a7-93b023a1e885-mcc-auth-proxy-config") pod "machine-config-controller-ff46b7bdf-k5hsv" (UID: "ee55b576-6b8d-4217-b5a7-93b023a1e885") : failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.835147 master-0 kubenswrapper[29097]: E0312 18:29:22.835172 29097 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835204 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4687cf53-55d7-42b7-b24d-e57da3989fd6-machine-api-operator-tls podName:4687cf53-55d7-42b7-b24d-e57da3989fd6 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.335197047 +0000 UTC m=+2.889177364 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4687cf53-55d7-42b7-b24d-e57da3989fd6-machine-api-operator-tls") pod "machine-api-operator-84bf6db4f9-gnrzd" (UID: "4687cf53-55d7-42b7-b24d-e57da3989fd6") : failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835223 29097 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835253 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-tls podName:41c1bd85-369e-4341-9e80-8b4b248b5572 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.335246218 +0000 UTC m=+2.889226525 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-qs7tx" (UID: "41c1bd85-369e-4341-9e80-8b4b248b5572") : failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835288 29097 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835316 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78c13011-7a79-445f-807c-4f5e75643549-openshift-state-metrics-kube-rbac-proxy-config podName:78c13011-7a79-445f-807c-4f5e75643549 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.33530881 +0000 UTC m=+2.889289137 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/78c13011-7a79-445f-807c-4f5e75643549-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-74cc79fd76-f59x9" (UID: "78c13011-7a79-445f-807c-4f5e75643549") : failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835342 29097 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835366 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f5e09875-4445-4584-94f0-243148307bb0-service-ca-bundle podName:f5e09875-4445-4584-94f0-243148307bb0 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.335359751 +0000 UTC m=+2.889340068 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/f5e09875-4445-4584-94f0-243148307bb0-service-ca-bundle") pod "insights-operator-8f89dfddd-m6z6d" (UID: "f5e09875-4445-4584-94f0-243148307bb0") : failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835389 29097 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835414 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5fb0152-3efd-4000-bce3-fa90b75316ae-config podName:e5fb0152-3efd-4000-bce3-fa90b75316ae nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.335406622 +0000 UTC m=+2.889386929 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e5fb0152-3efd-4000-bce3-fa90b75316ae-config") pod "cluster-baremetal-operator-5cdb4c5598-2psgb" (UID: "e5fb0152-3efd-4000-bce3-fa90b75316ae") : failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835463 29097 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835536 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-config podName:1c016b1e-d47c-47d4-a15f-4160e7731c82 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.335502565 +0000 UTC m=+2.889482882 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-config") pod "controller-manager-7cd74f9776-2rmc9" (UID: "1c016b1e-d47c-47d4-a15f-4160e7731c82") : failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835562 29097 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835587 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-client-ca podName:be2da107-a419-423f-a657-44d681291f28 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.335579607 +0000 UTC m=+2.889559704 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-client-ca") pod "route-controller-manager-7db5456fb7-csszs" (UID: "be2da107-a419-423f-a657-44d681291f28") : failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835612 29097 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835640 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/604044f4-9b0b-4747-827d-843f3cfa7077-images podName:604044f4-9b0b-4747-827d-843f3cfa7077 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.335632708 +0000 UTC m=+2.889613015 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/604044f4-9b0b-4747-827d-843f3cfa7077-images") pod "machine-config-operator-fdb5c78b5-xbfrg" (UID: "604044f4-9b0b-4747-827d-843f3cfa7077") : failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835655 29097 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835681 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-kube-rbac-proxy-config podName:3d77a98a-0176-4924-81d3-8e9890852b38 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.335674229 +0000 UTC m=+2.889654536 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-68b88f8cb5-sqdhq" (UID: "3d77a98a-0176-4924-81d3-8e9890852b38") : failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835704 29097 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835729 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/604044f4-9b0b-4747-827d-843f3cfa7077-auth-proxy-config podName:604044f4-9b0b-4747-827d-843f3cfa7077 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.33572196 +0000 UTC m=+2.889702267 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/604044f4-9b0b-4747-827d-843f3cfa7077-auth-proxy-config") pod "machine-config-operator-fdb5c78b5-xbfrg" (UID: "604044f4-9b0b-4747-827d-843f3cfa7077") : failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835744 29097 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835767 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f5e09875-4445-4584-94f0-243148307bb0-trusted-ca-bundle podName:f5e09875-4445-4584-94f0-243148307bb0 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.335759841 +0000 UTC m=+2.889740168 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/f5e09875-4445-4584-94f0-243148307bb0-trusted-ca-bundle") pod "insights-operator-8f89dfddd-m6z6d" (UID: "f5e09875-4445-4584-94f0-243148307bb0") : failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835782 29097 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835809 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee4c1949-96b4-4444-9675-9df1d46f681e-cloud-controller-manager-operator-tls podName:ee4c1949-96b4-4444-9675-9df1d46f681e nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.335802562 +0000 UTC m=+2.889782659 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/ee4c1949-96b4-4444-9675-9df1d46f681e-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" (UID: "ee4c1949-96b4-4444-9675-9df1d46f681e") : failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835825 29097 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835853 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78c13011-7a79-445f-807c-4f5e75643549-openshift-state-metrics-tls podName:78c13011-7a79-445f-807c-4f5e75643549 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.335845363 +0000 UTC m=+2.889825680 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/78c13011-7a79-445f-807c-4f5e75643549-openshift-state-metrics-tls") pod "openshift-state-metrics-74cc79fd76-f59x9" (UID: "78c13011-7a79-445f-807c-4f5e75643549") : failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835881 29097 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835910 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-custom-resource-state-configmap podName:3d77a98a-0176-4924-81d3-8e9890852b38 nodeName:}" failed. 
No retries permitted until 2026-03-12 18:29:23.335902195 +0000 UTC m=+2.889882502 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-68b88f8cb5-sqdhq" (UID: "3d77a98a-0176-4924-81d3-8e9890852b38") : failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835928 29097 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835956 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b38e7fcd-8f7a-4d4f-8702-7ef205261054-webhook-cert podName:b38e7fcd-8f7a-4d4f-8702-7ef205261054 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.335949026 +0000 UTC m=+2.889929123 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/b38e7fcd-8f7a-4d4f-8702-7ef205261054-webhook-cert") pod "packageserver-694648486f-f89lc" (UID: "b38e7fcd-8f7a-4d4f-8702-7ef205261054") : failed to sync secret cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.835983 29097 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.836011 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-configmap-kubelet-serving-ca-bundle podName:9f1f60fa-d79d-4f31-b5bf-2ad333151537 nodeName:}" failed. 
No retries permitted until 2026-03-12 18:29:23.336002937 +0000 UTC m=+2.889983244 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-configmap-kubelet-serving-ca-bundle") pod "metrics-server-5784dff469-l5d64" (UID: "9f1f60fa-d79d-4f31-b5bf-2ad333151537") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.836028 29097 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.836057 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1287cbb9-c9f6-48d2-9fda-f4464074e41b-cluster-storage-operator-serving-cert podName:1287cbb9-c9f6-48d2-9fda-f4464074e41b nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.336049019 +0000 UTC m=+2.890029326 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/1287cbb9-c9f6-48d2-9fda-f4464074e41b-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-6fbfc8dc8f-88gzm" (UID: "1287cbb9-c9f6-48d2-9fda-f4464074e41b") : failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.836090 29097 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.836043 master-0 kubenswrapper[29097]: E0312 18:29:22.836119 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3d77a98a-0176-4924-81d3-8e9890852b38-metrics-client-ca podName:3d77a98a-0176-4924-81d3-8e9890852b38 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.33611121 +0000 UTC m=+2.890091307 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/3d77a98a-0176-4924-81d3-8e9890852b38-metrics-client-ca") pod "kube-state-metrics-68b88f8cb5-sqdhq" (UID: "3d77a98a-0176-4924-81d3-8e9890852b38") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.838174 master-0 kubenswrapper[29097]: E0312 18:29:22.836145 29097 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.838174 master-0 kubenswrapper[29097]: E0312 18:29:22.836173 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-client-ca podName:1c016b1e-d47c-47d4-a15f-4160e7731c82 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.336164271 +0000 UTC m=+2.890144578 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-client-ca") pod "controller-manager-7cd74f9776-2rmc9" (UID: "1c016b1e-d47c-47d4-a15f-4160e7731c82") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.838174 master-0 kubenswrapper[29097]: E0312 18:29:22.836191 29097 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.838174 master-0 kubenswrapper[29097]: E0312 18:29:22.836216 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5fb0152-3efd-4000-bce3-fa90b75316ae-cert podName:e5fb0152-3efd-4000-bce3-fa90b75316ae nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.336209393 +0000 UTC m=+2.890189500 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e5fb0152-3efd-4000-bce3-fa90b75316ae-cert") pod "cluster-baremetal-operator-5cdb4c5598-2psgb" (UID: "e5fb0152-3efd-4000-bce3-fa90b75316ae") : failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.838174 master-0 kubenswrapper[29097]: E0312 18:29:22.836241 29097 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.838174 master-0 kubenswrapper[29097]: E0312 18:29:22.836269 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-tls podName:3d77a98a-0176-4924-81d3-8e9890852b38 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.336259164 +0000 UTC m=+2.890239491 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-tls") pod "kube-state-metrics-68b88f8cb5-sqdhq" (UID: "3d77a98a-0176-4924-81d3-8e9890852b38") : failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.838174 master-0 kubenswrapper[29097]: E0312 18:29:22.836305 29097 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.838174 master-0 kubenswrapper[29097]: E0312 18:29:22.836333 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/604044f4-9b0b-4747-827d-843f3cfa7077-proxy-tls podName:604044f4-9b0b-4747-827d-843f3cfa7077 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.336326226 +0000 UTC m=+2.890306323 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/604044f4-9b0b-4747-827d-843f3cfa7077-proxy-tls") pod "machine-config-operator-fdb5c78b5-xbfrg" (UID: "604044f4-9b0b-4747-827d-843f3cfa7077") : failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.838174 master-0 kubenswrapper[29097]: E0312 18:29:22.836360 29097 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.838174 master-0 kubenswrapper[29097]: E0312 18:29:22.836386 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-config podName:030160af-c915-4f00-903a-1c4b5c2b719a nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.336377137 +0000 UTC m=+2.890357454 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-config") pod "machine-approver-754bdc9f9d-4w5z7" (UID: "030160af-c915-4f00-903a-1c4b5c2b719a") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.838174 master-0 kubenswrapper[29097]: E0312 18:29:22.836402 29097 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.838174 master-0 kubenswrapper[29097]: E0312 18:29:22.836426 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b38e7fcd-8f7a-4d4f-8702-7ef205261054-apiservice-cert podName:b38e7fcd-8f7a-4d4f-8702-7ef205261054 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.336418708 +0000 UTC m=+2.890399035 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b38e7fcd-8f7a-4d4f-8702-7ef205261054-apiservice-cert") pod "packageserver-694648486f-f89lc" (UID: "b38e7fcd-8f7a-4d4f-8702-7ef205261054") : failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.838174 master-0 kubenswrapper[29097]: E0312 18:29:22.836443 29097 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.838174 master-0 kubenswrapper[29097]: E0312 18:29:22.836466 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-tls podName:adb0dbbf-458d-46f5-b236-d4904e125418 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.336459899 +0000 UTC m=+2.890440206 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-tls") pod "node-exporter-6v462" (UID: "adb0dbbf-458d-46f5-b236-d4904e125418") : failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:22.838174 master-0 kubenswrapper[29097]: I0312 18:29:22.836738 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0fb78c61-2051-42e2-8668-fa7404ccac43-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-8762n\" (UID: \"0fb78c61-2051-42e2-8668-fa7404ccac43\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-8762n"
Mar 12 18:29:22.838174 master-0 kubenswrapper[29097]: E0312 18:29:22.836791 29097 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.838174 master-0 kubenswrapper[29097]: E0312 18:29:22.836824 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-metrics-server-audit-profiles podName:9f1f60fa-d79d-4f31-b5bf-2ad333151537 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.336814688 +0000 UTC m=+2.890795005 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-metrics-server-audit-profiles") pod "metrics-server-5784dff469-l5d64" (UID: "9f1f60fa-d79d-4f31-b5bf-2ad333151537") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.839987 master-0 kubenswrapper[29097]: E0312 18:29:22.839894 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4687cf53-55d7-42b7-b24d-e57da3989fd6-config podName:4687cf53-55d7-42b7-b24d-e57da3989fd6 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:23.339837404 +0000 UTC m=+2.893817671 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4687cf53-55d7-42b7-b24d-e57da3989fd6-config") pod "machine-api-operator-84bf6db4f9-gnrzd" (UID: "4687cf53-55d7-42b7-b24d-e57da3989fd6") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:22.850562 master-0 kubenswrapper[29097]: I0312 18:29:22.844173 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 12 18:29:22.867640 master-0 kubenswrapper[29097]: I0312 18:29:22.867088 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 12 18:29:22.881544 master-0 kubenswrapper[29097]: I0312 18:29:22.881205 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-8275t"
Mar 12 18:29:22.904134 master-0 kubenswrapper[29097]: I0312 18:29:22.904064 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 12 18:29:22.925081 master-0 kubenswrapper[29097]: I0312 18:29:22.925036 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 12 18:29:22.942786 master-0 kubenswrapper[29097]: I0312 18:29:22.941827 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 12 18:29:22.961041 master-0 kubenswrapper[29097]: I0312 18:29:22.960998 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-g4mx9"
Mar 12 18:29:22.975180 master-0 kubenswrapper[29097]: I0312 18:29:22.975070 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Mar 12 18:29:22.980856 master-0 kubenswrapper[29097]: I0312 18:29:22.980815 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 12 18:29:23.000156 master-0 kubenswrapper[29097]: I0312 18:29:23.000100 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-sjkl7"
Mar 12 18:29:23.021349 master-0 kubenswrapper[29097]: I0312 18:29:23.021290 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 12 18:29:23.040426 master-0 kubenswrapper[29097]: I0312 18:29:23.040372 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 12 18:29:23.060309 master-0 kubenswrapper[29097]: I0312 18:29:23.060253 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 12 18:29:23.080163 master-0 kubenswrapper[29097]: I0312 18:29:23.080114 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 12 18:29:23.100402 master-0 kubenswrapper[29097]: I0312 18:29:23.100347 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-nv88b"
Mar 12 18:29:23.120640 master-0 kubenswrapper[29097]: I0312 18:29:23.120596 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 12 18:29:23.142543 master-0 kubenswrapper[29097]: I0312 18:29:23.140140 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 12 18:29:23.160453 master-0 kubenswrapper[29097]: I0312 18:29:23.160403 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 12 18:29:23.180915 master-0 kubenswrapper[29097]: I0312 18:29:23.180874 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 12 18:29:23.199894 master-0 kubenswrapper[29097]: I0312 18:29:23.199853 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-hm292"
Mar 12 18:29:23.220320 master-0 kubenswrapper[29097]: I0312 18:29:23.220277 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-q4h9m"
Mar 12 18:29:23.241702 master-0 kubenswrapper[29097]: I0312 18:29:23.241592 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 12 18:29:23.241852 master-0 kubenswrapper[29097]: I0312 18:29:23.241729 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 18:29:23.261018 master-0 kubenswrapper[29097]: I0312 18:29:23.260966 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 12 18:29:23.283807 master-0 kubenswrapper[29097]: I0312 18:29:23.283742 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 12 18:29:23.305029 master-0 kubenswrapper[29097]: I0312 18:29:23.304997 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 12 18:29:23.334836 master-0 kubenswrapper[29097]: I0312 18:29:23.334745 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 12 18:29:23.340050 master-0 kubenswrapper[29097]: I0312 18:29:23.339984 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 12 18:29:23.351105 master-0 kubenswrapper[29097]: I0312 18:29:23.351061 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 12 18:29:23.360393 master-0 kubenswrapper[29097]: I0312 18:29:23.360360 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 12 18:29:23.378410 master-0 kubenswrapper[29097]: I0312 18:29:23.378353 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-metrics-server-audit-profiles\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64"
Mar 12 18:29:23.378610 master-0 kubenswrapper[29097]: I0312 18:29:23.378480 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7"
Mar 12 18:29:23.378727 master-0 kubenswrapper[29097]: I0312 18:29:23.378676 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/604044f4-9b0b-4747-827d-843f3cfa7077-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: \"604044f4-9b0b-4747-827d-843f3cfa7077\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg"
Mar 12 18:29:23.378920 master-0 kubenswrapper[29097]: I0312 18:29:23.378894 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-node-bootstrap-token\") pod \"machine-config-server-2jzxq\" (UID: \"4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6\") " pod="openshift-machine-config-operator/machine-config-server-2jzxq"
Mar 12 18:29:23.379000 master-0 kubenswrapper[29097]: I0312 18:29:23.378963 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-vk7lr\" (UID: \"b2c6cd11-b1ed-4fed-a4ce-4eee0af20868\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr"
Mar 12 18:29:23.379142 master-0 kubenswrapper[29097]: I0312 18:29:23.379091 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/604044f4-9b0b-4747-827d-843f3cfa7077-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: \"604044f4-9b0b-4747-827d-843f3cfa7077\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg"
Mar 12 18:29:23.379201 master-0 kubenswrapper[29097]: I0312 18:29:23.379143 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1287cbb9-c9f6-48d2-9fda-f4464074e41b-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-88gzm\" (UID: \"1287cbb9-c9f6-48d2-9fda-f4464074e41b\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-88gzm"
Mar 12 18:29:23.379509 master-0 kubenswrapper[29097]: I0312 18:29:23.379450 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64"
Mar 12 18:29:23.379628 master-0 kubenswrapper[29097]: I0312 18:29:23.379601 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-vk7lr\" (UID: \"b2c6cd11-b1ed-4fed-a4ce-4eee0af20868\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr"
Mar 12 18:29:23.379703 master-0 kubenswrapper[29097]: I0312 18:29:23.379628 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1287cbb9-c9f6-48d2-9fda-f4464074e41b-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-88gzm\" (UID: \"1287cbb9-c9f6-48d2-9fda-f4464074e41b\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-88gzm"
Mar 12 18:29:23.379703 master-0 kubenswrapper[29097]: I0312 18:29:23.379675 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/adb0dbbf-458d-46f5-b236-d4904e125418-metrics-client-ca\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:29:23.379798 master-0 kubenswrapper[29097]: I0312 18:29:23.379733 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-vk7lr\" (UID: \"b2c6cd11-b1ed-4fed-a4ce-4eee0af20868\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr"
Mar 12 18:29:23.379847 master-0 kubenswrapper[29097]: I0312 18:29:23.379812 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aee40f88-83e4-45c8-8331-969943f9f9aa-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-hkfnq\" (UID: \"aee40f88-83e4-45c8-8331-969943f9f9aa\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq"
Mar 12 18:29:23.379910 master-0 kubenswrapper[29097]: I0312 18:29:23.379890 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-client-certs\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64"
Mar 12 18:29:23.379979 master-0 kubenswrapper[29097]: I0312 18:29:23.379954 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-config\") pod \"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs"
Mar 12 18:29:23.380031 master-0 kubenswrapper[29097]: I0312 18:29:23.379996 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-certs\") pod \"machine-config-server-2jzxq\" (UID: \"4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6\") " pod="openshift-machine-config-operator/machine-config-server-2jzxq"
Mar 12 18:29:23.380212 master-0 kubenswrapper[29097]: I0312 18:29:23.380183 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee55b576-6b8d-4217-b5a7-93b023a1e885-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-k5hsv\" (UID: \"ee55b576-6b8d-4217-b5a7-93b023a1e885\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv"
Mar 12 18:29:23.380272 master-0 kubenswrapper[29097]: I0312 18:29:23.380206 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-vk7lr\" (UID: \"b2c6cd11-b1ed-4fed-a4ce-4eee0af20868\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr"
Mar 12 18:29:23.380272 master-0 kubenswrapper[29097]: I0312 18:29:23.380238 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aee40f88-83e4-45c8-8331-969943f9f9aa-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-hkfnq\" (UID: \"aee40f88-83e4-45c8-8331-969943f9f9aa\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq"
Mar 12 18:29:23.380361 master-0 kubenswrapper[29097]: I0312 18:29:23.380268 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 12 18:29:23.380361 master-0 kubenswrapper[29097]: I0312 18:29:23.380250 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:29:23.380444 master-0 kubenswrapper[29097]: I0312 18:29:23.380367 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aee40f88-83e4-45c8-8331-969943f9f9aa-cert\") pod \"cluster-autoscaler-operator-69576476f7-hkfnq\" (UID: \"aee40f88-83e4-45c8-8331-969943f9f9aa\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq"
Mar 12 18:29:23.380491 master-0 kubenswrapper[29097]: I0312 18:29:23.380473 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e5fb0152-3efd-4000-bce3-fa90b75316ae-images\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb"
Mar 12 18:29:23.380688 master-0 kubenswrapper[29097]: I0312 18:29:23.380666 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be2da107-a419-423f-a657-44d681291f28-serving-cert\") pod \"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs"
Mar 12 18:29:23.380758 master-0 kubenswrapper[29097]: I0312 18:29:23.380723 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aee40f88-83e4-45c8-8331-969943f9f9aa-cert\") pod \"cluster-autoscaler-operator-69576476f7-hkfnq\" (UID: \"aee40f88-83e4-45c8-8331-969943f9f9aa\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq"
Mar 12 18:29:23.380803 master-0 kubenswrapper[29097]: I0312 18:29:23.380729 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:29:23.380882 master-0 kubenswrapper[29097]: I0312 18:29:23.380853 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-tls\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462"
Mar 12 18:29:23.380960 master-0 kubenswrapper[29097]: I0312 18:29:23.380927 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5fb0152-3efd-4000-bce3-fa90b75316ae-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb"
Mar 12 18:29:23.381013 master-0 kubenswrapper[29097]: I0312 18:29:23.380973 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/492e9833-4513-4f2f-b865-d05a8973fadc-mcd-auth-proxy-config\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x"
Mar 12 18:29:23.381054 master-0 kubenswrapper[29097]: I0312 18:29:23.381038 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/030160af-c915-4f00-903a-1c4b5c2b719a-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7"
Mar 12 18:29:23.381116 master-0 kubenswrapper[29097]: I0312 18:29:23.381091 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c016b1e-d47c-47d4-a15f-4160e7731c82-serving-cert\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"
Mar 12 18:29:23.381188 master-0 kubenswrapper[29097]: I0312 18:29:23.381161 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-client-ca\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"
Mar 12 18:29:23.381252 master-0 kubenswrapper[29097]: I0312 18:29:23.381231 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e5fb0152-3efd-4000-bce3-fa90b75316ae-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb"
Mar 12 18:29:23.381304 master-0 kubenswrapper[29097]: I0312 18:29:23.381273 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5e09875-4445-4584-94f0-243148307bb0-service-ca-bundle\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d"
Mar 12 18:29:23.381344 master-0 kubenswrapper[29097]: I0312 18:29:23.381314 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5fb0152-3efd-4000-bce3-fa90b75316ae-config\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb"
Mar 12 18:29:23.381381 master-0 kubenswrapper[29097]: I0312 18:29:23.381366 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4687cf53-55d7-42b7-b24d-e57da3989fd6-images\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd"
Mar 12 18:29:23.381485 master-0 kubenswrapper[29097]: I0312 18:29:23.381460 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5e09875-4445-4584-94f0-243148307bb0-serving-cert\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d"
Mar 12 18:29:23.381566 master-0 kubenswrapper[29097]: I0312 18:29:23.381479 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5e09875-4445-4584-94f0-243148307bb0-service-ca-bundle\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d"
Mar 12 18:29:23.381647 master-0 kubenswrapper[29097]: I0312 18:29:23.381623 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/78c13011-7a79-445f-807c-4f5e75643549-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9"
Mar 12 18:29:23.381706 master-0 kubenswrapper[29097]: I0312 18:29:23.381658 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4687cf53-55d7-42b7-b24d-e57da3989fd6-images\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd"
Mar 12 18:29:23.381706 master-0 kubenswrapper[29097]: I0312 18:29:23.381672 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-server-tls\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64"
Mar 12 18:29:23.381776 master-0 kubenswrapper[29097]: I0312 18:29:23.381703 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/492e9833-4513-4f2f-b865-d05a8973fadc-mcd-auth-proxy-config\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x"
Mar 12 18:29:23.381776 master-0 kubenswrapper[29097]: I0312 18:29:23.381712 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-client-ca-bundle\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64"
Mar 12 18:29:23.381776 master-0 kubenswrapper[29097]: I0312 18:29:23.381750 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5e09875-4445-4584-94f0-243148307bb0-serving-cert\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d"
Mar 12 18:29:23.381972 master-0 kubenswrapper[29097]: I0312 18:29:23.381783 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-config\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"
Mar 12 18:29:23.381972 master-0 kubenswrapper[29097]: I0312 18:29:23.381823 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/492e9833-4513-4f2f-b865-d05a8973fadc-proxy-tls\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x"
Mar 12 18:29:23.381972 master-0 kubenswrapper[29097]: I0312 18:29:23.381946 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-proxy-ca-bundles\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"
Mar 12 18:29:23.382081 master-0 kubenswrapper[29097]: I0312 18:29:23.381991 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/41c1bd85-369e-4341-9e80-8b4b248b5572-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx"
Mar 12 18:29:23.382081 master-0 kubenswrapper[29097]: I0312 18:29:23.382026 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327-webhook-certs\") pod \"multus-admission-controller-7769569c45-dq2gs\" (UID: \"25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327\") " pod="openshift-multus/multus-admission-controller-7769569c45-dq2gs"
Mar 12 18:29:23.382081 master-0 kubenswrapper[29097]: I0312 18:29:23.382052 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee55b576-6b8d-4217-b5a7-93b023a1e885-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-k5hsv\" (UID: \"ee55b576-6b8d-4217-b5a7-93b023a1e885\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv"
Mar 12 18:29:23.382189 master-0 kubenswrapper[29097]: I0312 18:29:23.382089 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-client-ca\") pod \"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs"
Mar 12 18:29:23.382189 master-0 kubenswrapper[29097]: I0312 18:29:23.382115 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4687cf53-55d7-42b7-b24d-e57da3989fd6-config\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd"
Mar 12 18:29:23.382189 master-0 kubenswrapper[29097]: I0312 18:29:23.382153 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4687cf53-55d7-42b7-b24d-e57da3989fd6-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd"
Mar 12 18:29:23.382302 master-0 kubenswrapper[29097]: I0312 18:29:23.382236 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName:
\"kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" Mar 12 18:29:23.382302 master-0 kubenswrapper[29097]: I0312 18:29:23.382294 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/604044f4-9b0b-4747-827d-843f3cfa7077-images\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: \"604044f4-9b0b-4747-827d-843f3cfa7077\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" Mar 12 18:29:23.382379 master-0 kubenswrapper[29097]: I0312 18:29:23.382322 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327-webhook-certs\") pod \"multus-admission-controller-7769569c45-dq2gs\" (UID: \"25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327\") " pod="openshift-multus/multus-admission-controller-7769569c45-dq2gs" Mar 12 18:29:23.382426 master-0 kubenswrapper[29097]: I0312 18:29:23.382388 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/604044f4-9b0b-4747-827d-843f3cfa7077-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: \"604044f4-9b0b-4747-827d-843f3cfa7077\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" Mar 12 18:29:23.382475 master-0 kubenswrapper[29097]: I0312 18:29:23.382436 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5e09875-4445-4584-94f0-243148307bb0-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " 
pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:29:23.382541 master-0 kubenswrapper[29097]: I0312 18:29:23.382499 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/78c13011-7a79-445f-807c-4f5e75643549-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9" Mar 12 18:29:23.382541 master-0 kubenswrapper[29097]: I0312 18:29:23.382536 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/604044f4-9b0b-4747-827d-843f3cfa7077-images\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: \"604044f4-9b0b-4747-827d-843f3cfa7077\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" Mar 12 18:29:23.382622 master-0 kubenswrapper[29097]: I0312 18:29:23.382567 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee4c1949-96b4-4444-9675-9df1d46f681e-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:29:23.382670 master-0 kubenswrapper[29097]: I0312 18:29:23.382597 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 18:29:23.382670 master-0 kubenswrapper[29097]: I0312 
18:29:23.382660 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" Mar 12 18:29:23.382744 master-0 kubenswrapper[29097]: I0312 18:29:23.382709 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 18:29:23.382744 master-0 kubenswrapper[29097]: I0312 18:29:23.382738 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5e09875-4445-4584-94f0-243148307bb0-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:29:23.382822 master-0 kubenswrapper[29097]: I0312 18:29:23.382742 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/78c13011-7a79-445f-807c-4f5e75643549-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9" Mar 12 18:29:23.382822 master-0 kubenswrapper[29097]: I0312 18:29:23.382784 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/ee4c1949-96b4-4444-9675-9df1d46f681e-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:29:23.382892 master-0 kubenswrapper[29097]: I0312 18:29:23.382859 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b38e7fcd-8f7a-4d4f-8702-7ef205261054-webhook-cert\") pod \"packageserver-694648486f-f89lc\" (UID: \"b38e7fcd-8f7a-4d4f-8702-7ef205261054\") " pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:29:23.382892 master-0 kubenswrapper[29097]: I0312 18:29:23.382878 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee4c1949-96b4-4444-9675-9df1d46f681e-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl" Mar 12 18:29:23.382968 master-0 kubenswrapper[29097]: I0312 18:29:23.382922 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-config\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7" Mar 12 18:29:23.382968 master-0 kubenswrapper[29097]: I0312 18:29:23.382941 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b38e7fcd-8f7a-4d4f-8702-7ef205261054-apiservice-cert\") pod \"packageserver-694648486f-f89lc\" (UID: 
\"b38e7fcd-8f7a-4d4f-8702-7ef205261054\") " pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:29:23.382968 master-0 kubenswrapper[29097]: I0312 18:29:23.382964 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d77a98a-0176-4924-81d3-8e9890852b38-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" Mar 12 18:29:23.383076 master-0 kubenswrapper[29097]: I0312 18:29:23.383041 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/604044f4-9b0b-4747-827d-843f3cfa7077-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: \"604044f4-9b0b-4747-827d-843f3cfa7077\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg" Mar 12 18:29:23.383155 master-0 kubenswrapper[29097]: I0312 18:29:23.383129 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b38e7fcd-8f7a-4d4f-8702-7ef205261054-webhook-cert\") pod \"packageserver-694648486f-f89lc\" (UID: \"b38e7fcd-8f7a-4d4f-8702-7ef205261054\") " pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:29:23.383336 master-0 kubenswrapper[29097]: I0312 18:29:23.383302 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b38e7fcd-8f7a-4d4f-8702-7ef205261054-apiservice-cert\") pod \"packageserver-694648486f-f89lc\" (UID: \"b38e7fcd-8f7a-4d4f-8702-7ef205261054\") " pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:29:23.383399 master-0 kubenswrapper[29097]: I0312 18:29:23.383316 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee55b576-6b8d-4217-b5a7-93b023a1e885-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-k5hsv\" (UID: \"ee55b576-6b8d-4217-b5a7-93b023a1e885\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv" Mar 12 18:29:23.401289 master-0 kubenswrapper[29097]: I0312 18:29:23.401232 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-cc9lz" Mar 12 18:29:23.421204 master-0 kubenswrapper[29097]: I0312 18:29:23.421147 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 12 18:29:23.422774 master-0 kubenswrapper[29097]: I0312 18:29:23.422725 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4687cf53-55d7-42b7-b24d-e57da3989fd6-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" Mar 12 18:29:23.440362 master-0 kubenswrapper[29097]: I0312 18:29:23.440322 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 12 18:29:23.443092 master-0 kubenswrapper[29097]: I0312 18:29:23.443067 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4687cf53-55d7-42b7-b24d-e57da3989fd6-config\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" Mar 12 18:29:23.460648 master-0 kubenswrapper[29097]: I0312 18:29:23.460599 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 12 18:29:23.462460 master-0 kubenswrapper[29097]: 
I0312 18:29:23.462415 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/41c1bd85-369e-4341-9e80-8b4b248b5572-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 18:29:23.463596 master-0 kubenswrapper[29097]: I0312 18:29:23.463567 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/78c13011-7a79-445f-807c-4f5e75643549-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9" Mar 12 18:29:23.463667 master-0 kubenswrapper[29097]: I0312 18:29:23.463603 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d77a98a-0176-4924-81d3-8e9890852b38-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" Mar 12 18:29:23.471117 master-0 kubenswrapper[29097]: I0312 18:29:23.471081 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/adb0dbbf-458d-46f5-b236-d4904e125418-metrics-client-ca\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462" Mar 12 18:29:23.481616 master-0 kubenswrapper[29097]: I0312 18:29:23.481565 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 18:29:23.491446 master-0 kubenswrapper[29097]: I0312 18:29:23.491394 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/be2da107-a419-423f-a657-44d681291f28-serving-cert\") pod \"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" Mar 12 18:29:23.501014 master-0 kubenswrapper[29097]: I0312 18:29:23.500839 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 18:29:23.503170 master-0 kubenswrapper[29097]: I0312 18:29:23.503109 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-client-ca\") pod \"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" Mar 12 18:29:23.520895 master-0 kubenswrapper[29097]: I0312 18:29:23.520793 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 18:29:23.540846 master-0 kubenswrapper[29097]: I0312 18:29:23.540782 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-6gh5d" Mar 12 18:29:23.560985 master-0 kubenswrapper[29097]: I0312 18:29:23.560902 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 18:29:23.580710 master-0 kubenswrapper[29097]: I0312 18:29:23.580637 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-ldpgf" Mar 12 18:29:23.600461 master-0 kubenswrapper[29097]: I0312 18:29:23.600387 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 18:29:23.610702 master-0 
kubenswrapper[29097]: I0312 18:29:23.610632 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-config\") pod \"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" Mar 12 18:29:23.621141 master-0 kubenswrapper[29097]: I0312 18:29:23.621082 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 12 18:29:23.626473 master-0 kubenswrapper[29097]: I0312 18:29:23.626428 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:29:23.626622 master-0 kubenswrapper[29097]: I0312 18:29:23.626601 29097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 18:29:23.631214 master-0 kubenswrapper[29097]: I0312 18:29:23.631159 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e5fb0152-3efd-4000-bce3-fa90b75316ae-images\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:29:23.641197 master-0 kubenswrapper[29097]: I0312 18:29:23.641149 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 18:29:23.641713 master-0 kubenswrapper[29097]: I0312 18:29:23.641667 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-client-ca\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" Mar 12 
18:29:23.660720 master-0 kubenswrapper[29097]: I0312 18:29:23.660667 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-ssqhn" Mar 12 18:29:23.681190 master-0 kubenswrapper[29097]: I0312 18:29:23.681131 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-vtdm7" Mar 12 18:29:23.700692 master-0 kubenswrapper[29097]: I0312 18:29:23.700639 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 12 18:29:23.701915 master-0 kubenswrapper[29097]: I0312 18:29:23.701876 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e5fb0152-3efd-4000-bce3-fa90b75316ae-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:29:23.719240 master-0 kubenswrapper[29097]: I0312 18:29:23.719184 29097 request.go:700] Waited for 2.006589565s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-operator-tls&limit=500&resourceVersion=0 Mar 12 18:29:23.719629 master-0 kubenswrapper[29097]: I0312 18:29:23.719563 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:29:23.721979 master-0 kubenswrapper[29097]: I0312 18:29:23.721923 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 12 18:29:23.726643 master-0 kubenswrapper[29097]: I0312 18:29:23.726594 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:29:23.732348 master-0 kubenswrapper[29097]: I0312 18:29:23.732299 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5fb0152-3efd-4000-bce3-fa90b75316ae-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:29:23.740884 master-0 kubenswrapper[29097]: I0312 18:29:23.740819 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 12 18:29:23.742093 master-0 kubenswrapper[29097]: I0312 18:29:23.742060 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5fb0152-3efd-4000-bce3-fa90b75316ae-config\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:29:23.760992 master-0 kubenswrapper[29097]: I0312 18:29:23.760891 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-ffmfp" Mar 12 18:29:23.780209 master-0 kubenswrapper[29097]: I0312 18:29:23.780168 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-w4bj7" Mar 12 18:29:23.800240 master-0 kubenswrapper[29097]: I0312 18:29:23.800197 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 12 18:29:23.803928 master-0 kubenswrapper[29097]: I0312 18:29:23.803779 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/78c13011-7a79-445f-807c-4f5e75643549-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9" Mar 12 18:29:23.820611 master-0 kubenswrapper[29097]: I0312 18:29:23.820392 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 12 18:29:23.822566 master-0 kubenswrapper[29097]: I0312 18:29:23.822532 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/78c13011-7a79-445f-807c-4f5e75643549-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9" Mar 12 18:29:23.840745 master-0 kubenswrapper[29097]: I0312 18:29:23.840708 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 12 18:29:23.842949 master-0 kubenswrapper[29097]: I0312 18:29:23.842891 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/492e9833-4513-4f2f-b865-d05a8973fadc-proxy-tls\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x" Mar 12 18:29:23.860578 master-0 kubenswrapper[29097]: I0312 18:29:23.860430 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 12 18:29:23.861408 master-0 kubenswrapper[29097]: I0312 18:29:23.861369 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: 
\"kubernetes.io/secret/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-tls\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462" Mar 12 18:29:23.880776 master-0 kubenswrapper[29097]: I0312 18:29:23.880706 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 12 18:29:23.881081 master-0 kubenswrapper[29097]: I0312 18:29:23.881045 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/adb0dbbf-458d-46f5-b236-d4904e125418-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462" Mar 12 18:29:23.900158 master-0 kubenswrapper[29097]: I0312 18:29:23.900104 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 12 18:29:23.903444 master-0 kubenswrapper[29097]: I0312 18:29:23.903403 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" Mar 12 18:29:23.920279 master-0 kubenswrapper[29097]: I0312 18:29:23.920230 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 18:29:23.941926 master-0 kubenswrapper[29097]: I0312 18:29:23.941876 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 12 
18:29:23.949095 master-0 kubenswrapper[29097]: I0312 18:29:23.949055 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-metrics-server-audit-profiles\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64"
Mar 12 18:29:23.960880 master-0 kubenswrapper[29097]: I0312 18:29:23.960841 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-6d4tt"
Mar 12 18:29:23.981127 master-0 kubenswrapper[29097]: I0312 18:29:23.981068 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 12 18:29:23.981505 master-0 kubenswrapper[29097]: I0312 18:29:23.981473 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c016b1e-d47c-47d4-a15f-4160e7731c82-serving-cert\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"
Mar 12 18:29:23.986609 master-0 kubenswrapper[29097]: I0312 18:29:23.986558 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 18:29:24.000539 master-0 kubenswrapper[29097]: I0312 18:29:24.000480 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-72pgx"
Mar 12 18:29:24.021203 master-0 kubenswrapper[29097]: I0312 18:29:24.021083 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 12 18:29:24.023410 master-0 kubenswrapper[29097]: I0312 18:29:24.023373 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee4c1949-96b4-4444-9675-9df1d46f681e-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl"
Mar 12 18:29:24.040004 master-0 kubenswrapper[29097]: I0312 18:29:24.039959 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 12 18:29:24.041212 master-0 kubenswrapper[29097]: I0312 18:29:24.041175 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee55b576-6b8d-4217-b5a7-93b023a1e885-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-k5hsv\" (UID: \"ee55b576-6b8d-4217-b5a7-93b023a1e885\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv"
Mar 12 18:29:24.066012 master-0 kubenswrapper[29097]: I0312 18:29:24.065950 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 12 18:29:24.072571 master-0 kubenswrapper[29097]: I0312 18:29:24.072504 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-proxy-ca-bundles\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"
Mar 12 18:29:24.080542 master-0 kubenswrapper[29097]: I0312 18:29:24.080487 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 12 18:29:24.083655 master-0 kubenswrapper[29097]: I0312 18:29:24.083612 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ee4c1949-96b4-4444-9675-9df1d46f681e-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl"
Mar 12 18:29:24.100652 master-0 kubenswrapper[29097]: I0312 18:29:24.100610 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-fbn8j"
Mar 12 18:29:24.120104 master-0 kubenswrapper[29097]: I0312 18:29:24.120027 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 12 18:29:24.140861 master-0 kubenswrapper[29097]: I0312 18:29:24.140803 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Mar 12 18:29:24.143230 master-0 kubenswrapper[29097]: I0312 18:29:24.143187 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee4c1949-96b4-4444-9675-9df1d46f681e-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl"
Mar 12 18:29:24.163210 master-0 kubenswrapper[29097]: I0312 18:29:24.163145 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 12 18:29:24.180779 master-0 kubenswrapper[29097]: I0312 18:29:24.180724 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-9f7ld"
Mar 12 18:29:24.200832 master-0 kubenswrapper[29097]: I0312 18:29:24.200755 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 12 18:29:24.210693 master-0 kubenswrapper[29097]: I0312 18:29:24.210654 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-certs\") pod \"machine-config-server-2jzxq\" (UID: \"4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6\") " pod="openshift-machine-config-operator/machine-config-server-2jzxq"
Mar 12 18:29:24.220365 master-0 kubenswrapper[29097]: I0312 18:29:24.220334 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 12 18:29:24.222617 master-0 kubenswrapper[29097]: I0312 18:29:24.222571 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/030160af-c915-4f00-903a-1c4b5c2b719a-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7"
Mar 12 18:29:24.241132 master-0 kubenswrapper[29097]: I0312 18:29:24.241070 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7hcn2cdka018u"
Mar 12 18:29:24.242790 master-0 kubenswrapper[29097]: I0312 18:29:24.242754 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-client-ca-bundle\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64"
Mar 12 18:29:24.261020 master-0 kubenswrapper[29097]: I0312 18:29:24.260986 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 12 18:29:24.280444 master-0 kubenswrapper[29097]: I0312 18:29:24.280300 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 12 18:29:24.282457 master-0 kubenswrapper[29097]: I0312 18:29:24.282422 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-config\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"
Mar 12 18:29:24.301222 master-0 kubenswrapper[29097]: I0312 18:29:24.301166 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-mbtwq"
Mar 12 18:29:24.321103 master-0 kubenswrapper[29097]: I0312 18:29:24.321050 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Mar 12 18:29:24.322142 master-0 kubenswrapper[29097]: I0312 18:29:24.322096 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-server-tls\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64"
Mar 12 18:29:24.340945 master-0 kubenswrapper[29097]: I0312 18:29:24.340875 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 12 18:29:24.350782 master-0 kubenswrapper[29097]: I0312 18:29:24.350730 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-client-certs\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64"
Mar 12 18:29:24.360975 master-0 kubenswrapper[29097]: I0312 18:29:24.360943 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Mar 12 18:29:24.370313 master-0 kubenswrapper[29097]: I0312 18:29:24.370265 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64"
Mar 12 18:29:24.379757 master-0 kubenswrapper[29097]: E0312 18:29:24.379720 29097 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:24.379877 master-0 kubenswrapper[29097]: E0312 18:29:24.379806 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-node-bootstrap-token podName:4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:25.379788304 +0000 UTC m=+4.933768401 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-node-bootstrap-token") pod "machine-config-server-2jzxq" (UID: "4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6") : failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:24.379947 master-0 kubenswrapper[29097]: E0312 18:29:24.379921 29097 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:24.380000 master-0 kubenswrapper[29097]: E0312 18:29:24.379982 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-auth-proxy-config podName:030160af-c915-4f00-903a-1c4b5c2b719a nodeName:}" failed. No retries permitted until 2026-03-12 18:29:25.379966048 +0000 UTC m=+4.933946155 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-auth-proxy-config") pod "machine-approver-754bdc9f9d-4w5z7" (UID: "030160af-c915-4f00-903a-1c4b5c2b719a") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:24.381236 master-0 kubenswrapper[29097]: E0312 18:29:24.381186 29097 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:24.381337 master-0 kubenswrapper[29097]: E0312 18:29:24.381312 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-tls podName:3d77a98a-0176-4924-81d3-8e9890852b38 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:25.381280551 +0000 UTC m=+4.935260688 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-tls") pod "kube-state-metrics-68b88f8cb5-sqdhq" (UID: "3d77a98a-0176-4924-81d3-8e9890852b38") : failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:24.382501 master-0 kubenswrapper[29097]: E0312 18:29:24.382473 29097 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:24.382592 master-0 kubenswrapper[29097]: E0312 18:29:24.382556 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-kube-rbac-proxy-config podName:3d77a98a-0176-4924-81d3-8e9890852b38 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:25.382541473 +0000 UTC m=+4.936521590 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-68b88f8cb5-sqdhq" (UID: "3d77a98a-0176-4924-81d3-8e9890852b38") : failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:24.382703 master-0 kubenswrapper[29097]: E0312 18:29:24.382682 29097 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:24.382752 master-0 kubenswrapper[29097]: E0312 18:29:24.382726 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-tls podName:41c1bd85-369e-4341-9e80-8b4b248b5572 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:25.382714337 +0000 UTC m=+4.936694444 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-qs7tx" (UID: "41c1bd85-369e-4341-9e80-8b4b248b5572") : failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:24.383897 master-0 kubenswrapper[29097]: E0312 18:29:24.383869 29097 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:24.383985 master-0 kubenswrapper[29097]: E0312 18:29:24.383918 29097 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:24.383985 master-0 kubenswrapper[29097]: E0312 18:29:24.383932 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-kube-rbac-proxy-config podName:41c1bd85-369e-4341-9e80-8b4b248b5572 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:25.383915647 +0000 UTC m=+4.937895754 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-5ff8674d55-qs7tx" (UID: "41c1bd85-369e-4341-9e80-8b4b248b5572") : failed to sync secret cache: timed out waiting for the condition
Mar 12 18:29:24.383985 master-0 kubenswrapper[29097]: E0312 18:29:24.383968 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-config podName:030160af-c915-4f00-903a-1c4b5c2b719a nodeName:}" failed. No retries permitted until 2026-03-12 18:29:25.383954608 +0000 UTC m=+4.937934715 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-config") pod "machine-approver-754bdc9f9d-4w5z7" (UID: "030160af-c915-4f00-903a-1c4b5c2b719a") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 18:29:24.398101 master-0 kubenswrapper[29097]: I0312 18:29:24.398024 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggsdx\" (UniqueName: \"kubernetes.io/projected/d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27-kube-api-access-ggsdx\") pod \"olm-operator-d64cfc9db-npt4r\" (UID: \"d7a1b1b8-bf2b-4332-bb2e-e3b378b44b27\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r"
Mar 12 18:29:24.400363 master-0 kubenswrapper[29097]: I0312 18:29:24.400312 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 12 18:29:24.422999 master-0 kubenswrapper[29097]: I0312 18:29:24.422946 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Mar 12 18:29:24.440562 master-0 kubenswrapper[29097]: I0312 18:29:24.440491 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-djr46"
Mar 12 18:29:24.476780 master-0 kubenswrapper[29097]: I0312 18:29:24.476720 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjmcv\" (UniqueName: \"kubernetes.io/projected/d94dc349-c5cb-4f12-8e48-867030af4981-kube-api-access-zjmcv\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l"
Mar 12 18:29:24.480291 master-0 kubenswrapper[29097]: I0312 18:29:24.480260 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Mar 12 18:29:24.524825 master-0 kubenswrapper[29097]: I0312 18:29:24.524762 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krrkl\" (UniqueName: \"kubernetes.io/projected/47850839-bb4b-41e9-ac31-f1cabbb4926d-kube-api-access-krrkl\") pod \"catalog-operator-7d9c49f57b-pslh7\" (UID: \"47850839-bb4b-41e9-ac31-f1cabbb4926d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7"
Mar 12 18:29:24.531704 master-0 kubenswrapper[29097]: I0312 18:29:24.531573 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d94dc349-c5cb-4f12-8e48-867030af4981-bound-sa-token\") pod \"ingress-operator-677db989d6-4527l\" (UID: \"d94dc349-c5cb-4f12-8e48-867030af4981\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4527l"
Mar 12 18:29:24.540482 master-0 kubenswrapper[29097]: I0312 18:29:24.540425 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 12 18:29:24.595412 master-0 kubenswrapper[29097]: I0312 18:29:24.595356 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkt7d\" (UniqueName: \"kubernetes.io/projected/055f5c67-f512-4510-99c5-e194944b0599-kube-api-access-tkt7d\") pod \"service-ca-operator-69b6fc6b88-9xlgl\" (UID: \"055f5c67-f512-4510-99c5-e194944b0599\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-9xlgl"
Mar 12 18:29:24.595868 master-0 kubenswrapper[29097]: I0312 18:29:24.595831 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vpbp\" (UniqueName: \"kubernetes.io/projected/a1e2340b-ebca-40de-b1e0-8133999cd860-kube-api-access-6vpbp\") pod \"kube-storage-version-migrator-operator-7f65c457f5-2xr98\" (UID: \"a1e2340b-ebca-40de-b1e0-8133999cd860\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-2xr98"
Mar 12 18:29:24.600497 master-0 kubenswrapper[29097]: I0312 18:29:24.600463 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-pp56m"
Mar 12 18:29:24.620719 master-0 kubenswrapper[29097]: I0312 18:29:24.620671 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Mar 12 18:29:24.641251 master-0 kubenswrapper[29097]: I0312 18:29:24.641203 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 12 18:29:24.661140 master-0 kubenswrapper[29097]: I0312 18:29:24.661104 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 12 18:29:24.681651 master-0 kubenswrapper[29097]: I0312 18:29:24.681611 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-8nmsp"
Mar 12 18:29:24.700703 master-0 kubenswrapper[29097]: I0312 18:29:24.700646 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 12 18:29:24.719922 master-0 kubenswrapper[29097]: I0312 18:29:24.719852 29097 request.go:700] Waited for 3.000100181s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/marketplace-operator/token
Mar 12 18:29:24.733649 master-0 kubenswrapper[29097]: I0312 18:29:24.733593 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdlxn\" (UniqueName: \"kubernetes.io/projected/4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64-kube-api-access-fdlxn\") pod \"marketplace-operator-64bf9778cb-clkx5\" (UID: \"4a4fb3f0-ccb1-45d4-b2f5-ff96ff93fb64\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5"
Mar 12 18:29:24.752553 master-0 kubenswrapper[29097]: I0312 18:29:24.752487 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbnx8\" (UniqueName: \"kubernetes.io/projected/51eb717b-d11f-4bc3-8df6-deb51d5889f3-kube-api-access-gbnx8\") pod \"package-server-manager-854648ff6d-kwv7s\" (UID: \"51eb717b-d11f-4bc3-8df6-deb51d5889f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s"
Mar 12 18:29:24.761876 master-0 kubenswrapper[29097]: I0312 18:29:24.761832 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 12 18:29:24.792027 master-0 kubenswrapper[29097]: I0312 18:29:24.791973 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tmqs\" (UniqueName: \"kubernetes.io/projected/266b9f4f-3fb4-474d-84df-0a6c687c7e9a-kube-api-access-6tmqs\") pod \"dns-default-6h5tt\" (UID: \"266b9f4f-3fb4-474d-84df-0a6c687c7e9a\") " pod="openshift-dns/dns-default-6h5tt"
Mar 12 18:29:24.813306 master-0 kubenswrapper[29097]: I0312 18:29:24.813254 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l22gw\" (UniqueName: \"kubernetes.io/projected/d92dddc8-a810-43f5-8beb-32d1c8ad8381-kube-api-access-l22gw\") pod \"iptables-alerter-4k8wm\" (UID: \"d92dddc8-a810-43f5-8beb-32d1c8ad8381\") " pod="openshift-network-operator/iptables-alerter-4k8wm"
Mar 12 18:29:24.833654 master-0 kubenswrapper[29097]: I0312 18:29:24.833598 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfp84\" (UniqueName: \"kubernetes.io/projected/be2da107-a419-423f-a657-44d681291f28-kube-api-access-jfp84\") pod \"route-controller-manager-7db5456fb7-csszs\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs"
Mar 12 18:29:24.851684 master-0 kubenswrapper[29097]: I0312 18:29:24.851621 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqlfx\" (UniqueName: \"kubernetes.io/projected/9f1f60fa-d79d-4f31-b5bf-2ad333151537-kube-api-access-hqlfx\") pod \"metrics-server-5784dff469-l5d64\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " pod="openshift-monitoring/metrics-server-5784dff469-l5d64"
Mar 12 18:29:24.880141 master-0 kubenswrapper[29097]: I0312 18:29:24.880074 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pn9h\" (UniqueName: \"kubernetes.io/projected/38a4bf73-479e-4bbf-9aa3-639fc288c8bc-kube-api-access-2pn9h\") pod \"multus-656l8\" (UID: \"38a4bf73-479e-4bbf-9aa3-639fc288c8bc\") " pod="openshift-multus/multus-656l8"
Mar 12 18:29:24.897924 master-0 kubenswrapper[29097]: I0312 18:29:24.897875 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4048e453-a983-4708-89b6-a81af0067e29-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-m885k\" (UID: \"4048e453-a983-4708-89b6-a81af0067e29\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-m885k"
Mar 12 18:29:24.912140 master-0 kubenswrapper[29097]: I0312 18:29:24.912073 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e720e1d0-5a6d-4b76-8b25-5963e24950f5-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-w7xvp\" (UID: \"e720e1d0-5a6d-4b76-8b25-5963e24950f5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-w7xvp"
Mar 12 18:29:24.936380 master-0 kubenswrapper[29097]: I0312 18:29:24.936328 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmsnk\" (UniqueName: \"kubernetes.io/projected/34cbf061-4c76-476e-bed9-0a133c744862-kube-api-access-gmsnk\") pod \"control-plane-machine-set-operator-6686554ddc-zd9gm\" (UID: \"34cbf061-4c76-476e-bed9-0a133c744862\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zd9gm"
Mar 12 18:29:24.956979 master-0 kubenswrapper[29097]: I0312 18:29:24.956921 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md9dt\" (UniqueName: \"kubernetes.io/projected/b2c6cd11-b1ed-4fed-a4ce-4eee0af20868-kube-api-access-md9dt\") pod \"cloud-credential-operator-55d85b7b47-vk7lr\" (UID: \"b2c6cd11-b1ed-4fed-a4ce-4eee0af20868\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-vk7lr"
Mar 12 18:29:24.999440 master-0 kubenswrapper[29097]: I0312 18:29:24.999379 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab926874-9722-4e65-9084-27b2f9915450-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-ddsbn\" (UID: \"ab926874-9722-4e65-9084-27b2f9915450\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-ddsbn"
Mar 12 18:29:25.002562 master-0 kubenswrapper[29097]: I0312 18:29:25.002485 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-6h5tt"
Mar 12 18:29:25.003375 master-0 kubenswrapper[29097]: I0312 18:29:25.003338 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-6h5tt"
Mar 12 18:29:25.006720 master-0 kubenswrapper[29097]: I0312 18:29:25.006679 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k59mb\" (UniqueName: \"kubernetes.io/projected/b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652-kube-api-access-k59mb\") pod \"operator-controller-controller-manager-6598bfb6c4-9nzsn\" (UID: \"b1cf6cb1-8b20-4bc2-a474-52d6d7cc3652\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn"
Mar 12 18:29:25.016885 master-0 kubenswrapper[29097]: I0312 18:29:25.016837 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqzmm\" (UniqueName: \"kubernetes.io/projected/604044f4-9b0b-4747-827d-843f3cfa7077-kube-api-access-fqzmm\") pod \"machine-config-operator-fdb5c78b5-xbfrg\" (UID: \"604044f4-9b0b-4747-827d-843f3cfa7077\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-xbfrg"
Mar 12 18:29:25.033875 master-0 kubenswrapper[29097]: I0312 18:29:25.033821 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4wsx\" (UniqueName: \"kubernetes.io/projected/ee4c1949-96b4-4444-9675-9df1d46f681e-kube-api-access-x4wsx\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-z9srl\" (UID: \"ee4c1949-96b4-4444-9675-9df1d46f681e\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-z9srl"
Mar 12 18:29:25.053650 master-0 kubenswrapper[29097]: I0312 18:29:25.053558 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th8tc\" (UniqueName: \"kubernetes.io/projected/8ad05507-e242-4ff8-ae80-c16ff9ee68e2-kube-api-access-th8tc\") pod \"dns-operator-589895fbb7-jqj5k\" (UID: \"8ad05507-e242-4ff8-ae80-c16ff9ee68e2\") " pod="openshift-dns-operator/dns-operator-589895fbb7-jqj5k"
Mar 12 18:29:25.072774 master-0 kubenswrapper[29097]: I0312 18:29:25.072724 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kn2k\" (UniqueName: \"kubernetes.io/projected/492e9833-4513-4f2f-b865-d05a8973fadc-kube-api-access-5kn2k\") pod \"machine-config-daemon-mfv5x\" (UID: \"492e9833-4513-4f2f-b865-d05a8973fadc\") " pod="openshift-machine-config-operator/machine-config-daemon-mfv5x"
Mar 12 18:29:25.095332 master-0 kubenswrapper[29097]: I0312 18:29:25.095282 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vct98\" (UniqueName: \"kubernetes.io/projected/e697746f-fb9e-4d10-ab61-33c68e62cc0d-kube-api-access-vct98\") pod \"etcd-operator-5884b9cd56-bfq7b\" (UID: \"e697746f-fb9e-4d10-ab61-33c68e62cc0d\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-bfq7b"
Mar 12 18:29:25.112060 master-0 kubenswrapper[29097]: I0312 18:29:25.112016 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4glbr\" (UniqueName: \"kubernetes.io/projected/518ffff8-8119-41be-8b76-ce49d5751254-kube-api-access-4glbr\") pod \"router-default-79f8cd6fdd-79bhf\" (UID: \"518ffff8-8119-41be-8b76-ce49d5751254\") " pod="openshift-ingress/router-default-79f8cd6fdd-79bhf"
Mar 12 18:29:25.131934 master-0 kubenswrapper[29097]: I0312 18:29:25.131889 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rhmv\" (UniqueName: \"kubernetes.io/projected/b8dd13a7-10e5-431b-8d30-405dcfea02f5-kube-api-access-7rhmv\") pod \"ovnkube-node-hx8q8\" (UID: \"b8dd13a7-10e5-431b-8d30-405dcfea02f5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8"
Mar 12 18:29:25.152908 master-0 kubenswrapper[29097]: I0312 18:29:25.152855 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vr66\" (UniqueName: \"kubernetes.io/projected/45aa4887-c913-4ece-ae34-fcde33832621-kube-api-access-4vr66\") pod \"csi-snapshot-controller-operator-5685fbc7d-649db\" (UID: \"45aa4887-c913-4ece-ae34-fcde33832621\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-649db"
Mar 12 18:29:25.163479 master-0 kubenswrapper[29097]: I0312 18:29:25.163410 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 18:29:25.167161 master-0 kubenswrapper[29097]: I0312 18:29:25.167123 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 18:29:25.172592 master-0 kubenswrapper[29097]: I0312 18:29:25.172559 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdc26\" (UniqueName: \"kubernetes.io/projected/0cc54e47-af53-448a-b1c9-043710890a32-kube-api-access-bdc26\") pod \"redhat-marketplace-ggkqg\" (UID: \"0cc54e47-af53-448a-b1c9-043710890a32\") " pod="openshift-marketplace/redhat-marketplace-ggkqg"
Mar 12 18:29:25.179805 master-0 kubenswrapper[29097]: I0312 18:29:25.179766 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r"
Mar 12 18:29:25.185030 master-0 kubenswrapper[29097]: I0312 18:29:25.184993 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-npt4r"
Mar 12 18:29:25.197736 master-0 kubenswrapper[29097]: I0312 18:29:25.197692 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f72ng\" (UniqueName: \"kubernetes.io/projected/3d77a98a-0176-4924-81d3-8e9890852b38-kube-api-access-f72ng\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:29:25.217152 master-0 kubenswrapper[29097]: I0312 18:29:25.217109 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7pjn\" (UniqueName: \"kubernetes.io/projected/41c1bd85-369e-4341-9e80-8b4b248b5572-kube-api-access-q7pjn\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx"
Mar 12 18:29:25.237091 master-0 kubenswrapper[29097]: I0312 18:29:25.237033 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n4d5\" (UniqueName: \"kubernetes.io/projected/b648b6de-59a6-42da-84e2-77ea0264ae25-kube-api-access-7n4d5\") pod \"network-check-source-7c67b67d47-g4dkj\" (UID: \"b648b6de-59a6-42da-84e2-77ea0264ae25\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-g4dkj"
Mar 12 18:29:25.254719 master-0 kubenswrapper[29097]: I0312 18:29:25.254681 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th72r\" (UniqueName: \"kubernetes.io/projected/aee40f88-83e4-45c8-8331-969943f9f9aa-kube-api-access-th72r\") pod \"cluster-autoscaler-operator-69576476f7-hkfnq\" (UID: \"aee40f88-83e4-45c8-8331-969943f9f9aa\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-hkfnq"
Mar 12 18:29:25.272354 master-0 kubenswrapper[29097]: I0312 18:29:25.272302 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdlcw\" (UniqueName: \"kubernetes.io/projected/8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe-kube-api-access-tdlcw\") pod \"network-node-identity-hqrqt\" (UID: \"8e2a83a2-7063-4e17-bf6c-ca6fc6f8cfbe\") " pod="openshift-network-node-identity/network-node-identity-hqrqt"
Mar 12 18:29:25.302488 master-0 kubenswrapper[29097]: I0312 18:29:25.302441 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf"
Mar 12 18:29:25.329537 master-0 kubenswrapper[29097]: I0312 18:29:25.329422 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6595\" (UniqueName: \"kubernetes.io/projected/25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327-kube-api-access-x6595\") pod \"multus-admission-controller-7769569c45-dq2gs\" (UID: \"25c5cb6b-a6aa-4b02-ac85-ea45ff6c0327\") " pod="openshift-multus/multus-admission-controller-7769569c45-dq2gs"
Mar 12 18:29:25.332739 master-0 kubenswrapper[29097]: I0312 18:29:25.332710 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptrtx\" (UniqueName: \"kubernetes.io/projected/33feec78-4592-4343-965b-aa1b7044fcf3-kube-api-access-ptrtx\") pod \"network-check-target-cpthp\" (UID: \"33feec78-4592-4343-965b-aa1b7044fcf3\") " pod="openshift-network-diagnostics/network-check-target-cpthp"
Mar 12 18:29:25.340480 master-0 kubenswrapper[29097]: I0312 18:29:25.340434 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcvfv\" (UniqueName: \"kubernetes.io/projected/f3a2cda2-b70f-4128-a1be-48503f5aad6d-kube-api-access-tcvfv\") pod \"cluster-olm-operator-77899cf6d-6nvn4\" (UID: \"f3a2cda2-b70f-4128-a1be-48503f5aad6d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-6nvn4"
Mar 12 18:29:25.361699 master-0 kubenswrapper[29097]: I0312 18:29:25.361656 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4ae1240-e04e-48e9-88df-9f1a53508da7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-dpb6k\" (UID: \"d4ae1240-e04e-48e9-88df-9f1a53508da7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-dpb6k"
Mar 12 18:29:25.383942 master-0 kubenswrapper[29097]: I0312 18:29:25.383896 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmntw\" (UniqueName: \"kubernetes.io/projected/78c13011-7a79-445f-807c-4f5e75643549-kube-api-access-bmntw\") pod \"openshift-state-metrics-74cc79fd76-f59x9\" (UID: \"78c13011-7a79-445f-807c-4f5e75643549\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-f59x9"
Mar 12 18:29:25.395685 master-0 kubenswrapper[29097]: I0312 18:29:25.395434 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8qw4\" (UniqueName: \"kubernetes.io/projected/8c241720-7815-40fd-8d4a-1685a43b5893-kube-api-access-l8qw4\") pod \"migrator-57ccdf9b5-w72wh\" (UID: \"8c241720-7815-40fd-8d4a-1685a43b5893\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-w72wh"
Mar 12 18:29:25.411238 master-0 kubenswrapper[29097]: I0312 18:29:25.411186 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbzcs\" (UniqueName: \"kubernetes.io/projected/9717d467-af1a-4de0-88e0-c47ec4d12d6e-kube-api-access-kbzcs\") pod \"node-resolver-7lzgx\" (UID: \"9717d467-af1a-4de0-88e0-c47ec4d12d6e\") " pod="openshift-dns/node-resolver-7lzgx"
Mar 12 18:29:25.428770 master-0 kubenswrapper[29097]: I0312 18:29:25.428704 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq"
Mar 12 18:29:25.428978 master-0 kubenswrapper[29097]: I0312 18:29:25.428831 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx"
Mar 12 18:29:25.428978 master-0 kubenswrapper[29097]: I0312 18:29:25.428887 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx"
Mar 12 18:29:25.429300 master-0 kubenswrapper[29097]: I0312 18:29:25.429267 29097 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-config\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7" Mar 12 18:29:25.429622 master-0 kubenswrapper[29097]: I0312 18:29:25.429594 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" Mar 12 18:29:25.429875 master-0 kubenswrapper[29097]: I0312 18:29:25.429848 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 18:29:25.430132 master-0 kubenswrapper[29097]: I0312 18:29:25.430050 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/41c1bd85-369e-4341-9e80-8b4b248b5572-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-qs7tx\" (UID: \"41c1bd85-369e-4341-9e80-8b4b248b5572\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-qs7tx" Mar 12 18:29:25.430132 master-0 kubenswrapper[29097]: I0312 18:29:25.429007 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-config\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: 
\"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7" Mar 12 18:29:25.430243 master-0 kubenswrapper[29097]: I0312 18:29:25.430137 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7" Mar 12 18:29:25.430243 master-0 kubenswrapper[29097]: I0312 18:29:25.430202 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-node-bootstrap-token\") pod \"machine-config-server-2jzxq\" (UID: \"4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6\") " pod="openshift-machine-config-operator/machine-config-server-2jzxq" Mar 12 18:29:25.430365 master-0 kubenswrapper[29097]: I0312 18:29:25.430293 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" Mar 12 18:29:25.430714 master-0 kubenswrapper[29097]: I0312 18:29:25.430685 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d77a98a-0176-4924-81d3-8e9890852b38-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-sqdhq\" (UID: \"3d77a98a-0176-4924-81d3-8e9890852b38\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-sqdhq" Mar 12 18:29:25.430961 master-0 kubenswrapper[29097]: I0312 18:29:25.430934 29097 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/030160af-c915-4f00-903a-1c4b5c2b719a-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7" Mar 12 18:29:25.431341 master-0 kubenswrapper[29097]: I0312 18:29:25.431302 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-node-bootstrap-token\") pod \"machine-config-server-2jzxq\" (UID: \"4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6\") " pod="openshift-machine-config-operator/machine-config-server-2jzxq" Mar 12 18:29:25.443279 master-0 kubenswrapper[29097]: I0312 18:29:25.443226 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 12 18:29:25.443501 master-0 kubenswrapper[29097]: I0312 18:29:25.443402 29097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 18:29:25.444083 master-0 kubenswrapper[29097]: I0312 18:29:25.443840 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmxc2\" (UniqueName: \"kubernetes.io/projected/e22c7035-4b7a-48cb-9abb-db277b387842-kube-api-access-pmxc2\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:29:25.459103 master-0 kubenswrapper[29097]: I0312 18:29:25.459046 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfpb9\" (UniqueName: \"kubernetes.io/projected/37cd9c0a-697e-4e67-932b-b331ff77c8c0-kube-api-access-pfpb9\") pod \"openshift-config-operator-64488f9d78-tjp2j\" (UID: \"37cd9c0a-697e-4e67-932b-b331ff77c8c0\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 
18:29:25.468042 master-0 kubenswrapper[29097]: I0312 18:29:25.467988 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 12 18:29:25.475943 master-0 kubenswrapper[29097]: I0312 18:29:25.475899 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmvnh\" (UniqueName: \"kubernetes.io/projected/b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c-kube-api-access-xmvnh\") pod \"certified-operators-6jhwp\" (UID: \"b86295a3-1fc8-495f-b1b2-5cc4a3c3ab2c\") " pod="openshift-marketplace/certified-operators-6jhwp" Mar 12 18:29:25.502174 master-0 kubenswrapper[29097]: I0312 18:29:25.502125 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68xhl\" (UniqueName: \"kubernetes.io/projected/4687cf53-55d7-42b7-b24d-e57da3989fd6-kube-api-access-68xhl\") pod \"machine-api-operator-84bf6db4f9-gnrzd\" (UID: \"4687cf53-55d7-42b7-b24d-e57da3989fd6\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-gnrzd" Mar 12 18:29:25.513642 master-0 kubenswrapper[29097]: I0312 18:29:25.513601 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmzf4\" (UniqueName: \"kubernetes.io/projected/fb529297-b3de-4167-a91e-0a63725b3b0f-kube-api-access-tmzf4\") pod \"apiserver-6cb976c975-4sxlg\" (UID: \"fb529297-b3de-4167-a91e-0a63725b3b0f\") " pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg" Mar 12 18:29:25.530887 master-0 kubenswrapper[29097]: I0312 18:29:25.530849 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtrvs\" (UniqueName: \"kubernetes.io/projected/8fe3d699-023e-4de7-8d42-6c9d8a5e68f3-kube-api-access-xtrvs\") pod \"service-ca-84bfdbbb7f-769nb\" (UID: \"8fe3d699-023e-4de7-8d42-6c9d8a5e68f3\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-769nb" Mar 12 18:29:25.537573 master-0 kubenswrapper[29097]: I0312 18:29:25.537532 29097 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:29:25.537723 master-0 kubenswrapper[29097]: I0312 18:29:25.537708 29097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 18:29:25.541398 master-0 kubenswrapper[29097]: I0312 18:29:25.541313 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:29:25.552508 master-0 kubenswrapper[29097]: I0312 18:29:25.552468 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfjj6\" (UniqueName: \"kubernetes.io/projected/bce831df-c604-4608-a24e-b14d62c5287a-kube-api-access-wfjj6\") pod \"csi-snapshot-controller-7577d6f48-2ltx9\" (UID: \"bce831df-c604-4608-a24e-b14d62c5287a\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-2ltx9" Mar 12 18:29:25.571734 master-0 kubenswrapper[29097]: I0312 18:29:25.571697 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6ggc\" (UniqueName: \"kubernetes.io/projected/74eb1407-de29-42e5-9e6c-ce1bec3a9d80-kube-api-access-b6ggc\") pod \"ovnkube-control-plane-66b55d57d-w7wj9\" (UID: \"74eb1407-de29-42e5-9e6c-ce1bec3a9d80\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-w7wj9" Mar 12 18:29:25.591358 master-0 kubenswrapper[29097]: I0312 18:29:25.589663 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:29:25.594556 master-0 kubenswrapper[29097]: I0312 18:29:25.593320 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw4m5\" (UniqueName: \"kubernetes.io/projected/d1b3859c-20a1-4a1c-8508-86ed843768f5-kube-api-access-gw4m5\") pod \"catalogd-controller-manager-7f8b8b6f4c-mb6tc\" (UID: \"d1b3859c-20a1-4a1c-8508-86ed843768f5\") " 
pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:29:25.594556 master-0 kubenswrapper[29097]: I0312 18:29:25.593741 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tjp2j" Mar 12 18:29:25.609818 master-0 kubenswrapper[29097]: I0312 18:29:25.609782 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 18:29:25.614556 master-0 kubenswrapper[29097]: I0312 18:29:25.613498 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkftr\" (UniqueName: \"kubernetes.io/projected/e5fb0152-3efd-4000-bce3-fa90b75316ae-kube-api-access-pkftr\") pod \"cluster-baremetal-operator-5cdb4c5598-2psgb\" (UID: \"e5fb0152-3efd-4000-bce3-fa90b75316ae\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-2psgb" Mar 12 18:29:25.631791 master-0 kubenswrapper[29097]: I0312 18:29:25.631730 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x57x\" (UniqueName: \"kubernetes.io/projected/4519000b-e475-4c26-a1c0-bf05cd9c242b-kube-api-access-5x57x\") pod \"community-operators-nmmwm\" (UID: \"4519000b-e475-4c26-a1c0-bf05cd9c242b\") " pod="openshift-marketplace/community-operators-nmmwm" Mar 12 18:29:25.653887 master-0 kubenswrapper[29097]: I0312 18:29:25.653836 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrg6p\" (UniqueName: \"kubernetes.io/projected/d3e5b8c8-a100-4880-a0b9-9c3989d4e739-kube-api-access-jrg6p\") pod \"redhat-operators-d5tcw\" (UID: \"d3e5b8c8-a100-4880-a0b9-9c3989d4e739\") " pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:29:25.670529 master-0 kubenswrapper[29097]: I0312 18:29:25.670476 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxsgv\" (UniqueName: 
\"kubernetes.io/projected/455f0aad-add2-49d0-995c-f92467bce2d6-kube-api-access-pxsgv\") pod \"multus-additional-cni-plugins-lv8hk\" (UID: \"455f0aad-add2-49d0-995c-f92467bce2d6\") " pod="openshift-multus/multus-additional-cni-plugins-lv8hk" Mar 12 18:29:25.695063 master-0 kubenswrapper[29097]: I0312 18:29:25.694477 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjz8k\" (UniqueName: \"kubernetes.io/projected/1287cbb9-c9f6-48d2-9fda-f4464074e41b-kube-api-access-hjz8k\") pod \"cluster-storage-operator-6fbfc8dc8f-88gzm\" (UID: \"1287cbb9-c9f6-48d2-9fda-f4464074e41b\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-88gzm" Mar 12 18:29:25.714935 master-0 kubenswrapper[29097]: I0312 18:29:25.714901 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e22c7035-4b7a-48cb-9abb-db277b387842-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-l4krq\" (UID: \"e22c7035-4b7a-48cb-9abb-db277b387842\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-l4krq" Mar 12 18:29:25.739101 master-0 kubenswrapper[29097]: I0312 18:29:25.738978 29097 request.go:700] Waited for 3.914196087s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/serviceaccounts/node-exporter/token Mar 12 18:29:25.756849 master-0 kubenswrapper[29097]: I0312 18:29:25.756808 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lmj2\" (UniqueName: \"kubernetes.io/projected/9b41258c-ac1d-4e00-ac5e-732d85441f12-kube-api-access-7lmj2\") pod \"apiserver-5786c989f8-f6jgb\" (UID: \"9b41258c-ac1d-4e00-ac5e-732d85441f12\") " pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" Mar 12 18:29:25.761150 master-0 kubenswrapper[29097]: I0312 18:29:25.761122 29097 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-52svc\" (UniqueName: \"kubernetes.io/projected/adb0dbbf-458d-46f5-b236-d4904e125418-kube-api-access-52svc\") pod \"node-exporter-6v462\" (UID: \"adb0dbbf-458d-46f5-b236-d4904e125418\") " pod="openshift-monitoring/node-exporter-6v462" Mar 12 18:29:25.777198 master-0 kubenswrapper[29097]: I0312 18:29:25.777148 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6ggg\" (UniqueName: \"kubernetes.io/projected/236f2886-bb69-49a7-9471-36454fd1cbd3-kube-api-access-b6ggg\") pod \"openshift-apiserver-operator-799b6db4d7-6qlzz\" (UID: \"236f2886-bb69-49a7-9471-36454fd1cbd3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6qlzz" Mar 12 18:29:25.793815 master-0 kubenswrapper[29097]: I0312 18:29:25.793770 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlf77\" (UniqueName: \"kubernetes.io/projected/306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed-kube-api-access-wlf77\") pod \"cluster-node-tuning-operator-66c7586884-lqpbp\" (UID: \"306e0b7d-e37c-4ee0-bf2b-b787dc6f3aed\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-lqpbp" Mar 12 18:29:25.812914 master-0 kubenswrapper[29097]: I0312 18:29:25.812839 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h65dg\" (UniqueName: \"kubernetes.io/projected/4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6-kube-api-access-h65dg\") pod \"machine-config-server-2jzxq\" (UID: \"4784ec2e-e7c1-4ea4-9e7c-6000ddf1c1b6\") " pod="openshift-machine-config-operator/machine-config-server-2jzxq" Mar 12 18:29:25.835157 master-0 kubenswrapper[29097]: I0312 18:29:25.834957 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn9nf\" (UniqueName: \"kubernetes.io/projected/062f1b21-2ffc-47da-8334-427c3b2a1a90-kube-api-access-jn9nf\") pod \"authentication-operator-7c6989d6c4-ljw8b\" (UID: 
\"062f1b21-2ffc-47da-8334-427c3b2a1a90\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ljw8b" Mar 12 18:29:25.846939 master-0 kubenswrapper[29097]: I0312 18:29:25.846865 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:29:25.851055 master-0 kubenswrapper[29097]: I0312 18:29:25.851017 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lj7z\" (UniqueName: \"kubernetes.io/projected/999f02f6-e9b8-4d4b-ac35-b8b43a931cfc-kube-api-access-2lj7z\") pod \"tuned-c6qmx\" (UID: \"999f02f6-e9b8-4d4b-ac35-b8b43a931cfc\") " pod="openshift-cluster-node-tuning-operator/tuned-c6qmx" Mar 12 18:29:25.871381 master-0 kubenswrapper[29097]: I0312 18:29:25.871343 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clsd9\" (UniqueName: \"kubernetes.io/projected/f5e09875-4445-4584-94f0-243148307bb0-kube-api-access-clsd9\") pod \"insights-operator-8f89dfddd-m6z6d\" (UID: \"f5e09875-4445-4584-94f0-243148307bb0\") " pod="openshift-insights/insights-operator-8f89dfddd-m6z6d" Mar 12 18:29:25.891213 master-0 kubenswrapper[29097]: I0312 18:29:25.891173 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jgbv\" (UniqueName: \"kubernetes.io/projected/5b48f8fd-2efe-44e3-a6d7-c71358b83a2f-kube-api-access-9jgbv\") pod \"network-metrics-daemon-z4sc9\" (UID: \"5b48f8fd-2efe-44e3-a6d7-c71358b83a2f\") " pod="openshift-multus/network-metrics-daemon-z4sc9" Mar 12 18:29:25.902296 master-0 kubenswrapper[29097]: I0312 18:29:25.902249 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" Mar 12 18:29:25.902410 master-0 kubenswrapper[29097]: I0312 18:29:25.902313 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" Mar 12 
18:29:25.911160 master-0 kubenswrapper[29097]: I0312 18:29:25.911113 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5lf8\" (UniqueName: \"kubernetes.io/projected/ee55b576-6b8d-4217-b5a7-93b023a1e885-kube-api-access-j5lf8\") pod \"machine-config-controller-ff46b7bdf-k5hsv\" (UID: \"ee55b576-6b8d-4217-b5a7-93b023a1e885\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-k5hsv" Mar 12 18:29:25.930236 master-0 kubenswrapper[29097]: I0312 18:29:25.930185 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:29:25.932272 master-0 kubenswrapper[29097]: I0312 18:29:25.932241 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsdjs\" (UniqueName: \"kubernetes.io/projected/0fb78c61-2051-42e2-8668-fa7404ccac43-kube-api-access-zsdjs\") pod \"cluster-samples-operator-664cb58b85-8762n\" (UID: \"0fb78c61-2051-42e2-8668-fa7404ccac43\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-8762n" Mar 12 18:29:25.934918 master-0 kubenswrapper[29097]: I0312 18:29:25.934877 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-kwv7s" Mar 12 18:29:25.954161 master-0 kubenswrapper[29097]: I0312 18:29:25.954095 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p4dz\" (UniqueName: \"kubernetes.io/projected/030160af-c915-4f00-903a-1c4b5c2b719a-kube-api-access-9p4dz\") pod \"machine-approver-754bdc9f9d-4w5z7\" (UID: \"030160af-c915-4f00-903a-1c4b5c2b719a\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-4w5z7" Mar 12 18:29:25.980817 master-0 kubenswrapper[29097]: I0312 18:29:25.980771 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdb9w\" 
(UniqueName: \"kubernetes.io/projected/b6d288e3-8e73-44d2-874d-64c6c98dd991-kube-api-access-vdb9w\") pod \"network-operator-7c649bf6d4-vksss\" (UID: \"b6d288e3-8e73-44d2-874d-64c6c98dd991\") " pod="openshift-network-operator/network-operator-7c649bf6d4-vksss" Mar 12 18:29:25.992151 master-0 kubenswrapper[29097]: I0312 18:29:25.992097 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp5gk\" (UniqueName: \"kubernetes.io/projected/b38e7fcd-8f7a-4d4f-8702-7ef205261054-kube-api-access-zp5gk\") pod \"packageserver-694648486f-f89lc\" (UID: \"b38e7fcd-8f7a-4d4f-8702-7ef205261054\") " pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:29:25.994661 master-0 kubenswrapper[29097]: I0312 18:29:25.994566 29097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 18:29:26.000683 master-0 kubenswrapper[29097]: I0312 18:29:26.000636 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:29:26.014170 master-0 kubenswrapper[29097]: I0312 18:29:26.014129 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bttzm\" (UniqueName: \"kubernetes.io/projected/e94d098b-fbcc-4e85-b8ad-42f3a21c822c-kube-api-access-bttzm\") pod \"cluster-monitoring-operator-674cbfbd9d-fz79c\" (UID: \"e94d098b-fbcc-4e85-b8ad-42f3a21c822c\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-fz79c" Mar 12 18:29:26.031420 master-0 kubenswrapper[29097]: I0312 18:29:26.031390 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clz8x\" (UniqueName: \"kubernetes.io/projected/1c016b1e-d47c-47d4-a15f-4160e7731c82-kube-api-access-clz8x\") pod \"controller-manager-7cd74f9776-2rmc9\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" Mar 12 18:29:26.050918 master-0 
kubenswrapper[29097]: I0312 18:29:26.050859 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s55hv\" (UniqueName: \"kubernetes.io/projected/223a548b-a3ad-40dd-82de-e3dbb7f3e4fa-kube-api-access-s55hv\") pod \"openshift-controller-manager-operator-8565d84698-m6hsp\" (UID: \"223a548b-a3ad-40dd-82de-e3dbb7f3e4fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-m6hsp" Mar 12 18:29:26.073017 master-0 kubenswrapper[29097]: E0312 18:29:26.072969 29097 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 18:29:26.073017 master-0 kubenswrapper[29097]: E0312 18:29:26.073007 29097 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 18:29:26.073226 master-0 kubenswrapper[29097]: E0312 18:29:26.073073 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access podName:4cb73c69-af16-4565-bdb5-aeae9dcfb423 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:26.573053011 +0000 UTC m=+6.127033108 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "4cb73c69-af16-4565-bdb5-aeae9dcfb423") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 18:29:26.142339 master-0 kubenswrapper[29097]: I0312 18:29:26.142235 29097 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Mar 12 18:29:26.142339 master-0 kubenswrapper[29097]: I0312 18:29:26.142329 29097 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 12 18:29:26.304331 master-0 kubenswrapper[29097]: I0312 18:29:26.304239 29097 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:26.304331 master-0 kubenswrapper[29097]: [-]has-synced failed: reason withheld Mar 12 18:29:26.304331 master-0 kubenswrapper[29097]: [+]process-running ok Mar 12 18:29:26.304331 master-0 kubenswrapper[29097]: healthz check failed Mar 12 18:29:26.304331 master-0 kubenswrapper[29097]: I0312 18:29:26.304304 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:26.354973 master-0 kubenswrapper[29097]: I0312 18:29:26.354889 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=5.354872424 podStartE2EDuration="5.354872424s" podCreationTimestamp="2026-03-12 18:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:29:26.353999392 
+0000 UTC m=+5.907979489" watchObservedRunningTime="2026-03-12 18:29:26.354872424 +0000 UTC m=+5.908852531" Mar 12 18:29:26.479558 master-0 kubenswrapper[29097]: I0312 18:29:26.479386 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:29:26.481915 master-0 kubenswrapper[29097]: I0312 18:29:26.481880 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-694648486f-f89lc" Mar 12 18:29:26.549224 master-0 kubenswrapper[29097]: I0312 18:29:26.549169 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:29:26.551699 master-0 kubenswrapper[29097]: I0312 18:29:26.551639 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-mb6tc" Mar 12 18:29:26.646022 master-0 kubenswrapper[29097]: I0312 18:29:26.645957 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 18:29:26.647057 master-0 kubenswrapper[29097]: E0312 18:29:26.647011 29097 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 18:29:26.647057 master-0 kubenswrapper[29097]: E0312 18:29:26.647046 29097 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 18:29:26.647250 master-0 kubenswrapper[29097]: E0312 
18:29:26.647085 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access podName:4cb73c69-af16-4565-bdb5-aeae9dcfb423 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:27.647070039 +0000 UTC m=+7.201050136 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "4cb73c69-af16-4565-bdb5-aeae9dcfb423") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 18:29:26.668393 master-0 kubenswrapper[29097]: I0312 18:29:26.668338 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" Mar 12 18:29:26.673011 master-0 kubenswrapper[29097]: I0312 18:29:26.672958 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" Mar 12 18:29:26.685344 master-0 kubenswrapper[29097]: I0312 18:29:26.685028 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" Mar 12 18:29:26.689038 master-0 kubenswrapper[29097]: I0312 18:29:26.689016 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" Mar 12 18:29:26.832226 master-0 kubenswrapper[29097]: I0312 18:29:26.832183 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg" Mar 12 18:29:26.863210 master-0 kubenswrapper[29097]: I0312 18:29:26.863167 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6jhwp" Mar 12 18:29:26.999802 master-0 
kubenswrapper[29097]: I0312 18:29:26.999763 29097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 18:29:26.999979 master-0 kubenswrapper[29097]: I0312 18:29:26.999838 29097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 18:29:27.304639 master-0 kubenswrapper[29097]: I0312 18:29:27.304596 29097 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:27.304639 master-0 kubenswrapper[29097]: [-]has-synced failed: reason withheld Mar 12 18:29:27.304639 master-0 kubenswrapper[29097]: [+]process-running ok Mar 12 18:29:27.304639 master-0 kubenswrapper[29097]: healthz check failed Mar 12 18:29:27.305027 master-0 kubenswrapper[29097]: I0312 18:29:27.304654 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:27.508264 master-0 kubenswrapper[29097]: I0312 18:29:27.508210 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nmmwm" Mar 12 18:29:27.540888 master-0 kubenswrapper[29097]: I0312 18:29:27.540822 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nmmwm" Mar 12 18:29:27.661258 master-0 kubenswrapper[29097]: I0312 18:29:27.661144 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " 
pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 18:29:27.661771 master-0 kubenswrapper[29097]: E0312 18:29:27.661294 29097 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 18:29:27.661771 master-0 kubenswrapper[29097]: E0312 18:29:27.661317 29097 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 18:29:27.661771 master-0 kubenswrapper[29097]: E0312 18:29:27.661361 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access podName:4cb73c69-af16-4565-bdb5-aeae9dcfb423 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:29.661345877 +0000 UTC m=+9.215325974 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "4cb73c69-af16-4565-bdb5-aeae9dcfb423") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 18:29:27.710259 master-0 kubenswrapper[29097]: I0312 18:29:27.710125 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=6.7101104750000005 podStartE2EDuration="6.710110475s" podCreationTimestamp="2026-03-12 18:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:29:27.708617807 +0000 UTC m=+7.262597904" watchObservedRunningTime="2026-03-12 18:29:27.710110475 +0000 UTC m=+7.264090572" Mar 12 18:29:27.807702 master-0 kubenswrapper[29097]: I0312 18:29:27.807634 29097 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:29:28.305119 master-0 kubenswrapper[29097]: I0312 18:29:28.305081 29097 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:28.305119 master-0 kubenswrapper[29097]: [-]has-synced failed: reason withheld Mar 12 18:29:28.305119 master-0 kubenswrapper[29097]: [+]process-running ok Mar 12 18:29:28.305119 master-0 kubenswrapper[29097]: healthz check failed Mar 12 18:29:28.305411 master-0 kubenswrapper[29097]: I0312 18:29:28.305131 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:28.643785 master-0 kubenswrapper[29097]: I0312 18:29:28.643678 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:29:28.826113 master-0 kubenswrapper[29097]: I0312 18:29:28.825311 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:28.860894 master-0 kubenswrapper[29097]: I0312 18:29:28.859045 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:29.012417 master-0 kubenswrapper[29097]: I0312 18:29:29.012305 29097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 18:29:29.012417 master-0 kubenswrapper[29097]: I0312 18:29:29.012347 29097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 18:29:29.305381 master-0 kubenswrapper[29097]: I0312 18:29:29.305331 29097 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:29.305381 master-0 kubenswrapper[29097]: [-]has-synced failed: reason withheld Mar 12 18:29:29.305381 master-0 kubenswrapper[29097]: [+]process-running ok Mar 12 18:29:29.305381 master-0 kubenswrapper[29097]: healthz check failed Mar 12 18:29:29.305381 master-0 kubenswrapper[29097]: I0312 18:29:29.305388 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:29.326778 master-0 kubenswrapper[29097]: I0312 18:29:29.326720 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:29:29.328948 master-0 kubenswrapper[29097]: I0312 18:29:29.328916 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-cpthp" Mar 12 18:29:29.697495 master-0 kubenswrapper[29097]: I0312 18:29:29.697358 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 18:29:29.697690 master-0 kubenswrapper[29097]: E0312 18:29:29.697632 29097 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 18:29:29.697690 master-0 kubenswrapper[29097]: E0312 18:29:29.697673 29097 projected.go:194] Error 
preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 18:29:29.697789 master-0 kubenswrapper[29097]: E0312 18:29:29.697756 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access podName:4cb73c69-af16-4565-bdb5-aeae9dcfb423 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:33.697729801 +0000 UTC m=+13.251709928 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "4cb73c69-af16-4565-bdb5-aeae9dcfb423") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 18:29:29.713142 master-0 kubenswrapper[29097]: I0312 18:29:29.713088 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg" Mar 12 18:29:29.741120 master-0 kubenswrapper[29097]: I0312 18:29:29.741060 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:29:30.304362 master-0 kubenswrapper[29097]: I0312 18:29:30.304304 29097 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:30.304362 master-0 kubenswrapper[29097]: [-]has-synced failed: reason withheld Mar 12 18:29:30.304362 master-0 kubenswrapper[29097]: [+]process-running ok Mar 12 18:29:30.304362 master-0 kubenswrapper[29097]: healthz check failed Mar 12 18:29:30.304923 master-0 kubenswrapper[29097]: I0312 18:29:30.304395 29097 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:30.404112 master-0 kubenswrapper[29097]: I0312 18:29:30.403387 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:29:30.469850 master-0 kubenswrapper[29097]: I0312 18:29:30.469765 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:29:30.488216 master-0 kubenswrapper[29097]: I0312 18:29:30.488135 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6jhwp" Mar 12 18:29:30.588267 master-0 kubenswrapper[29097]: I0312 18:29:30.588089 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:29:30.588445 master-0 kubenswrapper[29097]: I0312 18:29:30.588288 29097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 18:29:30.593285 master-0 kubenswrapper[29097]: I0312 18:29:30.593252 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:29:30.910123 master-0 kubenswrapper[29097]: I0312 18:29:30.909996 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" Mar 12 18:29:30.918818 master-0 kubenswrapper[29097]: I0312 18:29:30.918738 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-5786c989f8-f6jgb" Mar 12 18:29:31.304977 master-0 kubenswrapper[29097]: I0312 18:29:31.304887 29097 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:31.304977 master-0 kubenswrapper[29097]: [-]has-synced failed: reason withheld Mar 12 18:29:31.304977 master-0 kubenswrapper[29097]: [+]process-running ok Mar 12 18:29:31.304977 master-0 kubenswrapper[29097]: healthz check failed Mar 12 18:29:31.306030 master-0 kubenswrapper[29097]: I0312 18:29:31.305974 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:31.311198 master-0 kubenswrapper[29097]: I0312 18:29:31.311163 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:29:31.311455 master-0 kubenswrapper[29097]: I0312 18:29:31.311419 29097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 18:29:31.315495 master-0 kubenswrapper[29097]: I0312 18:29:31.315459 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:29:31.844102 master-0 kubenswrapper[29097]: I0312 18:29:31.844054 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg" Mar 12 18:29:31.848982 master-0 kubenswrapper[29097]: I0312 18:29:31.848949 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-6cb976c975-4sxlg" Mar 12 18:29:31.866303 master-0 kubenswrapper[29097]: I0312 18:29:31.866257 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nmmwm" Mar 12 18:29:31.926882 master-0 kubenswrapper[29097]: I0312 18:29:31.926236 29097 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nmmwm" Mar 12 18:29:31.928148 master-0 kubenswrapper[29097]: I0312 18:29:31.928126 29097 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 12 18:29:31.929789 master-0 kubenswrapper[29097]: I0312 18:29:31.928287 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor" containerID="cri-o://47171d91400de4e00e465f217262a5cfbabe28599c08b7a76e6b01d33016a909" gracePeriod=5 Mar 12 18:29:32.073082 master-0 kubenswrapper[29097]: I0312 18:29:32.073032 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:29:32.082019 master-0 kubenswrapper[29097]: I0312 18:29:32.081982 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" Mar 12 18:29:32.305535 master-0 kubenswrapper[29097]: I0312 18:29:32.305475 29097 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:32.305535 master-0 kubenswrapper[29097]: [-]has-synced failed: reason withheld Mar 12 18:29:32.305535 master-0 kubenswrapper[29097]: [+]process-running ok Mar 12 18:29:32.305535 master-0 kubenswrapper[29097]: healthz check failed Mar 12 18:29:32.306135 master-0 kubenswrapper[29097]: I0312 18:29:32.305552 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:32.441282 master-0 kubenswrapper[29097]: I0312 18:29:32.441211 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-tzgs9" Mar 12 18:29:32.445445 master-0 kubenswrapper[29097]: I0312 18:29:32.445424 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-tzgs9" Mar 12 18:29:33.304907 master-0 kubenswrapper[29097]: I0312 18:29:33.304846 29097 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:33.304907 master-0 kubenswrapper[29097]: [-]has-synced failed: reason withheld Mar 12 18:29:33.304907 master-0 kubenswrapper[29097]: [+]process-running ok Mar 12 18:29:33.304907 master-0 kubenswrapper[29097]: healthz check failed Mar 12 18:29:33.305183 master-0 kubenswrapper[29097]: I0312 18:29:33.304916 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:33.334707 master-0 kubenswrapper[29097]: I0312 18:29:33.334665 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:29:33.344879 master-0 kubenswrapper[29097]: I0312 18:29:33.344844 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-clkx5" Mar 12 18:29:33.755045 master-0 kubenswrapper[29097]: I0312 18:29:33.754918 29097 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 18:29:33.755228 master-0 kubenswrapper[29097]: E0312 18:29:33.755132 29097 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 18:29:33.755228 master-0 kubenswrapper[29097]: E0312 18:29:33.755151 29097 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 18:29:33.755228 master-0 kubenswrapper[29097]: E0312 18:29:33.755193 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access podName:4cb73c69-af16-4565-bdb5-aeae9dcfb423 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:41.755179143 +0000 UTC m=+21.309159240 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "4cb73c69-af16-4565-bdb5-aeae9dcfb423") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 18:29:34.304962 master-0 kubenswrapper[29097]: I0312 18:29:34.304919 29097 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:34.304962 master-0 kubenswrapper[29097]: [-]has-synced failed: reason withheld Mar 12 18:29:34.304962 master-0 kubenswrapper[29097]: [+]process-running ok Mar 12 18:29:34.304962 master-0 kubenswrapper[29097]: healthz check failed Mar 12 18:29:34.305245 master-0 kubenswrapper[29097]: I0312 18:29:34.304971 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:34.349741 master-0 kubenswrapper[29097]: I0312 18:29:34.349685 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:34.350231 master-0 kubenswrapper[29097]: I0312 18:29:34.349852 29097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 18:29:34.350231 master-0 kubenswrapper[29097]: I0312 18:29:34.349865 29097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 18:29:34.376675 master-0 kubenswrapper[29097]: I0312 18:29:34.376636 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:34.989069 master-0 kubenswrapper[29097]: I0312 18:29:34.989018 29097 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:29:35.052601 master-0 kubenswrapper[29097]: I0312 18:29:35.052564 29097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 18:29:35.097718 master-0 kubenswrapper[29097]: I0312 18:29:35.097622 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:29:35.098826 master-0 kubenswrapper[29097]: I0312 18:29:35.098799 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-9nzsn" Mar 12 18:29:35.307398 master-0 kubenswrapper[29097]: I0312 18:29:35.307367 29097 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:35.307398 master-0 kubenswrapper[29097]: [-]has-synced failed: reason withheld Mar 12 18:29:35.307398 master-0 kubenswrapper[29097]: [+]process-running ok Mar 12 18:29:35.307398 master-0 kubenswrapper[29097]: healthz check failed Mar 12 18:29:35.307751 master-0 kubenswrapper[29097]: I0312 18:29:35.307694 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:35.422557 master-0 kubenswrapper[29097]: I0312 18:29:35.422499 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:29:35.464205 master-0 kubenswrapper[29097]: I0312 18:29:35.464159 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-d5tcw" Mar 12 18:29:36.305480 master-0 kubenswrapper[29097]: I0312 18:29:36.305419 29097 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:36.305480 master-0 kubenswrapper[29097]: [-]has-synced failed: reason withheld Mar 12 18:29:36.305480 master-0 kubenswrapper[29097]: [+]process-running ok Mar 12 18:29:36.305480 master-0 kubenswrapper[29097]: healthz check failed Mar 12 18:29:36.305817 master-0 kubenswrapper[29097]: I0312 18:29:36.305489 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:36.912583 master-0 kubenswrapper[29097]: I0312 18:29:36.912515 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6jhwp" Mar 12 18:29:36.954154 master-0 kubenswrapper[29097]: I0312 18:29:36.953866 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6jhwp" Mar 12 18:29:37.068426 master-0 kubenswrapper[29097]: I0312 18:29:37.068370 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_899242a15b2bdf3b4a04fb323647ca94/startup-monitor/0.log" Mar 12 18:29:37.068426 master-0 kubenswrapper[29097]: I0312 18:29:37.068427 29097 generic.go:334] "Generic (PLEG): container finished" podID="899242a15b2bdf3b4a04fb323647ca94" containerID="47171d91400de4e00e465f217262a5cfbabe28599c08b7a76e6b01d33016a909" exitCode=137 Mar 12 18:29:37.304856 master-0 kubenswrapper[29097]: I0312 18:29:37.304823 29097 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-79bhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 18:29:37.304856 master-0 kubenswrapper[29097]: [-]has-synced failed: reason withheld Mar 12 18:29:37.304856 master-0 kubenswrapper[29097]: [+]process-running ok Mar 12 18:29:37.304856 master-0 kubenswrapper[29097]: healthz check failed Mar 12 18:29:37.305247 master-0 kubenswrapper[29097]: I0312 18:29:37.305199 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" podUID="518ffff8-8119-41be-8b76-ce49d5751254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 18:29:37.494181 master-0 kubenswrapper[29097]: I0312 18:29:37.494149 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_899242a15b2bdf3b4a04fb323647ca94/startup-monitor/0.log" Mar 12 18:29:37.494349 master-0 kubenswrapper[29097]: I0312 18:29:37.494234 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:29:37.603719 master-0 kubenswrapper[29097]: I0312 18:29:37.603590 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 12 18:29:37.603719 master-0 kubenswrapper[29097]: I0312 18:29:37.603669 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 12 18:29:37.603719 master-0 kubenswrapper[29097]: I0312 18:29:37.603718 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 12 18:29:37.603719 master-0 kubenswrapper[29097]: I0312 18:29:37.603726 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log" (OuterVolumeSpecName: "var-log") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:29:37.604011 master-0 kubenswrapper[29097]: I0312 18:29:37.603749 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 12 18:29:37.604011 master-0 kubenswrapper[29097]: I0312 18:29:37.603815 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests" (OuterVolumeSpecName: "manifests") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:29:37.604011 master-0 kubenswrapper[29097]: I0312 18:29:37.603854 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:29:37.604011 master-0 kubenswrapper[29097]: I0312 18:29:37.603870 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 12 18:29:37.604011 master-0 kubenswrapper[29097]: I0312 18:29:37.604006 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock" (OuterVolumeSpecName: "var-lock") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:29:37.604173 master-0 kubenswrapper[29097]: I0312 18:29:37.604164 29097 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") on node \"master-0\" DevicePath \"\"" Mar 12 18:29:37.604216 master-0 kubenswrapper[29097]: I0312 18:29:37.604178 29097 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:29:37.604216 master-0 kubenswrapper[29097]: I0312 18:29:37.604187 29097 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") on node \"master-0\" DevicePath \"\"" Mar 12 18:29:37.604216 master-0 kubenswrapper[29097]: I0312 18:29:37.604199 29097 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 18:29:37.609246 master-0 kubenswrapper[29097]: I0312 18:29:37.609194 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:29:37.705024 master-0 kubenswrapper[29097]: I0312 18:29:37.704982 29097 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:29:38.076961 master-0 kubenswrapper[29097]: I0312 18:29:38.076929 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_899242a15b2bdf3b4a04fb323647ca94/startup-monitor/0.log" Mar 12 18:29:38.077488 master-0 kubenswrapper[29097]: I0312 18:29:38.077475 29097 scope.go:117] "RemoveContainer" containerID="47171d91400de4e00e465f217262a5cfbabe28599c08b7a76e6b01d33016a909" Mar 12 18:29:38.077837 master-0 kubenswrapper[29097]: I0312 18:29:38.077681 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:29:38.127518 master-0 kubenswrapper[29097]: I0312 18:29:38.127450 29097 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="256f176c-f033-4839-8a98-80f85a9780ea" Mar 12 18:29:38.307039 master-0 kubenswrapper[29097]: I0312 18:29:38.306983 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:29:38.309950 master-0 kubenswrapper[29097]: I0312 18:29:38.309900 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-79f8cd6fdd-79bhf" Mar 12 18:29:38.697234 master-0 kubenswrapper[29097]: I0312 18:29:38.697184 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:29:38.731719 master-0 kubenswrapper[29097]: I0312 18:29:38.731661 
29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899242a15b2bdf3b4a04fb323647ca94" path="/var/lib/kubelet/pods/899242a15b2bdf3b4a04fb323647ca94/volumes" Mar 12 18:29:38.732112 master-0 kubenswrapper[29097]: I0312 18:29:38.732081 29097 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Mar 12 18:29:38.752000 master-0 kubenswrapper[29097]: I0312 18:29:38.751936 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 12 18:29:38.752000 master-0 kubenswrapper[29097]: I0312 18:29:38.751998 29097 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="256f176c-f033-4839-8a98-80f85a9780ea" Mar 12 18:29:38.754524 master-0 kubenswrapper[29097]: I0312 18:29:38.754469 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 12 18:29:38.754582 master-0 kubenswrapper[29097]: I0312 18:29:38.754529 29097 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="256f176c-f033-4839-8a98-80f85a9780ea" Mar 12 18:29:38.776632 master-0 kubenswrapper[29097]: I0312 18:29:38.776587 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ggkqg" Mar 12 18:29:41.857026 master-0 kubenswrapper[29097]: I0312 18:29:41.856950 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 18:29:41.858005 
master-0 kubenswrapper[29097]: E0312 18:29:41.857372 29097 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 18:29:41.858005 master-0 kubenswrapper[29097]: E0312 18:29:41.857402 29097 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 18:29:41.858005 master-0 kubenswrapper[29097]: E0312 18:29:41.857466 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access podName:4cb73c69-af16-4565-bdb5-aeae9dcfb423 nodeName:}" failed. No retries permitted until 2026-03-12 18:29:57.85744226 +0000 UTC m=+37.411422397 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "4cb73c69-af16-4565-bdb5-aeae9dcfb423") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 18:29:45.736001 master-0 kubenswrapper[29097]: I0312 18:29:45.735918 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-sdr9x"] Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: E0312 18:29:45.736280 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager-cert-syncer" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: I0312 18:29:45.736302 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager-cert-syncer" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: E0312 18:29:45.736326 29097 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e418d797-2c31-404b-9dc3-251399e42542" containerName="installer" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: I0312 18:29:45.736338 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="e418d797-2c31-404b-9dc3-251399e42542" containerName="installer" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: E0312 18:29:45.736364 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cb73c69-af16-4565-bdb5-aeae9dcfb423" containerName="installer" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: I0312 18:29:45.736375 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cb73c69-af16-4565-bdb5-aeae9dcfb423" containerName="installer" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: E0312 18:29:45.736394 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: I0312 18:29:45.736404 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: E0312 18:29:45.736421 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: I0312 18:29:45.736431 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: E0312 18:29:45.736444 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec8121ea-f6e9-4232-9837-78b278a8cf54" containerName="installer" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: I0312 18:29:45.736454 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec8121ea-f6e9-4232-9837-78b278a8cf54" containerName="installer" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: E0312 
18:29:45.736465 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7542f3f1-23fe-41df-99b9-4324c75d35b7" containerName="installer" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: I0312 18:29:45.736475 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="7542f3f1-23fe-41df-99b9-4324c75d35b7" containerName="installer" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: E0312 18:29:45.736497 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4a981c-9454-4e1f-951e-1a62737659cc" containerName="assisted-installer-controller" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: I0312 18:29:45.736510 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4a981c-9454-4e1f-951e-1a62737659cc" containerName="assisted-installer-controller" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: E0312 18:29:45.736586 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2bbd04e-d147-4343-9e5d-300e42de9dbb" containerName="installer" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: I0312 18:29:45.736597 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2bbd04e-d147-4343-9e5d-300e42de9dbb" containerName="installer" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: E0312 18:29:45.736612 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50322fdb-6d3f-4237-92d2-a170e2071de5" containerName="installer" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: I0312 18:29:45.736624 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="50322fdb-6d3f-4237-92d2-a170e2071de5" containerName="installer" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: E0312 18:29:45.736649 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: I0312 18:29:45.736660 29097 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: E0312 18:29:45.736674 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a18e6ce-2fed-4e81-9191-45c1e5d3a090" containerName="installer" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: I0312 18:29:45.736684 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a18e6ce-2fed-4e81-9191-45c1e5d3a090" containerName="installer" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: E0312 18:29:45.736698 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f63924-b198-4954-ba14-5c48e8830ec0" containerName="installer" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: I0312 18:29:45.736707 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f63924-b198-4954-ba14-5c48e8830ec0" containerName="installer" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: E0312 18:29:45.736725 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30102cc9-45f8-46f8-bb34-eec48fdb297d" containerName="installer" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: I0312 18:29:45.736736 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="30102cc9-45f8-46f8-bb34-eec48fdb297d" containerName="installer" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: E0312 18:29:45.736757 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38785e6e-3052-405c-8874-4f295985def5" containerName="installer" Mar 12 18:29:45.736740 master-0 kubenswrapper[29097]: I0312 18:29:45.736768 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="38785e6e-3052-405c-8874-4f295985def5" containerName="installer" Mar 12 18:29:45.737976 master-0 kubenswrapper[29097]: E0312 18:29:45.736786 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" Mar 12 18:29:45.737976 master-0 
kubenswrapper[29097]: I0312 18:29:45.736797 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" Mar 12 18:29:45.737976 master-0 kubenswrapper[29097]: I0312 18:29:45.736954 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec8121ea-f6e9-4232-9837-78b278a8cf54" containerName="installer" Mar 12 18:29:45.737976 master-0 kubenswrapper[29097]: I0312 18:29:45.736987 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager" Mar 12 18:29:45.737976 master-0 kubenswrapper[29097]: I0312 18:29:45.737004 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="30102cc9-45f8-46f8-bb34-eec48fdb297d" containerName="installer" Mar 12 18:29:45.737976 master-0 kubenswrapper[29097]: I0312 18:29:45.737030 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 12 18:29:45.737976 master-0 kubenswrapper[29097]: I0312 18:29:45.737046 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c441a05d91070efc538925475b0a44" containerName="cluster-policy-controller" Mar 12 18:29:45.737976 master-0 kubenswrapper[29097]: I0312 18:29:45.737081 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="38785e6e-3052-405c-8874-4f295985def5" containerName="installer" Mar 12 18:29:45.737976 master-0 kubenswrapper[29097]: I0312 18:29:45.737098 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2bbd04e-d147-4343-9e5d-300e42de9dbb" containerName="installer" Mar 12 18:29:45.737976 master-0 kubenswrapper[29097]: I0312 18:29:45.737112 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="50322fdb-6d3f-4237-92d2-a170e2071de5" containerName="installer" Mar 12 18:29:45.737976 master-0 kubenswrapper[29097]: I0312 18:29:45.737128 29097 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor" Mar 12 18:29:45.737976 master-0 kubenswrapper[29097]: I0312 18:29:45.737144 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a4a981c-9454-4e1f-951e-1a62737659cc" containerName="assisted-installer-controller" Mar 12 18:29:45.737976 master-0 kubenswrapper[29097]: I0312 18:29:45.737160 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a18e6ce-2fed-4e81-9191-45c1e5d3a090" containerName="installer" Mar 12 18:29:45.737976 master-0 kubenswrapper[29097]: I0312 18:29:45.737172 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="7542f3f1-23fe-41df-99b9-4324c75d35b7" containerName="installer" Mar 12 18:29:45.737976 master-0 kubenswrapper[29097]: I0312 18:29:45.737192 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c441a05d91070efc538925475b0a44" containerName="kube-controller-manager-cert-syncer" Mar 12 18:29:45.737976 master-0 kubenswrapper[29097]: I0312 18:29:45.737213 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="e418d797-2c31-404b-9dc3-251399e42542" containerName="installer" Mar 12 18:29:45.737976 master-0 kubenswrapper[29097]: I0312 18:29:45.737230 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f63924-b198-4954-ba14-5c48e8830ec0" containerName="installer" Mar 12 18:29:45.737976 master-0 kubenswrapper[29097]: I0312 18:29:45.737253 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cb73c69-af16-4565-bdb5-aeae9dcfb423" containerName="installer" Mar 12 18:29:45.737976 master-0 kubenswrapper[29097]: I0312 18:29:45.737835 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-sdr9x" Mar 12 18:29:45.740426 master-0 kubenswrapper[29097]: I0312 18:29:45.740152 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 12 18:29:45.740426 master-0 kubenswrapper[29097]: I0312 18:29:45.740319 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-g4mv5" Mar 12 18:29:45.803636 master-0 kubenswrapper[29097]: I0312 18:29:45.803574 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f5fdd831-61b8-4134-a15b-41a2794d7794-serviceca\") pod \"node-ca-sdr9x\" (UID: \"f5fdd831-61b8-4134-a15b-41a2794d7794\") " pod="openshift-image-registry/node-ca-sdr9x" Mar 12 18:29:45.803889 master-0 kubenswrapper[29097]: I0312 18:29:45.803730 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnsdm\" (UniqueName: \"kubernetes.io/projected/f5fdd831-61b8-4134-a15b-41a2794d7794-kube-api-access-tnsdm\") pod \"node-ca-sdr9x\" (UID: \"f5fdd831-61b8-4134-a15b-41a2794d7794\") " pod="openshift-image-registry/node-ca-sdr9x" Mar 12 18:29:45.803889 master-0 kubenswrapper[29097]: I0312 18:29:45.803755 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5fdd831-61b8-4134-a15b-41a2794d7794-host\") pod \"node-ca-sdr9x\" (UID: \"f5fdd831-61b8-4134-a15b-41a2794d7794\") " pod="openshift-image-registry/node-ca-sdr9x" Mar 12 18:29:45.853570 master-0 kubenswrapper[29097]: I0312 18:29:45.853412 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:29:45.860668 master-0 kubenswrapper[29097]: I0312 18:29:45.860573 29097 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:29:45.905323 master-0 kubenswrapper[29097]: I0312 18:29:45.905262 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnsdm\" (UniqueName: \"kubernetes.io/projected/f5fdd831-61b8-4134-a15b-41a2794d7794-kube-api-access-tnsdm\") pod \"node-ca-sdr9x\" (UID: \"f5fdd831-61b8-4134-a15b-41a2794d7794\") " pod="openshift-image-registry/node-ca-sdr9x" Mar 12 18:29:45.905756 master-0 kubenswrapper[29097]: I0312 18:29:45.905722 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5fdd831-61b8-4134-a15b-41a2794d7794-host\") pod \"node-ca-sdr9x\" (UID: \"f5fdd831-61b8-4134-a15b-41a2794d7794\") " pod="openshift-image-registry/node-ca-sdr9x" Mar 12 18:29:45.905983 master-0 kubenswrapper[29097]: I0312 18:29:45.905951 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f5fdd831-61b8-4134-a15b-41a2794d7794-serviceca\") pod \"node-ca-sdr9x\" (UID: \"f5fdd831-61b8-4134-a15b-41a2794d7794\") " pod="openshift-image-registry/node-ca-sdr9x" Mar 12 18:29:45.907178 master-0 kubenswrapper[29097]: I0312 18:29:45.907145 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f5fdd831-61b8-4134-a15b-41a2794d7794-serviceca\") pod \"node-ca-sdr9x\" (UID: \"f5fdd831-61b8-4134-a15b-41a2794d7794\") " pod="openshift-image-registry/node-ca-sdr9x" Mar 12 18:29:45.907777 master-0 kubenswrapper[29097]: I0312 18:29:45.907746 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5fdd831-61b8-4134-a15b-41a2794d7794-host\") pod \"node-ca-sdr9x\" (UID: \"f5fdd831-61b8-4134-a15b-41a2794d7794\") " pod="openshift-image-registry/node-ca-sdr9x" Mar 12 
18:29:45.923149 master-0 kubenswrapper[29097]: I0312 18:29:45.923097 29097 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 12 18:29:45.933599 master-0 kubenswrapper[29097]: I0312 18:29:45.929476 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnsdm\" (UniqueName: \"kubernetes.io/projected/f5fdd831-61b8-4134-a15b-41a2794d7794-kube-api-access-tnsdm\") pod \"node-ca-sdr9x\" (UID: \"f5fdd831-61b8-4134-a15b-41a2794d7794\") " pod="openshift-image-registry/node-ca-sdr9x" Mar 12 18:29:46.062273 master-0 kubenswrapper[29097]: I0312 18:29:46.062215 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-sdr9x" Mar 12 18:29:46.079308 master-0 kubenswrapper[29097]: W0312 18:29:46.079257 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5fdd831_61b8_4134_a15b_41a2794d7794.slice/crio-5287926d0ccb1f50ba46bace605868d2b4f7ad0db028e98b5902ca770efa7622 WatchSource:0}: Error finding container 5287926d0ccb1f50ba46bace605868d2b4f7ad0db028e98b5902ca770efa7622: Status 404 returned error can't find the container with id 5287926d0ccb1f50ba46bace605868d2b4f7ad0db028e98b5902ca770efa7622 Mar 12 18:29:46.081188 master-0 kubenswrapper[29097]: I0312 18:29:46.081159 29097 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 18:29:46.137341 master-0 kubenswrapper[29097]: I0312 18:29:46.137273 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-sdr9x" event={"ID":"f5fdd831-61b8-4134-a15b-41a2794d7794","Type":"ContainerStarted","Data":"5287926d0ccb1f50ba46bace605868d2b4f7ad0db028e98b5902ca770efa7622"} Mar 12 18:29:46.247639 master-0 kubenswrapper[29097]: I0312 
18:29:46.247277 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:46.247639 master-0 kubenswrapper[29097]: I0312 18:29:46.247416 29097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 18:29:46.271716 master-0 kubenswrapper[29097]: I0312 18:29:46.271657 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hx8q8" Mar 12 18:29:49.199120 master-0 kubenswrapper[29097]: I0312 18:29:49.199073 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-sdr9x" event={"ID":"f5fdd831-61b8-4134-a15b-41a2794d7794","Type":"ContainerStarted","Data":"beed67895d6f2cc93e8c0990e7d6d69b04ee8c2fd0751a2f27e5af8648ae044e"} Mar 12 18:29:55.617501 master-0 kubenswrapper[29097]: I0312 18:29:55.617429 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 18:29:55.641205 master-0 kubenswrapper[29097]: I0312 18:29:55.641098 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-sdr9x" podStartSLOduration=8.437565782 podStartE2EDuration="10.641072522s" podCreationTimestamp="2026-03-12 18:29:45 +0000 UTC" firstStartedPulling="2026-03-12 18:29:46.081096266 +0000 UTC m=+25.635076383" lastFinishedPulling="2026-03-12 18:29:48.284603026 +0000 UTC m=+27.838583123" observedRunningTime="2026-03-12 18:29:49.218160903 +0000 UTC m=+28.772141040" watchObservedRunningTime="2026-03-12 18:29:55.641072522 +0000 UTC m=+35.195052639" Mar 12 18:29:57.879729 master-0 kubenswrapper[29097]: I0312 18:29:57.879690 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-59b47fdff8-c7z2l"] Mar 12 18:29:57.880390 master-0 kubenswrapper[29097]: I0312 18:29:57.880372 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-59b47fdff8-c7z2l" Mar 12 18:29:57.881723 master-0 kubenswrapper[29097]: I0312 18:29:57.881701 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 12 18:29:57.882055 master-0 kubenswrapper[29097]: I0312 18:29:57.882031 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-cjzzq" Mar 12 18:29:57.889849 master-0 kubenswrapper[29097]: I0312 18:29:57.889796 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 18:29:57.890051 master-0 kubenswrapper[29097]: E0312 18:29:57.889978 29097 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 18:29:57.890051 master-0 kubenswrapper[29097]: E0312 18:29:57.890023 29097 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 18:29:57.890146 master-0 kubenswrapper[29097]: E0312 18:29:57.890095 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access podName:4cb73c69-af16-4565-bdb5-aeae9dcfb423 nodeName:}" failed. No retries permitted until 2026-03-12 18:30:29.890076666 +0000 UTC m=+69.444056763 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "4cb73c69-af16-4565-bdb5-aeae9dcfb423") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 18:29:57.893777 master-0 kubenswrapper[29097]: I0312 18:29:57.893735 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-59b47fdff8-c7z2l"] Mar 12 18:29:57.991798 master-0 kubenswrapper[29097]: I0312 18:29:57.991746 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/033d99f9-d059-4be7-b091-e8696d6a735b-monitoring-plugin-cert\") pod \"monitoring-plugin-59b47fdff8-c7z2l\" (UID: \"033d99f9-d059-4be7-b091-e8696d6a735b\") " pod="openshift-monitoring/monitoring-plugin-59b47fdff8-c7z2l" Mar 12 18:29:58.093425 master-0 kubenswrapper[29097]: I0312 18:29:58.093366 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/033d99f9-d059-4be7-b091-e8696d6a735b-monitoring-plugin-cert\") pod \"monitoring-plugin-59b47fdff8-c7z2l\" (UID: \"033d99f9-d059-4be7-b091-e8696d6a735b\") " pod="openshift-monitoring/monitoring-plugin-59b47fdff8-c7z2l" Mar 12 18:29:58.097463 master-0 kubenswrapper[29097]: I0312 18:29:58.097412 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/033d99f9-d059-4be7-b091-e8696d6a735b-monitoring-plugin-cert\") pod \"monitoring-plugin-59b47fdff8-c7z2l\" (UID: \"033d99f9-d059-4be7-b091-e8696d6a735b\") " pod="openshift-monitoring/monitoring-plugin-59b47fdff8-c7z2l" Mar 12 18:29:58.225956 master-0 kubenswrapper[29097]: I0312 18:29:58.225848 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-59b47fdff8-c7z2l" Mar 12 18:29:58.643432 master-0 kubenswrapper[29097]: I0312 18:29:58.643374 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-59b47fdff8-c7z2l"] Mar 12 18:29:58.650285 master-0 kubenswrapper[29097]: W0312 18:29:58.650230 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod033d99f9_d059_4be7_b091_e8696d6a735b.slice/crio-f04740788a158d37913b3b4c158d270dccedefe7de87f51f8f60a2ec830bf9d7 WatchSource:0}: Error finding container f04740788a158d37913b3b4c158d270dccedefe7de87f51f8f60a2ec830bf9d7: Status 404 returned error can't find the container with id f04740788a158d37913b3b4c158d270dccedefe7de87f51f8f60a2ec830bf9d7 Mar 12 18:29:59.265682 master-0 kubenswrapper[29097]: I0312 18:29:59.265614 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-59b47fdff8-c7z2l" event={"ID":"033d99f9-d059-4be7-b091-e8696d6a735b","Type":"ContainerStarted","Data":"f04740788a158d37913b3b4c158d270dccedefe7de87f51f8f60a2ec830bf9d7"} Mar 12 18:30:00.274904 master-0 kubenswrapper[29097]: I0312 18:30:00.274802 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-59b47fdff8-c7z2l" event={"ID":"033d99f9-d059-4be7-b091-e8696d6a735b","Type":"ContainerStarted","Data":"8ed8af861a2485ade226fe4961b11be195a7db220edb8c9d355bab04e7349f8c"} Mar 12 18:30:00.275366 master-0 kubenswrapper[29097]: I0312 18:30:00.275188 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-59b47fdff8-c7z2l" Mar 12 18:30:00.281704 master-0 kubenswrapper[29097]: I0312 18:30:00.281628 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-59b47fdff8-c7z2l" Mar 12 18:30:00.299635 master-0 
kubenswrapper[29097]: I0312 18:30:00.297190 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-59b47fdff8-c7z2l" podStartSLOduration=1.993252853 podStartE2EDuration="3.297169782s" podCreationTimestamp="2026-03-12 18:29:57 +0000 UTC" firstStartedPulling="2026-03-12 18:29:58.652367493 +0000 UTC m=+38.206347590" lastFinishedPulling="2026-03-12 18:29:59.956284422 +0000 UTC m=+39.510264519" observedRunningTime="2026-03-12 18:30:00.294075014 +0000 UTC m=+39.848055121" watchObservedRunningTime="2026-03-12 18:30:00.297169782 +0000 UTC m=+39.851149909" Mar 12 18:30:08.838293 master-0 kubenswrapper[29097]: I0312 18:30:08.838243 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 12 18:30:08.839087 master-0 kubenswrapper[29097]: I0312 18:30:08.839060 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 18:30:08.843828 master-0 kubenswrapper[29097]: I0312 18:30:08.843768 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-h72pz" Mar 12 18:30:08.846660 master-0 kubenswrapper[29097]: I0312 18:30:08.846629 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 12 18:30:08.858445 master-0 kubenswrapper[29097]: I0312 18:30:08.858382 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 12 18:30:08.941119 master-0 kubenswrapper[29097]: I0312 18:30:08.941063 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b36c73bd-cc3f-4226-94b9-885671190812-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"b36c73bd-cc3f-4226-94b9-885671190812\") " pod="openshift-kube-apiserver/installer-4-master-0" 
Mar 12 18:30:08.941314 master-0 kubenswrapper[29097]: I0312 18:30:08.941170 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b36c73bd-cc3f-4226-94b9-885671190812-kube-api-access\") pod \"installer-4-master-0\" (UID: \"b36c73bd-cc3f-4226-94b9-885671190812\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 18:30:08.941314 master-0 kubenswrapper[29097]: I0312 18:30:08.941216 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b36c73bd-cc3f-4226-94b9-885671190812-var-lock\") pod \"installer-4-master-0\" (UID: \"b36c73bd-cc3f-4226-94b9-885671190812\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 18:30:09.043205 master-0 kubenswrapper[29097]: I0312 18:30:09.043143 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b36c73bd-cc3f-4226-94b9-885671190812-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"b36c73bd-cc3f-4226-94b9-885671190812\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 18:30:09.043477 master-0 kubenswrapper[29097]: I0312 18:30:09.043237 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b36c73bd-cc3f-4226-94b9-885671190812-kube-api-access\") pod \"installer-4-master-0\" (UID: \"b36c73bd-cc3f-4226-94b9-885671190812\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 18:30:09.043477 master-0 kubenswrapper[29097]: I0312 18:30:09.043300 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b36c73bd-cc3f-4226-94b9-885671190812-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"b36c73bd-cc3f-4226-94b9-885671190812\") " 
pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 18:30:09.043477 master-0 kubenswrapper[29097]: I0312 18:30:09.043315 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b36c73bd-cc3f-4226-94b9-885671190812-var-lock\") pod \"installer-4-master-0\" (UID: \"b36c73bd-cc3f-4226-94b9-885671190812\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 18:30:09.043641 master-0 kubenswrapper[29097]: I0312 18:30:09.043393 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b36c73bd-cc3f-4226-94b9-885671190812-var-lock\") pod \"installer-4-master-0\" (UID: \"b36c73bd-cc3f-4226-94b9-885671190812\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 18:30:09.063394 master-0 kubenswrapper[29097]: I0312 18:30:09.063341 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b36c73bd-cc3f-4226-94b9-885671190812-kube-api-access\") pod \"installer-4-master-0\" (UID: \"b36c73bd-cc3f-4226-94b9-885671190812\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 18:30:09.183924 master-0 kubenswrapper[29097]: I0312 18:30:09.183802 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 18:30:09.622615 master-0 kubenswrapper[29097]: I0312 18:30:09.622567 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 12 18:30:10.373342 master-0 kubenswrapper[29097]: I0312 18:30:10.373287 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"b36c73bd-cc3f-4226-94b9-885671190812","Type":"ContainerStarted","Data":"7b9575f2e8d027d3ffeac9cbb50156c489356591025e38aa9732337ec1535c1e"} Mar 12 18:30:10.374377 master-0 kubenswrapper[29097]: I0312 18:30:10.374340 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"b36c73bd-cc3f-4226-94b9-885671190812","Type":"ContainerStarted","Data":"52c37d59dc3839a5335c6805f064d8e5d91a018940cafef59643ee5f8b172b42"} Mar 12 18:30:10.402334 master-0 kubenswrapper[29097]: I0312 18:30:10.402199 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=2.402165707 podStartE2EDuration="2.402165707s" podCreationTimestamp="2026-03-12 18:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:30:10.400269909 +0000 UTC m=+49.954250086" watchObservedRunningTime="2026-03-12 18:30:10.402165707 +0000 UTC m=+49.956145844" Mar 12 18:30:20.727706 master-0 kubenswrapper[29097]: I0312 18:30:20.727650 29097 scope.go:117] "RemoveContainer" containerID="91fc9e27f58a493917f258512c2dfe1c4bf9d4efc52492f0f4d3e21237d1136f" Mar 12 18:30:20.754018 master-0 kubenswrapper[29097]: I0312 18:30:20.753812 29097 scope.go:117] "RemoveContainer" containerID="56c803b302b6c89542dd77ed04fecb43a59a8287926d38c4629dc8bd033d7a46" Mar 12 18:30:21.215361 master-0 kubenswrapper[29097]: E0312 18:30:21.215308 29097 
manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547 Mar 12 18:30:29.938175 master-0 kubenswrapper[29097]: I0312 18:30:29.938125 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 18:30:29.942556 master-0 kubenswrapper[29097]: I0312 18:30:29.942496 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 18:30:30.140448 master-0 kubenswrapper[29097]: I0312 18:30:30.140404 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access\") pod \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\" (UID: \"4cb73c69-af16-4565-bdb5-aeae9dcfb423\") " Mar 12 18:30:30.143677 master-0 kubenswrapper[29097]: I0312 18:30:30.143643 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4cb73c69-af16-4565-bdb5-aeae9dcfb423" (UID: 
"4cb73c69-af16-4565-bdb5-aeae9dcfb423"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:30:30.241879 master-0 kubenswrapper[29097]: I0312 18:30:30.241794 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cb73c69-af16-4565-bdb5-aeae9dcfb423-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 18:30:38.656623 master-0 kubenswrapper[29097]: I0312 18:30:38.656509 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"] Mar 12 18:30:38.657869 master-0 kubenswrapper[29097]: I0312 18:30:38.656852 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" podUID="1c016b1e-d47c-47d4-a15f-4160e7731c82" containerName="controller-manager" containerID="cri-o://969c0db5141344b4b23c0b0781fbe97e28190fa1a6362ee204322812779aa447" gracePeriod=30 Mar 12 18:30:38.666794 master-0 kubenswrapper[29097]: I0312 18:30:38.666726 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs"] Mar 12 18:30:38.667240 master-0 kubenswrapper[29097]: I0312 18:30:38.666997 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" podUID="be2da107-a419-423f-a657-44d681291f28" containerName="route-controller-manager" containerID="cri-o://0a30549c5c55928a6706fd117cdf0c1612cf9d20d7c9ff067345f1b6073c1c45" gracePeriod=30 Mar 12 18:30:38.844149 master-0 kubenswrapper[29097]: I0312 18:30:38.844003 29097 generic.go:334] "Generic (PLEG): container finished" podID="1c016b1e-d47c-47d4-a15f-4160e7731c82" containerID="969c0db5141344b4b23c0b0781fbe97e28190fa1a6362ee204322812779aa447" exitCode=0 Mar 12 18:30:38.844149 master-0 kubenswrapper[29097]: 
I0312 18:30:38.844092 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" event={"ID":"1c016b1e-d47c-47d4-a15f-4160e7731c82","Type":"ContainerDied","Data":"969c0db5141344b4b23c0b0781fbe97e28190fa1a6362ee204322812779aa447"} Mar 12 18:30:38.850923 master-0 kubenswrapper[29097]: I0312 18:30:38.850879 29097 generic.go:334] "Generic (PLEG): container finished" podID="be2da107-a419-423f-a657-44d681291f28" containerID="0a30549c5c55928a6706fd117cdf0c1612cf9d20d7c9ff067345f1b6073c1c45" exitCode=0 Mar 12 18:30:38.851013 master-0 kubenswrapper[29097]: I0312 18:30:38.850927 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" event={"ID":"be2da107-a419-423f-a657-44d681291f28","Type":"ContainerDied","Data":"0a30549c5c55928a6706fd117cdf0c1612cf9d20d7c9ff067345f1b6073c1c45"} Mar 12 18:30:39.302618 master-0 kubenswrapper[29097]: I0312 18:30:39.302288 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" Mar 12 18:30:39.390640 master-0 kubenswrapper[29097]: I0312 18:30:39.389956 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be2da107-a419-423f-a657-44d681291f28-serving-cert\") pod \"be2da107-a419-423f-a657-44d681291f28\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " Mar 12 18:30:39.390640 master-0 kubenswrapper[29097]: I0312 18:30:39.390010 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfp84\" (UniqueName: \"kubernetes.io/projected/be2da107-a419-423f-a657-44d681291f28-kube-api-access-jfp84\") pod \"be2da107-a419-423f-a657-44d681291f28\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " Mar 12 18:30:39.390640 master-0 kubenswrapper[29097]: I0312 18:30:39.390048 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-config\") pod \"be2da107-a419-423f-a657-44d681291f28\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " Mar 12 18:30:39.390640 master-0 kubenswrapper[29097]: I0312 18:30:39.390093 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-client-ca\") pod \"be2da107-a419-423f-a657-44d681291f28\" (UID: \"be2da107-a419-423f-a657-44d681291f28\") " Mar 12 18:30:39.390640 master-0 kubenswrapper[29097]: I0312 18:30:39.390635 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-client-ca" (OuterVolumeSpecName: "client-ca") pod "be2da107-a419-423f-a657-44d681291f28" (UID: "be2da107-a419-423f-a657-44d681291f28"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:30:39.391559 master-0 kubenswrapper[29097]: I0312 18:30:39.391420 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-config" (OuterVolumeSpecName: "config") pod "be2da107-a419-423f-a657-44d681291f28" (UID: "be2da107-a419-423f-a657-44d681291f28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:30:39.393621 master-0 kubenswrapper[29097]: I0312 18:30:39.393587 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be2da107-a419-423f-a657-44d681291f28-kube-api-access-jfp84" (OuterVolumeSpecName: "kube-api-access-jfp84") pod "be2da107-a419-423f-a657-44d681291f28" (UID: "be2da107-a419-423f-a657-44d681291f28"). InnerVolumeSpecName "kube-api-access-jfp84". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:30:39.394405 master-0 kubenswrapper[29097]: I0312 18:30:39.394323 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be2da107-a419-423f-a657-44d681291f28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "be2da107-a419-423f-a657-44d681291f28" (UID: "be2da107-a419-423f-a657-44d681291f28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:30:39.419897 master-0 kubenswrapper[29097]: I0312 18:30:39.419856 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" Mar 12 18:30:39.491453 master-0 kubenswrapper[29097]: I0312 18:30:39.491402 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clz8x\" (UniqueName: \"kubernetes.io/projected/1c016b1e-d47c-47d4-a15f-4160e7731c82-kube-api-access-clz8x\") pod \"1c016b1e-d47c-47d4-a15f-4160e7731c82\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " Mar 12 18:30:39.491734 master-0 kubenswrapper[29097]: I0312 18:30:39.491504 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-config\") pod \"1c016b1e-d47c-47d4-a15f-4160e7731c82\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " Mar 12 18:30:39.491734 master-0 kubenswrapper[29097]: I0312 18:30:39.491550 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-client-ca\") pod \"1c016b1e-d47c-47d4-a15f-4160e7731c82\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " Mar 12 18:30:39.491734 master-0 kubenswrapper[29097]: I0312 18:30:39.491574 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c016b1e-d47c-47d4-a15f-4160e7731c82-serving-cert\") pod \"1c016b1e-d47c-47d4-a15f-4160e7731c82\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " Mar 12 18:30:39.491734 master-0 kubenswrapper[29097]: I0312 18:30:39.491592 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-proxy-ca-bundles\") pod \"1c016b1e-d47c-47d4-a15f-4160e7731c82\" (UID: \"1c016b1e-d47c-47d4-a15f-4160e7731c82\") " Mar 12 18:30:39.491859 master-0 kubenswrapper[29097]: I0312 18:30:39.491754 
29097 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be2da107-a419-423f-a657-44d681291f28-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 18:30:39.491859 master-0 kubenswrapper[29097]: I0312 18:30:39.491767 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfp84\" (UniqueName: \"kubernetes.io/projected/be2da107-a419-423f-a657-44d681291f28-kube-api-access-jfp84\") on node \"master-0\" DevicePath \"\"" Mar 12 18:30:39.491859 master-0 kubenswrapper[29097]: I0312 18:30:39.491780 29097 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:30:39.491859 master-0 kubenswrapper[29097]: I0312 18:30:39.491790 29097 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be2da107-a419-423f-a657-44d681291f28-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 18:30:39.492673 master-0 kubenswrapper[29097]: I0312 18:30:39.492643 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-config" (OuterVolumeSpecName: "config") pod "1c016b1e-d47c-47d4-a15f-4160e7731c82" (UID: "1c016b1e-d47c-47d4-a15f-4160e7731c82"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:30:39.492857 master-0 kubenswrapper[29097]: I0312 18:30:39.492814 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-client-ca" (OuterVolumeSpecName: "client-ca") pod "1c016b1e-d47c-47d4-a15f-4160e7731c82" (UID: "1c016b1e-d47c-47d4-a15f-4160e7731c82"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:30:39.493455 master-0 kubenswrapper[29097]: I0312 18:30:39.493426 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1c016b1e-d47c-47d4-a15f-4160e7731c82" (UID: "1c016b1e-d47c-47d4-a15f-4160e7731c82"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:30:39.494820 master-0 kubenswrapper[29097]: I0312 18:30:39.494790 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c016b1e-d47c-47d4-a15f-4160e7731c82-kube-api-access-clz8x" (OuterVolumeSpecName: "kube-api-access-clz8x") pod "1c016b1e-d47c-47d4-a15f-4160e7731c82" (UID: "1c016b1e-d47c-47d4-a15f-4160e7731c82"). InnerVolumeSpecName "kube-api-access-clz8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:30:39.494820 master-0 kubenswrapper[29097]: I0312 18:30:39.494799 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c016b1e-d47c-47d4-a15f-4160e7731c82-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1c016b1e-d47c-47d4-a15f-4160e7731c82" (UID: "1c016b1e-d47c-47d4-a15f-4160e7731c82"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:30:39.592672 master-0 kubenswrapper[29097]: I0312 18:30:39.592626 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clz8x\" (UniqueName: \"kubernetes.io/projected/1c016b1e-d47c-47d4-a15f-4160e7731c82-kube-api-access-clz8x\") on node \"master-0\" DevicePath \"\"" Mar 12 18:30:39.592672 master-0 kubenswrapper[29097]: I0312 18:30:39.592683 29097 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:30:39.592913 master-0 kubenswrapper[29097]: I0312 18:30:39.592696 29097 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 18:30:39.592913 master-0 kubenswrapper[29097]: I0312 18:30:39.592704 29097 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c016b1e-d47c-47d4-a15f-4160e7731c82-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 18:30:39.592913 master-0 kubenswrapper[29097]: I0312 18:30:39.592713 29097 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c016b1e-d47c-47d4-a15f-4160e7731c82-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 12 18:30:39.860918 master-0 kubenswrapper[29097]: I0312 18:30:39.860791 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" event={"ID":"1c016b1e-d47c-47d4-a15f-4160e7731c82","Type":"ContainerDied","Data":"cbd303c81d220cd5ed6e63d675881c37da5cce6a8a3c62add5c0bf5721b5fd9f"} Mar 12 18:30:39.860918 master-0 kubenswrapper[29097]: I0312 18:30:39.860850 29097 scope.go:117] "RemoveContainer" 
containerID="969c0db5141344b4b23c0b0781fbe97e28190fa1a6362ee204322812779aa447" Mar 12 18:30:39.861551 master-0 kubenswrapper[29097]: I0312 18:30:39.860899 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd74f9776-2rmc9" Mar 12 18:30:39.862595 master-0 kubenswrapper[29097]: I0312 18:30:39.862535 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" event={"ID":"be2da107-a419-423f-a657-44d681291f28","Type":"ContainerDied","Data":"a91d85c0ce3e6a8b926dbcc4b0882326fc962f35e4dc2d7cda43fa3db3301729"} Mar 12 18:30:39.862671 master-0 kubenswrapper[29097]: I0312 18:30:39.862657 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs" Mar 12 18:30:39.876385 master-0 kubenswrapper[29097]: I0312 18:30:39.876351 29097 scope.go:117] "RemoveContainer" containerID="0a30549c5c55928a6706fd117cdf0c1612cf9d20d7c9ff067345f1b6073c1c45" Mar 12 18:30:39.932402 master-0 kubenswrapper[29097]: I0312 18:30:39.932369 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs"] Mar 12 18:30:39.937884 master-0 kubenswrapper[29097]: I0312 18:30:39.937275 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7db5456fb7-csszs"] Mar 12 18:30:39.958654 master-0 kubenswrapper[29097]: I0312 18:30:39.958582 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"] Mar 12 18:30:39.971295 master-0 kubenswrapper[29097]: I0312 18:30:39.971216 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7cd74f9776-2rmc9"] Mar 12 18:30:40.335256 master-0 
kubenswrapper[29097]: I0312 18:30:40.335196 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768884f5f7-7gqrl"] Mar 12 18:30:40.335745 master-0 kubenswrapper[29097]: E0312 18:30:40.335712 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c016b1e-d47c-47d4-a15f-4160e7731c82" containerName="controller-manager" Mar 12 18:30:40.335803 master-0 kubenswrapper[29097]: I0312 18:30:40.335745 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c016b1e-d47c-47d4-a15f-4160e7731c82" containerName="controller-manager" Mar 12 18:30:40.335803 master-0 kubenswrapper[29097]: E0312 18:30:40.335763 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be2da107-a419-423f-a657-44d681291f28" containerName="route-controller-manager" Mar 12 18:30:40.335803 master-0 kubenswrapper[29097]: I0312 18:30:40.335777 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="be2da107-a419-423f-a657-44d681291f28" containerName="route-controller-manager" Mar 12 18:30:40.336066 master-0 kubenswrapper[29097]: I0312 18:30:40.336036 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="be2da107-a419-423f-a657-44d681291f28" containerName="route-controller-manager" Mar 12 18:30:40.336111 master-0 kubenswrapper[29097]: I0312 18:30:40.336071 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c016b1e-d47c-47d4-a15f-4160e7731c82" containerName="controller-manager" Mar 12 18:30:40.336713 master-0 kubenswrapper[29097]: I0312 18:30:40.336683 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-768884f5f7-7gqrl" Mar 12 18:30:40.339437 master-0 kubenswrapper[29097]: I0312 18:30:40.339382 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm"] Mar 12 18:30:40.339721 master-0 kubenswrapper[29097]: I0312 18:30:40.339678 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-ldpgf" Mar 12 18:30:40.339843 master-0 kubenswrapper[29097]: I0312 18:30:40.339794 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 18:30:40.340026 master-0 kubenswrapper[29097]: I0312 18:30:40.340001 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 18:30:40.340076 master-0 kubenswrapper[29097]: I0312 18:30:40.340038 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 18:30:40.340110 master-0 kubenswrapper[29097]: I0312 18:30:40.340070 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 18:30:40.340427 master-0 kubenswrapper[29097]: I0312 18:30:40.340403 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 18:30:40.340733 master-0 kubenswrapper[29097]: I0312 18:30:40.340669 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm" Mar 12 18:30:40.343968 master-0 kubenswrapper[29097]: I0312 18:30:40.343936 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 18:30:40.344320 master-0 kubenswrapper[29097]: I0312 18:30:40.344292 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 18:30:40.344497 master-0 kubenswrapper[29097]: I0312 18:30:40.344461 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 18:30:40.344992 master-0 kubenswrapper[29097]: I0312 18:30:40.344957 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 18:30:40.347201 master-0 kubenswrapper[29097]: I0312 18:30:40.347171 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 18:30:40.350403 master-0 kubenswrapper[29097]: I0312 18:30:40.350314 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-6d4tt" Mar 12 18:30:40.352036 master-0 kubenswrapper[29097]: I0312 18:30:40.351992 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 18:30:40.362997 master-0 kubenswrapper[29097]: I0312 18:30:40.362937 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768884f5f7-7gqrl"] Mar 12 18:30:40.373265 master-0 kubenswrapper[29097]: I0312 18:30:40.373221 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm"] Mar 12 18:30:40.418281 master-0 kubenswrapper[29097]: I0312 18:30:40.418218 29097 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh5qx\" (UniqueName: \"kubernetes.io/projected/0f7b447b-b612-4e44-aa57-c6b1b8e960ed-kube-api-access-bh5qx\") pod \"route-controller-manager-768884f5f7-7gqrl\" (UID: \"0f7b447b-b612-4e44-aa57-c6b1b8e960ed\") " pod="openshift-route-controller-manager/route-controller-manager-768884f5f7-7gqrl" Mar 12 18:30:40.418499 master-0 kubenswrapper[29097]: I0312 18:30:40.418304 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f7b447b-b612-4e44-aa57-c6b1b8e960ed-client-ca\") pod \"route-controller-manager-768884f5f7-7gqrl\" (UID: \"0f7b447b-b612-4e44-aa57-c6b1b8e960ed\") " pod="openshift-route-controller-manager/route-controller-manager-768884f5f7-7gqrl" Mar 12 18:30:40.418499 master-0 kubenswrapper[29097]: I0312 18:30:40.418450 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f7b447b-b612-4e44-aa57-c6b1b8e960ed-config\") pod \"route-controller-manager-768884f5f7-7gqrl\" (UID: \"0f7b447b-b612-4e44-aa57-c6b1b8e960ed\") " pod="openshift-route-controller-manager/route-controller-manager-768884f5f7-7gqrl" Mar 12 18:30:40.418606 master-0 kubenswrapper[29097]: I0312 18:30:40.418585 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f7b447b-b612-4e44-aa57-c6b1b8e960ed-serving-cert\") pod \"route-controller-manager-768884f5f7-7gqrl\" (UID: \"0f7b447b-b612-4e44-aa57-c6b1b8e960ed\") " pod="openshift-route-controller-manager/route-controller-manager-768884f5f7-7gqrl" Mar 12 18:30:40.519659 master-0 kubenswrapper[29097]: I0312 18:30:40.519575 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49810b8a-ab96-4a84-94b0-fbde491b9f0d-serving-cert\") pod \"controller-manager-6cc5f5946c-dzbsm\" (UID: \"49810b8a-ab96-4a84-94b0-fbde491b9f0d\") " pod="openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm" Mar 12 18:30:40.519659 master-0 kubenswrapper[29097]: I0312 18:30:40.519647 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49810b8a-ab96-4a84-94b0-fbde491b9f0d-proxy-ca-bundles\") pod \"controller-manager-6cc5f5946c-dzbsm\" (UID: \"49810b8a-ab96-4a84-94b0-fbde491b9f0d\") " pod="openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm" Mar 12 18:30:40.519938 master-0 kubenswrapper[29097]: I0312 18:30:40.519713 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmjzg\" (UniqueName: \"kubernetes.io/projected/49810b8a-ab96-4a84-94b0-fbde491b9f0d-kube-api-access-nmjzg\") pod \"controller-manager-6cc5f5946c-dzbsm\" (UID: \"49810b8a-ab96-4a84-94b0-fbde491b9f0d\") " pod="openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm" Mar 12 18:30:40.519938 master-0 kubenswrapper[29097]: I0312 18:30:40.519787 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f7b447b-b612-4e44-aa57-c6b1b8e960ed-config\") pod \"route-controller-manager-768884f5f7-7gqrl\" (UID: \"0f7b447b-b612-4e44-aa57-c6b1b8e960ed\") " pod="openshift-route-controller-manager/route-controller-manager-768884f5f7-7gqrl" Mar 12 18:30:40.522709 master-0 kubenswrapper[29097]: I0312 18:30:40.522648 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f7b447b-b612-4e44-aa57-c6b1b8e960ed-config\") pod \"route-controller-manager-768884f5f7-7gqrl\" (UID: \"0f7b447b-b612-4e44-aa57-c6b1b8e960ed\") " 
pod="openshift-route-controller-manager/route-controller-manager-768884f5f7-7gqrl"
Mar 12 18:30:40.522887 master-0 kubenswrapper[29097]: I0312 18:30:40.522857 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49810b8a-ab96-4a84-94b0-fbde491b9f0d-config\") pod \"controller-manager-6cc5f5946c-dzbsm\" (UID: \"49810b8a-ab96-4a84-94b0-fbde491b9f0d\") " pod="openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm"
Mar 12 18:30:40.522933 master-0 kubenswrapper[29097]: I0312 18:30:40.522911 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49810b8a-ab96-4a84-94b0-fbde491b9f0d-client-ca\") pod \"controller-manager-6cc5f5946c-dzbsm\" (UID: \"49810b8a-ab96-4a84-94b0-fbde491b9f0d\") " pod="openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm"
Mar 12 18:30:40.523003 master-0 kubenswrapper[29097]: I0312 18:30:40.522982 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f7b447b-b612-4e44-aa57-c6b1b8e960ed-serving-cert\") pod \"route-controller-manager-768884f5f7-7gqrl\" (UID: \"0f7b447b-b612-4e44-aa57-c6b1b8e960ed\") " pod="openshift-route-controller-manager/route-controller-manager-768884f5f7-7gqrl"
Mar 12 18:30:40.523087 master-0 kubenswrapper[29097]: I0312 18:30:40.523063 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh5qx\" (UniqueName: \"kubernetes.io/projected/0f7b447b-b612-4e44-aa57-c6b1b8e960ed-kube-api-access-bh5qx\") pod \"route-controller-manager-768884f5f7-7gqrl\" (UID: \"0f7b447b-b612-4e44-aa57-c6b1b8e960ed\") " pod="openshift-route-controller-manager/route-controller-manager-768884f5f7-7gqrl"
Mar 12 18:30:40.523138 master-0 kubenswrapper[29097]: I0312 18:30:40.523117 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f7b447b-b612-4e44-aa57-c6b1b8e960ed-client-ca\") pod \"route-controller-manager-768884f5f7-7gqrl\" (UID: \"0f7b447b-b612-4e44-aa57-c6b1b8e960ed\") " pod="openshift-route-controller-manager/route-controller-manager-768884f5f7-7gqrl"
Mar 12 18:30:40.524443 master-0 kubenswrapper[29097]: I0312 18:30:40.524400 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f7b447b-b612-4e44-aa57-c6b1b8e960ed-client-ca\") pod \"route-controller-manager-768884f5f7-7gqrl\" (UID: \"0f7b447b-b612-4e44-aa57-c6b1b8e960ed\") " pod="openshift-route-controller-manager/route-controller-manager-768884f5f7-7gqrl"
Mar 12 18:30:40.529526 master-0 kubenswrapper[29097]: I0312 18:30:40.529481 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f7b447b-b612-4e44-aa57-c6b1b8e960ed-serving-cert\") pod \"route-controller-manager-768884f5f7-7gqrl\" (UID: \"0f7b447b-b612-4e44-aa57-c6b1b8e960ed\") " pod="openshift-route-controller-manager/route-controller-manager-768884f5f7-7gqrl"
Mar 12 18:30:40.541516 master-0 kubenswrapper[29097]: I0312 18:30:40.541445 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh5qx\" (UniqueName: \"kubernetes.io/projected/0f7b447b-b612-4e44-aa57-c6b1b8e960ed-kube-api-access-bh5qx\") pod \"route-controller-manager-768884f5f7-7gqrl\" (UID: \"0f7b447b-b612-4e44-aa57-c6b1b8e960ed\") " pod="openshift-route-controller-manager/route-controller-manager-768884f5f7-7gqrl"
Mar 12 18:30:40.624914 master-0 kubenswrapper[29097]: I0312 18:30:40.624758 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49810b8a-ab96-4a84-94b0-fbde491b9f0d-client-ca\") pod \"controller-manager-6cc5f5946c-dzbsm\" (UID: \"49810b8a-ab96-4a84-94b0-fbde491b9f0d\") " pod="openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm"
Mar 12 18:30:40.625115 master-0 kubenswrapper[29097]: I0312 18:30:40.624917 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49810b8a-ab96-4a84-94b0-fbde491b9f0d-serving-cert\") pod \"controller-manager-6cc5f5946c-dzbsm\" (UID: \"49810b8a-ab96-4a84-94b0-fbde491b9f0d\") " pod="openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm"
Mar 12 18:30:40.625115 master-0 kubenswrapper[29097]: I0312 18:30:40.624955 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49810b8a-ab96-4a84-94b0-fbde491b9f0d-proxy-ca-bundles\") pod \"controller-manager-6cc5f5946c-dzbsm\" (UID: \"49810b8a-ab96-4a84-94b0-fbde491b9f0d\") " pod="openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm"
Mar 12 18:30:40.625390 master-0 kubenswrapper[29097]: I0312 18:30:40.625285 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmjzg\" (UniqueName: \"kubernetes.io/projected/49810b8a-ab96-4a84-94b0-fbde491b9f0d-kube-api-access-nmjzg\") pod \"controller-manager-6cc5f5946c-dzbsm\" (UID: \"49810b8a-ab96-4a84-94b0-fbde491b9f0d\") " pod="openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm"
Mar 12 18:30:40.625603 master-0 kubenswrapper[29097]: I0312 18:30:40.625569 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49810b8a-ab96-4a84-94b0-fbde491b9f0d-config\") pod \"controller-manager-6cc5f5946c-dzbsm\" (UID: \"49810b8a-ab96-4a84-94b0-fbde491b9f0d\") " pod="openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm"
Mar 12 18:30:40.625677 master-0 kubenswrapper[29097]: I0312 18:30:40.625653 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49810b8a-ab96-4a84-94b0-fbde491b9f0d-client-ca\") pod \"controller-manager-6cc5f5946c-dzbsm\" (UID: \"49810b8a-ab96-4a84-94b0-fbde491b9f0d\") " pod="openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm"
Mar 12 18:30:40.626732 master-0 kubenswrapper[29097]: I0312 18:30:40.626698 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49810b8a-ab96-4a84-94b0-fbde491b9f0d-config\") pod \"controller-manager-6cc5f5946c-dzbsm\" (UID: \"49810b8a-ab96-4a84-94b0-fbde491b9f0d\") " pod="openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm"
Mar 12 18:30:40.628737 master-0 kubenswrapper[29097]: I0312 18:30:40.628696 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49810b8a-ab96-4a84-94b0-fbde491b9f0d-proxy-ca-bundles\") pod \"controller-manager-6cc5f5946c-dzbsm\" (UID: \"49810b8a-ab96-4a84-94b0-fbde491b9f0d\") " pod="openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm"
Mar 12 18:30:40.629479 master-0 kubenswrapper[29097]: I0312 18:30:40.629412 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49810b8a-ab96-4a84-94b0-fbde491b9f0d-serving-cert\") pod \"controller-manager-6cc5f5946c-dzbsm\" (UID: \"49810b8a-ab96-4a84-94b0-fbde491b9f0d\") " pod="openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm"
Mar 12 18:30:40.642296 master-0 kubenswrapper[29097]: I0312 18:30:40.642236 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmjzg\" (UniqueName: \"kubernetes.io/projected/49810b8a-ab96-4a84-94b0-fbde491b9f0d-kube-api-access-nmjzg\") pod \"controller-manager-6cc5f5946c-dzbsm\" (UID: \"49810b8a-ab96-4a84-94b0-fbde491b9f0d\") " pod="openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm"
Mar 12 18:30:40.656700 master-0 kubenswrapper[29097]: I0312 18:30:40.656647 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-768884f5f7-7gqrl"
Mar 12 18:30:40.673796 master-0 kubenswrapper[29097]: I0312 18:30:40.673711 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm"
Mar 12 18:30:40.748070 master-0 kubenswrapper[29097]: I0312 18:30:40.740634 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c016b1e-d47c-47d4-a15f-4160e7731c82" path="/var/lib/kubelet/pods/1c016b1e-d47c-47d4-a15f-4160e7731c82/volumes"
Mar 12 18:30:40.748070 master-0 kubenswrapper[29097]: I0312 18:30:40.741263 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be2da107-a419-423f-a657-44d681291f28" path="/var/lib/kubelet/pods/be2da107-a419-423f-a657-44d681291f28/volumes"
Mar 12 18:30:41.146014 master-0 kubenswrapper[29097]: I0312 18:30:41.145957 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm"]
Mar 12 18:30:41.156436 master-0 kubenswrapper[29097]: W0312 18:30:41.155751 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49810b8a_ab96_4a84_94b0_fbde491b9f0d.slice/crio-0a8fd9ee0da52f4d7a257d0f8ed92b480b2f2512bf3cc804983f46d5e3f56625 WatchSource:0}: Error finding container 0a8fd9ee0da52f4d7a257d0f8ed92b480b2f2512bf3cc804983f46d5e3f56625: Status 404 returned error can't find the container with id 0a8fd9ee0da52f4d7a257d0f8ed92b480b2f2512bf3cc804983f46d5e3f56625
Mar 12 18:30:41.207147 master-0 kubenswrapper[29097]: I0312 18:30:41.207095 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768884f5f7-7gqrl"]
Mar 12 18:30:41.222506 master-0 kubenswrapper[29097]: W0312 18:30:41.222459 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f7b447b_b612_4e44_aa57_c6b1b8e960ed.slice/crio-fd348f883d22ae96870cac3125057c8617a166bc6cd86c613ec6f885103c91c2 WatchSource:0}: Error finding container fd348f883d22ae96870cac3125057c8617a166bc6cd86c613ec6f885103c91c2: Status 404 returned error can't find the container with id fd348f883d22ae96870cac3125057c8617a166bc6cd86c613ec6f885103c91c2
Mar 12 18:30:41.880000 master-0 kubenswrapper[29097]: I0312 18:30:41.879935 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm" event={"ID":"49810b8a-ab96-4a84-94b0-fbde491b9f0d","Type":"ContainerStarted","Data":"5c84af038c7f5d76a5adb2143591ee1ff4a2236df868eaf0d4d4f8fffaff3906"}
Mar 12 18:30:41.880000 master-0 kubenswrapper[29097]: I0312 18:30:41.879998 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm" event={"ID":"49810b8a-ab96-4a84-94b0-fbde491b9f0d","Type":"ContainerStarted","Data":"0a8fd9ee0da52f4d7a257d0f8ed92b480b2f2512bf3cc804983f46d5e3f56625"}
Mar 12 18:30:41.880346 master-0 kubenswrapper[29097]: I0312 18:30:41.880264 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm"
Mar 12 18:30:41.881422 master-0 kubenswrapper[29097]: I0312 18:30:41.881379 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-768884f5f7-7gqrl" event={"ID":"0f7b447b-b612-4e44-aa57-c6b1b8e960ed","Type":"ContainerStarted","Data":"a16db4a4db3f8575f01339390d4c65b853b533ae962d589ba0dc86d6ba6b4616"}
Mar 12 18:30:41.881422 master-0 kubenswrapper[29097]: I0312 18:30:41.881421 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-768884f5f7-7gqrl" event={"ID":"0f7b447b-b612-4e44-aa57-c6b1b8e960ed","Type":"ContainerStarted","Data":"fd348f883d22ae96870cac3125057c8617a166bc6cd86c613ec6f885103c91c2"}
Mar 12 18:30:41.881595 master-0 kubenswrapper[29097]: I0312 18:30:41.881544 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-768884f5f7-7gqrl"
Mar 12 18:30:41.885781 master-0 kubenswrapper[29097]: I0312 18:30:41.885752 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-768884f5f7-7gqrl"
Mar 12 18:30:41.886644 master-0 kubenswrapper[29097]: I0312 18:30:41.886619 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm"
Mar 12 18:30:41.925875 master-0 kubenswrapper[29097]: I0312 18:30:41.925805 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cc5f5946c-dzbsm" podStartSLOduration=3.925783138 podStartE2EDuration="3.925783138s" podCreationTimestamp="2026-03-12 18:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:30:41.923809028 +0000 UTC m=+81.477789145" watchObservedRunningTime="2026-03-12 18:30:41.925783138 +0000 UTC m=+81.479763235"
Mar 12 18:30:41.976997 master-0 kubenswrapper[29097]: I0312 18:30:41.974909 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-768884f5f7-7gqrl" podStartSLOduration=3.974890034 podStartE2EDuration="3.974890034s" podCreationTimestamp="2026-03-12 18:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:30:41.973576681 +0000 UTC m=+81.527556768" watchObservedRunningTime="2026-03-12 18:30:41.974890034 +0000 UTC m=+81.528870131"
Mar 12 18:30:47.756481 master-0 kubenswrapper[29097]: I0312 18:30:47.756418 29097 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 12 18:30:47.759619 master-0 kubenswrapper[29097]: I0312 18:30:47.757262 29097 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 12 18:30:47.759619 master-0 kubenswrapper[29097]: I0312 18:30:47.757471 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:30:47.759619 master-0 kubenswrapper[29097]: I0312 18:30:47.757709 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver" containerID="cri-o://f0795f7815622793e2682cda3a2b19590293281fa98d6a395d40fd76c0da7491" gracePeriod=15
Mar 12 18:30:47.759619 master-0 kubenswrapper[29097]: I0312 18:30:47.757777 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://df94ca5ecee83a72fb47501f1f919eaf4028159e1d146ee2a47f3391c16e15fc" gracePeriod=15
Mar 12 18:30:47.759619 master-0 kubenswrapper[29097]: I0312 18:30:47.757827 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://0e37d83cdb0f462d8ef6bf2539d2a10c7d84ad143eb688508398e11bcd665ee1" gracePeriod=15
Mar 12 18:30:47.759619 master-0 kubenswrapper[29097]: I0312 18:30:47.757856 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints" containerID="cri-o://01eeb3272c240b68931788fdbe353a89d0b5911f3532e9e6d3891f108d5a47fc" gracePeriod=15
Mar 12 18:30:47.759619 master-0 kubenswrapper[29097]: I0312 18:30:47.758353 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-syncer" containerID="cri-o://1f3cc25e5fa149ff612b530c2777559e47a1f93ddab8c630d27f6a1e07dff2d3" gracePeriod=15
Mar 12 18:30:47.759619 master-0 kubenswrapper[29097]: I0312 18:30:47.759158 29097 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 12 18:30:47.759619 master-0 kubenswrapper[29097]: E0312 18:30:47.759495 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints"
Mar 12 18:30:47.759619 master-0 kubenswrapper[29097]: I0312 18:30:47.759530 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints"
Mar 12 18:30:47.759619 master-0 kubenswrapper[29097]: E0312 18:30:47.759547 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="setup"
Mar 12 18:30:47.759619 master-0 kubenswrapper[29097]: I0312 18:30:47.759557 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="setup"
Mar 12 18:30:47.759619 master-0 kubenswrapper[29097]: E0312 18:30:47.759574 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-insecure-readyz"
Mar 12 18:30:47.759619 master-0 kubenswrapper[29097]: I0312 18:30:47.759582 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-insecure-readyz"
Mar 12 18:30:47.759619 master-0 kubenswrapper[29097]: E0312 18:30:47.759596 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-syncer"
Mar 12 18:30:47.759619 master-0 kubenswrapper[29097]: I0312 18:30:47.759604 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-syncer"
Mar 12 18:30:47.759619 master-0 kubenswrapper[29097]: E0312 18:30:47.759622 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-regeneration-controller"
Mar 12 18:30:47.759619 master-0 kubenswrapper[29097]: I0312 18:30:47.759630 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-regeneration-controller"
Mar 12 18:30:47.760974 master-0 kubenswrapper[29097]: E0312 18:30:47.759666 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver"
Mar 12 18:30:47.760974 master-0 kubenswrapper[29097]: I0312 18:30:47.759675 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver"
Mar 12 18:30:47.760974 master-0 kubenswrapper[29097]: I0312 18:30:47.759988 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-regeneration-controller"
Mar 12 18:30:47.760974 master-0 kubenswrapper[29097]: I0312 18:30:47.760018 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints"
Mar 12 18:30:47.760974 master-0 kubenswrapper[29097]: I0312 18:30:47.760035 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="setup"
Mar 12 18:30:47.760974 master-0 kubenswrapper[29097]: I0312 18:30:47.760050 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-insecure-readyz"
Mar 12 18:30:47.760974 master-0 kubenswrapper[29097]: I0312 18:30:47.760065 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver"
Mar 12 18:30:47.760974 master-0 kubenswrapper[29097]: I0312 18:30:47.760082 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-syncer"
Mar 12 18:30:47.825640 master-0 kubenswrapper[29097]: I0312 18:30:47.825584 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:30:47.825640 master-0 kubenswrapper[29097]: I0312 18:30:47.825655 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:30:47.825896 master-0 kubenswrapper[29097]: I0312 18:30:47.825732 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:30:47.825896 master-0 kubenswrapper[29097]: I0312 18:30:47.825766 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:30:47.825896 master-0 kubenswrapper[29097]: I0312 18:30:47.825792 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:30:47.826005 master-0 kubenswrapper[29097]: I0312 18:30:47.825904 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:30:47.826005 master-0 kubenswrapper[29097]: I0312 18:30:47.825966 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:30:47.826123 master-0 kubenswrapper[29097]: I0312 18:30:47.826079 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:30:47.863617 master-0 kubenswrapper[29097]: I0312 18:30:47.862992 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 12 18:30:47.925251 master-0 kubenswrapper[29097]: I0312 18:30:47.925203 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-cert-syncer/0.log"
Mar 12 18:30:47.926012 master-0 kubenswrapper[29097]: I0312 18:30:47.925967 29097 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="01eeb3272c240b68931788fdbe353a89d0b5911f3532e9e6d3891f108d5a47fc" exitCode=0
Mar 12 18:30:47.926012 master-0 kubenswrapper[29097]: I0312 18:30:47.926005 29097 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="df94ca5ecee83a72fb47501f1f919eaf4028159e1d146ee2a47f3391c16e15fc" exitCode=0
Mar 12 18:30:47.926103 master-0 kubenswrapper[29097]: I0312 18:30:47.926014 29097 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="0e37d83cdb0f462d8ef6bf2539d2a10c7d84ad143eb688508398e11bcd665ee1" exitCode=0
Mar 12 18:30:47.926103 master-0 kubenswrapper[29097]: I0312 18:30:47.926025 29097 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="1f3cc25e5fa149ff612b530c2777559e47a1f93ddab8c630d27f6a1e07dff2d3" exitCode=2
Mar 12 18:30:47.927940 master-0 kubenswrapper[29097]: I0312 18:30:47.927906 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:30:47.928014 master-0 kubenswrapper[29097]: I0312 18:30:47.927973 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:30:47.928014 master-0 kubenswrapper[29097]: I0312 18:30:47.927984 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:30:47.928014 master-0 kubenswrapper[29097]: I0312 18:30:47.928000 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:30:47.928137 master-0 kubenswrapper[29097]: I0312 18:30:47.928070 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:30:47.928137 master-0 kubenswrapper[29097]: I0312 18:30:47.928112 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:30:47.928137 master-0 kubenswrapper[29097]: I0312 18:30:47.928120 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:30:47.928222 master-0 kubenswrapper[29097]: I0312 18:30:47.928130 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:30:47.928222 master-0 kubenswrapper[29097]: I0312 18:30:47.928143 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:30:47.928222 master-0 kubenswrapper[29097]: I0312 18:30:47.928185 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:30:47.928222 master-0 kubenswrapper[29097]: I0312 18:30:47.928213 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:30:47.928343 master-0 kubenswrapper[29097]: I0312 18:30:47.928244 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:30:47.928343 master-0 kubenswrapper[29097]: I0312 18:30:47.928286 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:30:47.928343 master-0 kubenswrapper[29097]: I0312 18:30:47.928334 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:30:47.928432 master-0 kubenswrapper[29097]: I0312 18:30:47.928248 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:30:47.928432 master-0 kubenswrapper[29097]: I0312 18:30:47.928219 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:30:48.161258 master-0 kubenswrapper[29097]: I0312 18:30:48.161182 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:30:48.189676 master-0 kubenswrapper[29097]: W0312 18:30:48.189616 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a18cac8a90d6913a6a0391d805cddc9.slice/crio-650b881109a4cd179fbe199d84712ebf19615db4cf9f3a626f232d0f498449d4 WatchSource:0}: Error finding container 650b881109a4cd179fbe199d84712ebf19615db4cf9f3a626f232d0f498449d4: Status 404 returned error can't find the container with id 650b881109a4cd179fbe199d84712ebf19615db4cf9f3a626f232d0f498449d4
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: I0312 18:30:48.634166 29097 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]log ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]api-openshift-apiserver-available ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]api-openshift-oauth-apiserver-available ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]informer-sync ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/openshift.io-api-request-count-filter ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/openshift.io-startkubeinformers ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/priority-and-fairness-config-consumer ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/priority-and-fairness-filter ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/start-apiextensions-informers ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/start-apiextensions-controllers ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/crd-informer-synced ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/start-system-namespaces-controller ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/start-cluster-authentication-info-controller ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/start-legacy-token-tracking-controller ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/start-service-ip-repair-controllers ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/rbac/bootstrap-roles ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/priority-and-fairness-config-producer ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/bootstrap-controller ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/start-kube-aggregator-informers ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/apiservice-status-local-available-controller ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/apiservice-status-remote-available-controller ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/apiservice-registration-controller ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/apiservice-wait-for-first-sync ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/apiservice-discovery-controller ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/kube-apiserver-autoregistration ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]autoregister-completion ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/apiservice-openapi-controller ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [+]poststarthook/apiservice-openapiv3-controller ok
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: [-]shutdown failed: reason withheld
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: readyz check failed
Mar 12 18:30:48.634234 master-0 kubenswrapper[29097]: I0312 18:30:48.634223 29097 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 18:30:48.934153 master-0 kubenswrapper[29097]: I0312 18:30:48.934074 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"3a18cac8a90d6913a6a0391d805cddc9","Type":"ContainerStarted","Data":"ecfeb1293f9a1b97113e934bdb74cec11a8b89f956150a594f150e3f38e9f909"}
Mar 12 18:30:48.934153 master-0 kubenswrapper[29097]: I0312 18:30:48.934129 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"3a18cac8a90d6913a6a0391d805cddc9","Type":"ContainerStarted","Data":"650b881109a4cd179fbe199d84712ebf19615db4cf9f3a626f232d0f498449d4"}
Mar 12 18:30:53.944096 master-0 kubenswrapper[29097]: I0312 18:30:53.943986 29097 status_manager.go:851] "Failed to get status for pod" podUID="3a18cac8a90d6913a6a0391d805cddc9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 18:30:53.948749 master-0 kubenswrapper[29097]: I0312 18:30:53.948693 29097 status_manager.go:851] "Failed to get status for pod" podUID="3a18cac8a90d6913a6a0391d805cddc9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 18:30:53.984008 master-0 kubenswrapper[29097]: I0312 18:30:53.983958 29097 generic.go:334] "Generic (PLEG): container finished" podID="b36c73bd-cc3f-4226-94b9-885671190812" containerID="7b9575f2e8d027d3ffeac9cbb50156c489356591025e38aa9732337ec1535c1e" exitCode=0
Mar 12 18:30:53.984296 master-0 kubenswrapper[29097]: I0312 18:30:53.984014 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"b36c73bd-cc3f-4226-94b9-885671190812","Type":"ContainerDied","Data":"7b9575f2e8d027d3ffeac9cbb50156c489356591025e38aa9732337ec1535c1e"}
Mar 12 18:30:53.985260 master-0 kubenswrapper[29097]: I0312 18:30:53.985206 29097 status_manager.go:851] "Failed to get status for pod" podUID="b36c73bd-cc3f-4226-94b9-885671190812" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 18:30:53.986034 master-0 kubenswrapper[29097]: I0312 18:30:53.985976 29097 status_manager.go:851] "Failed to get status for pod" podUID="3a18cac8a90d6913a6a0391d805cddc9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 18:30:54.134486 master-0 kubenswrapper[29097]: E0312 18:30:54.134390 29097 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 18:30:54.135390 master-0 kubenswrapper[29097]: E0312 18:30:54.135333 29097 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 18:30:54.136318 master-0 kubenswrapper[29097]: E0312
18:30:54.136267 29097 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:30:54.137225 master-0 kubenswrapper[29097]: E0312 18:30:54.137179 29097 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:30:54.138152 master-0 kubenswrapper[29097]: E0312 18:30:54.138096 29097 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:30:54.138388 master-0 kubenswrapper[29097]: I0312 18:30:54.138355 29097 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 12 18:30:54.139446 master-0 kubenswrapper[29097]: E0312 18:30:54.139385 29097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 12 18:30:54.341143 master-0 kubenswrapper[29097]: E0312 18:30:54.341077 29097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 12 18:30:54.743631 master-0 kubenswrapper[29097]: E0312 18:30:54.743424 29097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 12 18:30:55.544756 master-0 kubenswrapper[29097]: E0312 18:30:55.544698 29097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 12 18:30:55.623899 master-0 kubenswrapper[29097]: I0312 18:30:55.623854 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 18:30:55.624606 master-0 kubenswrapper[29097]: I0312 18:30:55.624554 29097 status_manager.go:851] "Failed to get status for pod" podUID="b36c73bd-cc3f-4226-94b9-885671190812" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:30:55.624939 master-0 kubenswrapper[29097]: I0312 18:30:55.624912 29097 status_manager.go:851] "Failed to get status for pod" podUID="3a18cac8a90d6913a6a0391d805cddc9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:30:55.630804 master-0 kubenswrapper[29097]: I0312 18:30:55.630755 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-cert-syncer/0.log" Mar 12 18:30:55.631704 master-0 kubenswrapper[29097]: I0312 18:30:55.631679 29097 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:30:55.632927 master-0 kubenswrapper[29097]: I0312 18:30:55.632875 29097 status_manager.go:851] "Failed to get status for pod" podUID="077dd10388b9e3e48a07382126e86621" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:30:55.633489 master-0 kubenswrapper[29097]: I0312 18:30:55.633447 29097 status_manager.go:851] "Failed to get status for pod" podUID="3a18cac8a90d6913a6a0391d805cddc9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:30:55.634236 master-0 kubenswrapper[29097]: I0312 18:30:55.634155 29097 status_manager.go:851] "Failed to get status for pod" podUID="b36c73bd-cc3f-4226-94b9-885671190812" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:30:55.654086 master-0 kubenswrapper[29097]: I0312 18:30:55.653968 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"077dd10388b9e3e48a07382126e86621\" (UID: \"077dd10388b9e3e48a07382126e86621\") " Mar 12 18:30:55.654163 master-0 kubenswrapper[29097]: I0312 18:30:55.654087 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir" (OuterVolumeSpecName: "audit-dir") pod 
"077dd10388b9e3e48a07382126e86621" (UID: "077dd10388b9e3e48a07382126e86621"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:30:55.654224 master-0 kubenswrapper[29097]: I0312 18:30:55.654189 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"077dd10388b9e3e48a07382126e86621\" (UID: \"077dd10388b9e3e48a07382126e86621\") " Mar 12 18:30:55.654224 master-0 kubenswrapper[29097]: I0312 18:30:55.654214 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "077dd10388b9e3e48a07382126e86621" (UID: "077dd10388b9e3e48a07382126e86621"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:30:55.654351 master-0 kubenswrapper[29097]: I0312 18:30:55.654318 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b36c73bd-cc3f-4226-94b9-885671190812-kube-api-access\") pod \"b36c73bd-cc3f-4226-94b9-885671190812\" (UID: \"b36c73bd-cc3f-4226-94b9-885671190812\") " Mar 12 18:30:55.654422 master-0 kubenswrapper[29097]: I0312 18:30:55.654390 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"077dd10388b9e3e48a07382126e86621\" (UID: \"077dd10388b9e3e48a07382126e86621\") " Mar 12 18:30:55.654482 master-0 kubenswrapper[29097]: I0312 18:30:55.654418 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b36c73bd-cc3f-4226-94b9-885671190812-var-lock\") pod \"b36c73bd-cc3f-4226-94b9-885671190812\" (UID: 
\"b36c73bd-cc3f-4226-94b9-885671190812\") " Mar 12 18:30:55.654482 master-0 kubenswrapper[29097]: I0312 18:30:55.654463 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "077dd10388b9e3e48a07382126e86621" (UID: "077dd10388b9e3e48a07382126e86621"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:30:55.654579 master-0 kubenswrapper[29097]: I0312 18:30:55.654485 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b36c73bd-cc3f-4226-94b9-885671190812-kubelet-dir\") pod \"b36c73bd-cc3f-4226-94b9-885671190812\" (UID: \"b36c73bd-cc3f-4226-94b9-885671190812\") " Mar 12 18:30:55.654579 master-0 kubenswrapper[29097]: I0312 18:30:55.654534 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b36c73bd-cc3f-4226-94b9-885671190812-var-lock" (OuterVolumeSpecName: "var-lock") pod "b36c73bd-cc3f-4226-94b9-885671190812" (UID: "b36c73bd-cc3f-4226-94b9-885671190812"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:30:55.654664 master-0 kubenswrapper[29097]: I0312 18:30:55.654631 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b36c73bd-cc3f-4226-94b9-885671190812-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b36c73bd-cc3f-4226-94b9-885671190812" (UID: "b36c73bd-cc3f-4226-94b9-885671190812"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:30:55.654966 master-0 kubenswrapper[29097]: I0312 18:30:55.654930 29097 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:30:55.654966 master-0 kubenswrapper[29097]: I0312 18:30:55.654962 29097 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b36c73bd-cc3f-4226-94b9-885671190812-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 18:30:55.655032 master-0 kubenswrapper[29097]: I0312 18:30:55.654977 29097 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b36c73bd-cc3f-4226-94b9-885671190812-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:30:55.655032 master-0 kubenswrapper[29097]: I0312 18:30:55.654994 29097 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:30:55.655032 master-0 kubenswrapper[29097]: I0312 18:30:55.655009 29097 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:30:55.656804 master-0 kubenswrapper[29097]: I0312 18:30:55.656768 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b36c73bd-cc3f-4226-94b9-885671190812-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b36c73bd-cc3f-4226-94b9-885671190812" (UID: "b36c73bd-cc3f-4226-94b9-885671190812"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:30:55.756396 master-0 kubenswrapper[29097]: I0312 18:30:55.756324 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b36c73bd-cc3f-4226-94b9-885671190812-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 18:30:56.000656 master-0 kubenswrapper[29097]: I0312 18:30:56.000482 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 18:30:56.000656 master-0 kubenswrapper[29097]: I0312 18:30:56.000502 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"b36c73bd-cc3f-4226-94b9-885671190812","Type":"ContainerDied","Data":"52c37d59dc3839a5335c6805f064d8e5d91a018940cafef59643ee5f8b172b42"} Mar 12 18:30:56.000656 master-0 kubenswrapper[29097]: I0312 18:30:56.000591 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52c37d59dc3839a5335c6805f064d8e5d91a018940cafef59643ee5f8b172b42" Mar 12 18:30:56.006006 master-0 kubenswrapper[29097]: I0312 18:30:56.005947 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-cert-syncer/0.log" Mar 12 18:30:56.007146 master-0 kubenswrapper[29097]: I0312 18:30:56.007103 29097 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="f0795f7815622793e2682cda3a2b19590293281fa98d6a395d40fd76c0da7491" exitCode=0 Mar 12 18:30:56.007225 master-0 kubenswrapper[29097]: I0312 18:30:56.007180 29097 scope.go:117] "RemoveContainer" containerID="01eeb3272c240b68931788fdbe353a89d0b5911f3532e9e6d3891f108d5a47fc" Mar 12 18:30:56.007438 master-0 kubenswrapper[29097]: I0312 18:30:56.007383 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:30:56.016385 master-0 kubenswrapper[29097]: I0312 18:30:56.016306 29097 status_manager.go:851] "Failed to get status for pod" podUID="077dd10388b9e3e48a07382126e86621" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:30:56.017191 master-0 kubenswrapper[29097]: I0312 18:30:56.017131 29097 status_manager.go:851] "Failed to get status for pod" podUID="b36c73bd-cc3f-4226-94b9-885671190812" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:30:56.017919 master-0 kubenswrapper[29097]: I0312 18:30:56.017828 29097 status_manager.go:851] "Failed to get status for pod" podUID="3a18cac8a90d6913a6a0391d805cddc9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:30:56.032867 master-0 kubenswrapper[29097]: I0312 18:30:56.032279 29097 scope.go:117] "RemoveContainer" containerID="df94ca5ecee83a72fb47501f1f919eaf4028159e1d146ee2a47f3391c16e15fc" Mar 12 18:30:56.034402 master-0 kubenswrapper[29097]: I0312 18:30:56.034150 29097 status_manager.go:851] "Failed to get status for pod" podUID="077dd10388b9e3e48a07382126e86621" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:30:56.034987 master-0 
kubenswrapper[29097]: I0312 18:30:56.034930 29097 status_manager.go:851] "Failed to get status for pod" podUID="b36c73bd-cc3f-4226-94b9-885671190812" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:30:56.035591 master-0 kubenswrapper[29097]: I0312 18:30:56.035498 29097 status_manager.go:851] "Failed to get status for pod" podUID="3a18cac8a90d6913a6a0391d805cddc9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:30:56.058698 master-0 kubenswrapper[29097]: I0312 18:30:56.058621 29097 scope.go:117] "RemoveContainer" containerID="0e37d83cdb0f462d8ef6bf2539d2a10c7d84ad143eb688508398e11bcd665ee1" Mar 12 18:30:56.077329 master-0 kubenswrapper[29097]: I0312 18:30:56.077284 29097 scope.go:117] "RemoveContainer" containerID="1f3cc25e5fa149ff612b530c2777559e47a1f93ddab8c630d27f6a1e07dff2d3" Mar 12 18:30:56.107183 master-0 kubenswrapper[29097]: I0312 18:30:56.107143 29097 scope.go:117] "RemoveContainer" containerID="f0795f7815622793e2682cda3a2b19590293281fa98d6a395d40fd76c0da7491" Mar 12 18:30:56.121248 master-0 kubenswrapper[29097]: I0312 18:30:56.121212 29097 scope.go:117] "RemoveContainer" containerID="577df93b02b1bb8864460514509f0380f90a4c7a14a9f501133a0fe6e0eb58f9" Mar 12 18:30:56.133129 master-0 kubenswrapper[29097]: I0312 18:30:56.133090 29097 scope.go:117] "RemoveContainer" containerID="01eeb3272c240b68931788fdbe353a89d0b5911f3532e9e6d3891f108d5a47fc" Mar 12 18:30:56.133720 master-0 kubenswrapper[29097]: E0312 18:30:56.133655 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"01eeb3272c240b68931788fdbe353a89d0b5911f3532e9e6d3891f108d5a47fc\": container with ID starting with 01eeb3272c240b68931788fdbe353a89d0b5911f3532e9e6d3891f108d5a47fc not found: ID does not exist" containerID="01eeb3272c240b68931788fdbe353a89d0b5911f3532e9e6d3891f108d5a47fc" Mar 12 18:30:56.133929 master-0 kubenswrapper[29097]: I0312 18:30:56.133795 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01eeb3272c240b68931788fdbe353a89d0b5911f3532e9e6d3891f108d5a47fc"} err="failed to get container status \"01eeb3272c240b68931788fdbe353a89d0b5911f3532e9e6d3891f108d5a47fc\": rpc error: code = NotFound desc = could not find container \"01eeb3272c240b68931788fdbe353a89d0b5911f3532e9e6d3891f108d5a47fc\": container with ID starting with 01eeb3272c240b68931788fdbe353a89d0b5911f3532e9e6d3891f108d5a47fc not found: ID does not exist" Mar 12 18:30:56.133982 master-0 kubenswrapper[29097]: I0312 18:30:56.133945 29097 scope.go:117] "RemoveContainer" containerID="df94ca5ecee83a72fb47501f1f919eaf4028159e1d146ee2a47f3391c16e15fc" Mar 12 18:30:56.134416 master-0 kubenswrapper[29097]: E0312 18:30:56.134358 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df94ca5ecee83a72fb47501f1f919eaf4028159e1d146ee2a47f3391c16e15fc\": container with ID starting with df94ca5ecee83a72fb47501f1f919eaf4028159e1d146ee2a47f3391c16e15fc not found: ID does not exist" containerID="df94ca5ecee83a72fb47501f1f919eaf4028159e1d146ee2a47f3391c16e15fc" Mar 12 18:30:56.134465 master-0 kubenswrapper[29097]: I0312 18:30:56.134422 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df94ca5ecee83a72fb47501f1f919eaf4028159e1d146ee2a47f3391c16e15fc"} err="failed to get container status \"df94ca5ecee83a72fb47501f1f919eaf4028159e1d146ee2a47f3391c16e15fc\": rpc error: code = NotFound desc = could not find container 
\"df94ca5ecee83a72fb47501f1f919eaf4028159e1d146ee2a47f3391c16e15fc\": container with ID starting with df94ca5ecee83a72fb47501f1f919eaf4028159e1d146ee2a47f3391c16e15fc not found: ID does not exist" Mar 12 18:30:56.134465 master-0 kubenswrapper[29097]: I0312 18:30:56.134445 29097 scope.go:117] "RemoveContainer" containerID="0e37d83cdb0f462d8ef6bf2539d2a10c7d84ad143eb688508398e11bcd665ee1" Mar 12 18:30:56.134833 master-0 kubenswrapper[29097]: E0312 18:30:56.134799 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e37d83cdb0f462d8ef6bf2539d2a10c7d84ad143eb688508398e11bcd665ee1\": container with ID starting with 0e37d83cdb0f462d8ef6bf2539d2a10c7d84ad143eb688508398e11bcd665ee1 not found: ID does not exist" containerID="0e37d83cdb0f462d8ef6bf2539d2a10c7d84ad143eb688508398e11bcd665ee1" Mar 12 18:30:56.134833 master-0 kubenswrapper[29097]: I0312 18:30:56.134826 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e37d83cdb0f462d8ef6bf2539d2a10c7d84ad143eb688508398e11bcd665ee1"} err="failed to get container status \"0e37d83cdb0f462d8ef6bf2539d2a10c7d84ad143eb688508398e11bcd665ee1\": rpc error: code = NotFound desc = could not find container \"0e37d83cdb0f462d8ef6bf2539d2a10c7d84ad143eb688508398e11bcd665ee1\": container with ID starting with 0e37d83cdb0f462d8ef6bf2539d2a10c7d84ad143eb688508398e11bcd665ee1 not found: ID does not exist" Mar 12 18:30:56.134927 master-0 kubenswrapper[29097]: I0312 18:30:56.134840 29097 scope.go:117] "RemoveContainer" containerID="1f3cc25e5fa149ff612b530c2777559e47a1f93ddab8c630d27f6a1e07dff2d3" Mar 12 18:30:56.135270 master-0 kubenswrapper[29097]: E0312 18:30:56.135235 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f3cc25e5fa149ff612b530c2777559e47a1f93ddab8c630d27f6a1e07dff2d3\": container with ID starting with 
1f3cc25e5fa149ff612b530c2777559e47a1f93ddab8c630d27f6a1e07dff2d3 not found: ID does not exist" containerID="1f3cc25e5fa149ff612b530c2777559e47a1f93ddab8c630d27f6a1e07dff2d3" Mar 12 18:30:56.135270 master-0 kubenswrapper[29097]: I0312 18:30:56.135257 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f3cc25e5fa149ff612b530c2777559e47a1f93ddab8c630d27f6a1e07dff2d3"} err="failed to get container status \"1f3cc25e5fa149ff612b530c2777559e47a1f93ddab8c630d27f6a1e07dff2d3\": rpc error: code = NotFound desc = could not find container \"1f3cc25e5fa149ff612b530c2777559e47a1f93ddab8c630d27f6a1e07dff2d3\": container with ID starting with 1f3cc25e5fa149ff612b530c2777559e47a1f93ddab8c630d27f6a1e07dff2d3 not found: ID does not exist" Mar 12 18:30:56.135366 master-0 kubenswrapper[29097]: I0312 18:30:56.135274 29097 scope.go:117] "RemoveContainer" containerID="f0795f7815622793e2682cda3a2b19590293281fa98d6a395d40fd76c0da7491" Mar 12 18:30:56.135631 master-0 kubenswrapper[29097]: E0312 18:30:56.135593 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0795f7815622793e2682cda3a2b19590293281fa98d6a395d40fd76c0da7491\": container with ID starting with f0795f7815622793e2682cda3a2b19590293281fa98d6a395d40fd76c0da7491 not found: ID does not exist" containerID="f0795f7815622793e2682cda3a2b19590293281fa98d6a395d40fd76c0da7491" Mar 12 18:30:56.135690 master-0 kubenswrapper[29097]: I0312 18:30:56.135642 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0795f7815622793e2682cda3a2b19590293281fa98d6a395d40fd76c0da7491"} err="failed to get container status \"f0795f7815622793e2682cda3a2b19590293281fa98d6a395d40fd76c0da7491\": rpc error: code = NotFound desc = could not find container \"f0795f7815622793e2682cda3a2b19590293281fa98d6a395d40fd76c0da7491\": container with ID starting with 
f0795f7815622793e2682cda3a2b19590293281fa98d6a395d40fd76c0da7491 not found: ID does not exist" Mar 12 18:30:56.135690 master-0 kubenswrapper[29097]: I0312 18:30:56.135682 29097 scope.go:117] "RemoveContainer" containerID="577df93b02b1bb8864460514509f0380f90a4c7a14a9f501133a0fe6e0eb58f9" Mar 12 18:30:56.135990 master-0 kubenswrapper[29097]: E0312 18:30:56.135945 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"577df93b02b1bb8864460514509f0380f90a4c7a14a9f501133a0fe6e0eb58f9\": container with ID starting with 577df93b02b1bb8864460514509f0380f90a4c7a14a9f501133a0fe6e0eb58f9 not found: ID does not exist" containerID="577df93b02b1bb8864460514509f0380f90a4c7a14a9f501133a0fe6e0eb58f9" Mar 12 18:30:56.135990 master-0 kubenswrapper[29097]: I0312 18:30:56.135984 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"577df93b02b1bb8864460514509f0380f90a4c7a14a9f501133a0fe6e0eb58f9"} err="failed to get container status \"577df93b02b1bb8864460514509f0380f90a4c7a14a9f501133a0fe6e0eb58f9\": rpc error: code = NotFound desc = could not find container \"577df93b02b1bb8864460514509f0380f90a4c7a14a9f501133a0fe6e0eb58f9\": container with ID starting with 577df93b02b1bb8864460514509f0380f90a4c7a14a9f501133a0fe6e0eb58f9 not found: ID does not exist" Mar 12 18:30:56.728815 master-0 kubenswrapper[29097]: I0312 18:30:56.728733 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="077dd10388b9e3e48a07382126e86621" path="/var/lib/kubelet/pods/077dd10388b9e3e48a07382126e86621/volumes" Mar 12 18:30:57.146328 master-0 kubenswrapper[29097]: E0312 18:30:57.146260 29097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 12 
18:30:58.202256 master-0 kubenswrapper[29097]: E0312 18:30:58.201972 29097 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189c2b8f676f947d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:3a18cac8a90d6913a6a0391d805cddc9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:30:48.192816253 +0000 UTC m=+87.746796350,LastTimestamp:2026-03-12 18:30:48.192816253 +0000 UTC m=+87.746796350,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:31:00.348216 master-0 kubenswrapper[29097]: E0312 18:31:00.348139 29097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 12 18:31:00.720827 master-0 kubenswrapper[29097]: I0312 18:31:00.720630 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:31:00.728977 master-0 kubenswrapper[29097]: I0312 18:31:00.728901 29097 status_manager.go:851] "Failed to get status for pod" podUID="b36c73bd-cc3f-4226-94b9-885671190812" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:31:00.730192 master-0 kubenswrapper[29097]: I0312 18:31:00.730095 29097 status_manager.go:851] "Failed to get status for pod" podUID="3a18cac8a90d6913a6a0391d805cddc9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:31:00.731149 master-0 kubenswrapper[29097]: I0312 18:31:00.731085 29097 status_manager.go:851] "Failed to get status for pod" podUID="b36c73bd-cc3f-4226-94b9-885671190812" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:31:00.732015 master-0 kubenswrapper[29097]: I0312 18:31:00.731938 29097 status_manager.go:851] "Failed to get status for pod" podUID="3a18cac8a90d6913a6a0391d805cddc9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:31:00.764110 master-0 kubenswrapper[29097]: I0312 18:31:00.764048 29097 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
podUID="877648b8-a003-4a97-84e7-a774f71cc43c" Mar 12 18:31:00.764110 master-0 kubenswrapper[29097]: I0312 18:31:00.764100 29097 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="877648b8-a003-4a97-84e7-a774f71cc43c" Mar 12 18:31:00.765704 master-0 kubenswrapper[29097]: E0312 18:31:00.765610 29097 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:31:00.766558 master-0 kubenswrapper[29097]: I0312 18:31:00.766488 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:31:00.801619 master-0 kubenswrapper[29097]: W0312 18:31:00.801454 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48512e02022680c9d90092634f0fc146.slice/crio-b94854e3d6158f5ccf14dc0bb2f9091ea42ad250daf571f5e1c1823ce39eb16e WatchSource:0}: Error finding container b94854e3d6158f5ccf14dc0bb2f9091ea42ad250daf571f5e1c1823ce39eb16e: Status 404 returned error can't find the container with id b94854e3d6158f5ccf14dc0bb2f9091ea42ad250daf571f5e1c1823ce39eb16e Mar 12 18:31:01.059807 master-0 kubenswrapper[29097]: I0312 18:31:01.059710 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"b94854e3d6158f5ccf14dc0bb2f9091ea42ad250daf571f5e1c1823ce39eb16e"} Mar 12 18:31:02.069686 master-0 kubenswrapper[29097]: I0312 18:31:02.069618 29097 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="fc961e393edb556b8cbec8ab17e54863b3322ecccf97284873c2a4ada171ec46" 
exitCode=0 Mar 12 18:31:02.070557 master-0 kubenswrapper[29097]: I0312 18:31:02.069714 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerDied","Data":"fc961e393edb556b8cbec8ab17e54863b3322ecccf97284873c2a4ada171ec46"} Mar 12 18:31:02.071178 master-0 kubenswrapper[29097]: I0312 18:31:02.071134 29097 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="877648b8-a003-4a97-84e7-a774f71cc43c" Mar 12 18:31:02.071372 master-0 kubenswrapper[29097]: I0312 18:31:02.071343 29097 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="877648b8-a003-4a97-84e7-a774f71cc43c" Mar 12 18:31:02.072259 master-0 kubenswrapper[29097]: E0312 18:31:02.072201 29097 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:31:02.073148 master-0 kubenswrapper[29097]: I0312 18:31:02.072331 29097 status_manager.go:851] "Failed to get status for pod" podUID="b36c73bd-cc3f-4226-94b9-885671190812" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:31:02.073646 master-0 kubenswrapper[29097]: I0312 18:31:02.073331 29097 status_manager.go:851] "Failed to get status for pod" podUID="3a18cac8a90d6913a6a0391d805cddc9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 
192.168.32.10:6443: connect: connection refused" Mar 12 18:31:02.074737 master-0 kubenswrapper[29097]: I0312 18:31:02.074686 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_49835aec35bdc5feca0d7cf24779b8da/kube-controller-manager/0.log" Mar 12 18:31:02.074737 master-0 kubenswrapper[29097]: I0312 18:31:02.074731 29097 generic.go:334] "Generic (PLEG): container finished" podID="49835aec35bdc5feca0d7cf24779b8da" containerID="996ba8fb061459a072fc0dac62d85e8970305954c92459dbfc764353eca2dc98" exitCode=1 Mar 12 18:31:02.074935 master-0 kubenswrapper[29097]: I0312 18:31:02.074757 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"49835aec35bdc5feca0d7cf24779b8da","Type":"ContainerDied","Data":"996ba8fb061459a072fc0dac62d85e8970305954c92459dbfc764353eca2dc98"} Mar 12 18:31:02.075227 master-0 kubenswrapper[29097]: I0312 18:31:02.075182 29097 scope.go:117] "RemoveContainer" containerID="996ba8fb061459a072fc0dac62d85e8970305954c92459dbfc764353eca2dc98" Mar 12 18:31:02.076718 master-0 kubenswrapper[29097]: I0312 18:31:02.076260 29097 status_manager.go:851] "Failed to get status for pod" podUID="b36c73bd-cc3f-4226-94b9-885671190812" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:31:02.077433 master-0 kubenswrapper[29097]: I0312 18:31:02.077359 29097 status_manager.go:851] "Failed to get status for pod" podUID="3a18cac8a90d6913a6a0391d805cddc9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 
18:31:02.078548 master-0 kubenswrapper[29097]: I0312 18:31:02.078453 29097 status_manager.go:851] "Failed to get status for pod" podUID="49835aec35bdc5feca0d7cf24779b8da" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:31:03.093262 master-0 kubenswrapper[29097]: I0312 18:31:03.093218 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_49835aec35bdc5feca0d7cf24779b8da/kube-controller-manager/0.log" Mar 12 18:31:03.093842 master-0 kubenswrapper[29097]: I0312 18:31:03.093299 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"49835aec35bdc5feca0d7cf24779b8da","Type":"ContainerStarted","Data":"62dd3567f8e7ab9a6f6b7c22887f90bf8f9e191c219728fb147a35e29d0e7d8e"} Mar 12 18:31:03.101487 master-0 kubenswrapper[29097]: I0312 18:31:03.101455 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"e7ce24ab0b9f229716c1a4b1a3fb2207e524cf67808c81bc6c33b4728c2eace5"} Mar 12 18:31:03.101792 master-0 kubenswrapper[29097]: I0312 18:31:03.101771 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"bf20bfc4d81330e9293a7f1910215e8cf740a716550d0740753717eae110e681"} Mar 12 18:31:03.101931 master-0 kubenswrapper[29097]: I0312 18:31:03.101914 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"c9bc9e878bed3b90772ae5003d1b2ca4996292289bf7d1cc533124053934668a"} Mar 12 18:31:03.242876 master-0 kubenswrapper[29097]: I0312 18:31:03.242818 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:31:03.720720 master-0 kubenswrapper[29097]: I0312 18:31:03.720511 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:31:03.720934 master-0 kubenswrapper[29097]: I0312 18:31:03.720759 29097 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 12 18:31:03.720934 master-0 kubenswrapper[29097]: I0312 18:31:03.720810 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 12 18:31:04.113529 master-0 kubenswrapper[29097]: I0312 18:31:04.113440 29097 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="877648b8-a003-4a97-84e7-a774f71cc43c" Mar 12 18:31:04.113529 master-0 kubenswrapper[29097]: I0312 18:31:04.113504 29097 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="877648b8-a003-4a97-84e7-a774f71cc43c" Mar 12 18:31:04.114075 master-0 kubenswrapper[29097]: I0312 18:31:04.113675 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"40c14f8ed3988e0ad05d3439bdeed6169aee5127ecaba660a91f16c1877529e6"} Mar 12 18:31:04.114075 master-0 kubenswrapper[29097]: I0312 18:31:04.113754 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"d875071495799394d0af1424c90324a61a48bf14e6ee5a465b2d494f41651511"} Mar 12 18:31:04.114075 master-0 kubenswrapper[29097]: I0312 18:31:04.113777 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:31:05.767290 master-0 kubenswrapper[29097]: I0312 18:31:05.767214 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:31:05.767290 master-0 kubenswrapper[29097]: I0312 18:31:05.767279 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:31:05.776355 master-0 kubenswrapper[29097]: I0312 18:31:05.776289 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:31:09.606044 master-0 kubenswrapper[29097]: I0312 18:31:09.605897 29097 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:31:10.776318 master-0 kubenswrapper[29097]: I0312 18:31:10.776072 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:31:11.036128 master-0 kubenswrapper[29097]: I0312 18:31:11.035906 29097 request.go:700] Waited for 1.004912219s, retries: 1, retry-after: 5s - retry-reason: 503 - request: 
GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/secrets?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dprometheus-operator-kube-rbac-proxy-config&resourceVersion=15490&timeout=53m58s&timeoutSeconds=3238&watch=true Mar 12 18:31:11.165857 master-0 kubenswrapper[29097]: I0312 18:31:11.165742 29097 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="877648b8-a003-4a97-84e7-a774f71cc43c" Mar 12 18:31:11.165857 master-0 kubenswrapper[29097]: I0312 18:31:11.165807 29097 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="877648b8-a003-4a97-84e7-a774f71cc43c" Mar 12 18:31:11.299708 master-0 kubenswrapper[29097]: I0312 18:31:11.299600 29097 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="48512e02022680c9d90092634f0fc146" podUID="823203d3-b506-4b3a-87a5-37f5c25eac99" Mar 12 18:31:12.175428 master-0 kubenswrapper[29097]: I0312 18:31:12.175340 29097 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="877648b8-a003-4a97-84e7-a774f71cc43c" Mar 12 18:31:12.175428 master-0 kubenswrapper[29097]: I0312 18:31:12.175391 29097 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="877648b8-a003-4a97-84e7-a774f71cc43c" Mar 12 18:31:12.180146 master-0 kubenswrapper[29097]: I0312 18:31:12.180075 29097 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="48512e02022680c9d90092634f0fc146" podUID="823203d3-b506-4b3a-87a5-37f5c25eac99" Mar 12 18:31:12.390321 master-0 kubenswrapper[29097]: I0312 18:31:12.390243 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 12 
18:31:12.719249 master-0 kubenswrapper[29097]: I0312 18:31:12.719169 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 12 18:31:12.765820 master-0 kubenswrapper[29097]: I0312 18:31:12.765771 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 12 18:31:12.968109 master-0 kubenswrapper[29097]: I0312 18:31:12.968013 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 12 18:31:13.199788 master-0 kubenswrapper[29097]: I0312 18:31:13.199692 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 12 18:31:13.244202 master-0 kubenswrapper[29097]: I0312 18:31:13.244133 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 12 18:31:13.720706 master-0 kubenswrapper[29097]: I0312 18:31:13.720645 29097 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 12 18:31:13.721121 master-0 kubenswrapper[29097]: I0312 18:31:13.721063 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 12 18:31:13.900759 master-0 kubenswrapper[29097]: I0312 18:31:13.900676 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 12 
18:31:13.940500 master-0 kubenswrapper[29097]: I0312 18:31:13.940391 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 12 18:31:13.966166 master-0 kubenswrapper[29097]: I0312 18:31:13.966102 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 12 18:31:14.067282 master-0 kubenswrapper[29097]: I0312 18:31:14.067204 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 12 18:31:14.337914 master-0 kubenswrapper[29097]: I0312 18:31:14.337661 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 12 18:31:14.759839 master-0 kubenswrapper[29097]: I0312 18:31:14.759591 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 12 18:31:15.405626 master-0 kubenswrapper[29097]: I0312 18:31:15.405425 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 12 18:31:15.420829 master-0 kubenswrapper[29097]: I0312 18:31:15.420772 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 12 18:31:15.424742 master-0 kubenswrapper[29097]: I0312 18:31:15.424687 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-ldpgf" Mar 12 18:31:15.492978 master-0 kubenswrapper[29097]: I0312 18:31:15.492906 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 12 18:31:15.510080 master-0 kubenswrapper[29097]: I0312 18:31:15.510028 29097 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 12 18:31:15.534334 master-0 kubenswrapper[29097]: I0312 18:31:15.534282 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 12 18:31:15.609948 master-0 kubenswrapper[29097]: I0312 18:31:15.609902 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 12 18:31:15.632005 master-0 kubenswrapper[29097]: I0312 18:31:15.631944 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 12 18:31:15.635101 master-0 kubenswrapper[29097]: I0312 18:31:15.635057 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 12 18:31:15.666613 master-0 kubenswrapper[29097]: I0312 18:31:15.666344 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 12 18:31:15.806850 master-0 kubenswrapper[29097]: I0312 18:31:15.806791 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 12 18:31:15.865680 master-0 kubenswrapper[29097]: I0312 18:31:15.865629 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 12 18:31:15.896944 master-0 kubenswrapper[29097]: I0312 18:31:15.896838 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 12 18:31:15.917827 master-0 kubenswrapper[29097]: I0312 18:31:15.917690 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 12 18:31:15.922967 master-0 kubenswrapper[29097]: I0312 
18:31:15.922927 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 12 18:31:15.925408 master-0 kubenswrapper[29097]: I0312 18:31:15.925375 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 12 18:31:15.932249 master-0 kubenswrapper[29097]: I0312 18:31:15.932213 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 18:31:16.000045 master-0 kubenswrapper[29097]: I0312 18:31:15.999763 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 12 18:31:16.101589 master-0 kubenswrapper[29097]: I0312 18:31:16.101473 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 12 18:31:16.113613 master-0 kubenswrapper[29097]: I0312 18:31:16.113550 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 18:31:16.118734 master-0 kubenswrapper[29097]: I0312 18:31:16.118692 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 12 18:31:16.183107 master-0 kubenswrapper[29097]: I0312 18:31:16.183011 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 12 18:31:16.242085 master-0 kubenswrapper[29097]: I0312 18:31:16.242044 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 12 18:31:16.280681 master-0 kubenswrapper[29097]: I0312 18:31:16.280617 29097 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"config" Mar 12 18:31:16.312165 master-0 kubenswrapper[29097]: I0312 18:31:16.312120 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 12 18:31:16.395947 master-0 kubenswrapper[29097]: I0312 18:31:16.395875 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 12 18:31:16.397396 master-0 kubenswrapper[29097]: I0312 18:31:16.397346 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 12 18:31:16.671844 master-0 kubenswrapper[29097]: I0312 18:31:16.671576 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 12 18:31:16.676050 master-0 kubenswrapper[29097]: I0312 18:31:16.671908 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 12 18:31:16.676050 master-0 kubenswrapper[29097]: I0312 18:31:16.672061 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 12 18:31:16.676050 master-0 kubenswrapper[29097]: I0312 18:31:16.674405 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 12 18:31:16.727872 master-0 kubenswrapper[29097]: I0312 18:31:16.727817 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 12 18:31:16.754758 master-0 kubenswrapper[29097]: I0312 18:31:16.750720 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-b88ct" Mar 12 18:31:16.759587 master-0 kubenswrapper[29097]: I0312 18:31:16.759544 29097 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-insights"/"operator-dockercfg-q4h9m" Mar 12 18:31:16.772078 master-0 kubenswrapper[29097]: I0312 18:31:16.772041 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-ffmfp" Mar 12 18:31:16.815079 master-0 kubenswrapper[29097]: I0312 18:31:16.815015 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 12 18:31:16.974371 master-0 kubenswrapper[29097]: I0312 18:31:16.974261 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 18:31:17.091156 master-0 kubenswrapper[29097]: I0312 18:31:17.091083 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 12 18:31:17.119639 master-0 kubenswrapper[29097]: I0312 18:31:17.119594 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 12 18:31:17.127998 master-0 kubenswrapper[29097]: I0312 18:31:17.127958 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 12 18:31:17.148284 master-0 kubenswrapper[29097]: I0312 18:31:17.148234 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-cc9lz" Mar 12 18:31:17.177396 master-0 kubenswrapper[29097]: I0312 18:31:17.177328 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-sjkl7" Mar 12 18:31:17.187020 master-0 kubenswrapper[29097]: I0312 18:31:17.186977 29097 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 12 18:31:17.214396 master-0 kubenswrapper[29097]: I0312 18:31:17.214346 29097 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 12 18:31:17.230071 master-0 kubenswrapper[29097]: I0312 18:31:17.229969 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 12 18:31:17.265233 master-0 kubenswrapper[29097]: I0312 18:31:17.265186 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 12 18:31:17.291862 master-0 kubenswrapper[29097]: I0312 18:31:17.291815 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-hm292" Mar 12 18:31:17.333444 master-0 kubenswrapper[29097]: I0312 18:31:17.333375 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 12 18:31:17.390956 master-0 kubenswrapper[29097]: I0312 18:31:17.390812 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 12 18:31:17.391672 master-0 kubenswrapper[29097]: I0312 18:31:17.391590 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 12 18:31:17.406547 master-0 kubenswrapper[29097]: I0312 18:31:17.406476 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 12 18:31:17.489271 master-0 kubenswrapper[29097]: I0312 18:31:17.489005 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-w4bj7" Mar 12 18:31:17.495344 master-0 kubenswrapper[29097]: I0312 18:31:17.495156 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 12 18:31:17.528120 master-0 kubenswrapper[29097]: I0312 18:31:17.528056 29097 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 18:31:17.529861 master-0 kubenswrapper[29097]: I0312 18:31:17.529810 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 18:31:17.538994 master-0 kubenswrapper[29097]: I0312 18:31:17.538931 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 12 18:31:17.608393 master-0 kubenswrapper[29097]: I0312 18:31:17.608324 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 12 18:31:17.677211 master-0 kubenswrapper[29097]: I0312 18:31:17.677134 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 12 18:31:17.707943 master-0 kubenswrapper[29097]: I0312 18:31:17.707896 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-72pgx" Mar 12 18:31:17.726001 master-0 kubenswrapper[29097]: I0312 18:31:17.725962 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 12 18:31:17.805600 master-0 kubenswrapper[29097]: I0312 18:31:17.805551 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 12 18:31:17.851346 master-0 kubenswrapper[29097]: I0312 18:31:17.851295 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 12 18:31:17.886109 master-0 kubenswrapper[29097]: I0312 18:31:17.886060 29097 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 12 18:31:17.960029 master-0 kubenswrapper[29097]: I0312 18:31:17.959948 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 18:31:17.976942 master-0 kubenswrapper[29097]: I0312 18:31:17.976894 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 12 18:31:17.986318 master-0 kubenswrapper[29097]: I0312 18:31:17.986290 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 12 18:31:18.019244 master-0 kubenswrapper[29097]: I0312 18:31:18.019188 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 12 18:31:18.034045 master-0 kubenswrapper[29097]: I0312 18:31:18.033994 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 12 18:31:18.071945 master-0 kubenswrapper[29097]: I0312 18:31:18.071831 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-djr46" Mar 12 18:31:18.103786 master-0 kubenswrapper[29097]: I0312 18:31:18.103710 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 12 18:31:18.124455 master-0 kubenswrapper[29097]: I0312 18:31:18.124395 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 12 18:31:18.142682 master-0 kubenswrapper[29097]: I0312 18:31:18.142626 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 12 18:31:18.178631 master-0 kubenswrapper[29097]: I0312 18:31:18.178421 29097 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 18:31:18.200482 master-0 kubenswrapper[29097]: I0312 18:31:18.200292 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 12 18:31:18.220253 master-0 kubenswrapper[29097]: I0312 18:31:18.220225 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 12 18:31:18.293658 master-0 kubenswrapper[29097]: I0312 18:31:18.293582 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 12 18:31:18.326697 master-0 kubenswrapper[29097]: I0312 18:31:18.326606 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 12 18:31:18.333114 master-0 kubenswrapper[29097]: I0312 18:31:18.333077 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-vtdm7" Mar 12 18:31:18.338851 master-0 kubenswrapper[29097]: I0312 18:31:18.338804 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 12 18:31:18.356304 master-0 kubenswrapper[29097]: I0312 18:31:18.356259 29097 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 12 18:31:18.366360 master-0 kubenswrapper[29097]: I0312 18:31:18.366333 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 12 18:31:18.375636 master-0 kubenswrapper[29097]: I0312 18:31:18.375617 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 12 18:31:18.379338 master-0 
kubenswrapper[29097]: I0312 18:31:18.379323 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 12 18:31:18.386363 master-0 kubenswrapper[29097]: I0312 18:31:18.386302 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 12 18:31:18.391304 master-0 kubenswrapper[29097]: I0312 18:31:18.391257 29097 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 12 18:31:18.411145 master-0 kubenswrapper[29097]: I0312 18:31:18.411097 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-h5f5n" Mar 12 18:31:18.413593 master-0 kubenswrapper[29097]: I0312 18:31:18.413567 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 12 18:31:18.423603 master-0 kubenswrapper[29097]: I0312 18:31:18.423566 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 12 18:31:18.497543 master-0 kubenswrapper[29097]: I0312 18:31:18.497465 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 12 18:31:18.504840 master-0 kubenswrapper[29097]: I0312 18:31:18.504791 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 12 18:31:18.535819 master-0 kubenswrapper[29097]: I0312 18:31:18.535758 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 12 18:31:18.541475 master-0 kubenswrapper[29097]: I0312 18:31:18.541426 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 12 
18:31:18.569444 master-0 kubenswrapper[29097]: I0312 18:31:18.569360 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-6d4tt" Mar 12 18:31:18.595152 master-0 kubenswrapper[29097]: I0312 18:31:18.595032 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 12 18:31:18.605288 master-0 kubenswrapper[29097]: I0312 18:31:18.605196 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 12 18:31:18.606058 master-0 kubenswrapper[29097]: I0312 18:31:18.605985 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 12 18:31:18.672688 master-0 kubenswrapper[29097]: I0312 18:31:18.672313 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 18:31:18.685381 master-0 kubenswrapper[29097]: I0312 18:31:18.685297 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 12 18:31:18.712373 master-0 kubenswrapper[29097]: I0312 18:31:18.712248 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-mbtwq" Mar 12 18:31:18.773877 master-0 kubenswrapper[29097]: I0312 18:31:18.773814 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 12 18:31:18.806499 master-0 kubenswrapper[29097]: I0312 18:31:18.806392 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 12 18:31:18.814680 master-0 kubenswrapper[29097]: I0312 18:31:18.814609 29097 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-hsjbb" Mar 12 18:31:18.843174 master-0 kubenswrapper[29097]: I0312 18:31:18.843074 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 12 18:31:18.888586 master-0 kubenswrapper[29097]: I0312 18:31:18.888410 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 12 18:31:18.964922 master-0 kubenswrapper[29097]: I0312 18:31:18.964836 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 12 18:31:18.968692 master-0 kubenswrapper[29097]: I0312 18:31:18.968628 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 12 18:31:18.988806 master-0 kubenswrapper[29097]: I0312 18:31:18.988708 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 12 18:31:18.999980 master-0 kubenswrapper[29097]: I0312 18:31:18.999855 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 12 18:31:19.000176 master-0 kubenswrapper[29097]: I0312 18:31:18.999983 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 12 18:31:19.038067 master-0 kubenswrapper[29097]: I0312 18:31:19.037989 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 12 18:31:19.071921 master-0 kubenswrapper[29097]: I0312 18:31:19.071823 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 12 18:31:19.106248 master-0 kubenswrapper[29097]: I0312 18:31:19.106178 29097 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-7k9rb" Mar 12 18:31:19.109637 master-0 kubenswrapper[29097]: I0312 18:31:19.109562 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 12 18:31:19.111952 master-0 kubenswrapper[29097]: I0312 18:31:19.111868 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 12 18:31:19.130555 master-0 kubenswrapper[29097]: I0312 18:31:19.130153 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 12 18:31:19.134525 master-0 kubenswrapper[29097]: I0312 18:31:19.132286 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 18:31:19.188955 master-0 kubenswrapper[29097]: I0312 18:31:19.188844 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 12 18:31:19.206577 master-0 kubenswrapper[29097]: I0312 18:31:19.206485 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 12 18:31:19.299274 master-0 kubenswrapper[29097]: I0312 18:31:19.299201 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 12 18:31:19.349784 master-0 kubenswrapper[29097]: I0312 18:31:19.348879 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 12 18:31:19.357285 master-0 kubenswrapper[29097]: I0312 18:31:19.357224 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 12 18:31:19.368235 master-0 kubenswrapper[29097]: I0312 18:31:19.368190 29097 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"serving-cert" Mar 12 18:31:19.393560 master-0 kubenswrapper[29097]: I0312 18:31:19.393426 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 12 18:31:19.431825 master-0 kubenswrapper[29097]: I0312 18:31:19.431631 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 12 18:31:19.441085 master-0 kubenswrapper[29097]: I0312 18:31:19.440960 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 12 18:31:19.445827 master-0 kubenswrapper[29097]: I0312 18:31:19.445766 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 12 18:31:19.458408 master-0 kubenswrapper[29097]: I0312 18:31:19.458365 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 12 18:31:19.464809 master-0 kubenswrapper[29097]: I0312 18:31:19.464770 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 12 18:31:19.477578 master-0 kubenswrapper[29097]: I0312 18:31:19.477498 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 12 18:31:19.499573 master-0 kubenswrapper[29097]: I0312 18:31:19.499322 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 12 18:31:19.519048 master-0 kubenswrapper[29097]: I0312 18:31:19.518978 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-fbn8j" Mar 12 18:31:19.536109 master-0 kubenswrapper[29097]: I0312 18:31:19.536019 29097 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 12 18:31:19.547361 master-0 kubenswrapper[29097]: I0312 18:31:19.547305 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 12 18:31:19.575584 master-0 kubenswrapper[29097]: I0312 18:31:19.565484 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-g4mv5" Mar 12 18:31:19.686923 master-0 kubenswrapper[29097]: I0312 18:31:19.686825 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 12 18:31:19.767487 master-0 kubenswrapper[29097]: I0312 18:31:19.767236 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 12 18:31:19.792557 master-0 kubenswrapper[29097]: I0312 18:31:19.790819 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-8275t" Mar 12 18:31:19.819914 master-0 kubenswrapper[29097]: I0312 18:31:19.819754 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 12 18:31:19.953659 master-0 kubenswrapper[29097]: I0312 18:31:19.953573 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 12 18:31:20.005901 master-0 kubenswrapper[29097]: I0312 18:31:20.005769 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 12 18:31:20.026344 master-0 kubenswrapper[29097]: I0312 18:31:20.026144 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 12 18:31:20.062653 master-0 kubenswrapper[29097]: I0312 18:31:20.062576 29097 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 12 18:31:20.137560 master-0 kubenswrapper[29097]: I0312 18:31:20.137449 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 18:31:20.225201 master-0 kubenswrapper[29097]: I0312 18:31:20.225108 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 12 18:31:20.241245 master-0 kubenswrapper[29097]: I0312 18:31:20.241156 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 12 18:31:20.250205 master-0 kubenswrapper[29097]: I0312 18:31:20.250135 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 12 18:31:20.262110 master-0 kubenswrapper[29097]: I0312 18:31:20.262051 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 12 18:31:20.280654 master-0 kubenswrapper[29097]: I0312 18:31:20.280460 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 12 18:31:20.287939 master-0 kubenswrapper[29097]: I0312 18:31:20.287310 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 12 18:31:20.448471 master-0 kubenswrapper[29097]: I0312 18:31:20.448265 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 12 18:31:20.480601 master-0 kubenswrapper[29097]: I0312 18:31:20.474349 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 18:31:20.480601 master-0 kubenswrapper[29097]: I0312 18:31:20.479919 29097 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 12 18:31:20.500041 master-0 kubenswrapper[29097]: I0312 18:31:20.499964 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 12 18:31:20.571062 master-0 kubenswrapper[29097]: I0312 18:31:20.570976 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 12 18:31:20.592910 master-0 kubenswrapper[29097]: I0312 18:31:20.592828 29097 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 12 18:31:20.674284 master-0 kubenswrapper[29097]: I0312 18:31:20.673423 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 12 18:31:20.725404 master-0 kubenswrapper[29097]: I0312 18:31:20.725338 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 12 18:31:20.760791 master-0 kubenswrapper[29097]: I0312 18:31:20.760683 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 12 18:31:20.917729 master-0 kubenswrapper[29097]: I0312 18:31:20.917680 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 12 18:31:20.925009 master-0 kubenswrapper[29097]: I0312 18:31:20.924948 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 12 18:31:20.986838 master-0 kubenswrapper[29097]: I0312 18:31:20.986752 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 12 18:31:21.020137 master-0 kubenswrapper[29097]: I0312 18:31:21.020050 29097 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 12 18:31:21.062117 master-0 kubenswrapper[29097]: I0312 18:31:21.062048 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 12 18:31:21.065226 master-0 kubenswrapper[29097]: I0312 18:31:21.065191 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 12 18:31:21.086373 master-0 kubenswrapper[29097]: I0312 18:31:21.086278 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 12 18:31:21.108810 master-0 kubenswrapper[29097]: I0312 18:31:21.108638 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 12 18:31:21.114572 master-0 kubenswrapper[29097]: I0312 18:31:21.114483 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 12 18:31:21.142902 master-0 kubenswrapper[29097]: I0312 18:31:21.142840 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 12 18:31:21.149280 master-0 kubenswrapper[29097]: I0312 18:31:21.149214 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 12 18:31:21.206402 master-0 kubenswrapper[29097]: E0312 18:31:21.206327 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547 Mar 12 18:31:21.215205 master-0 
kubenswrapper[29097]: I0312 18:31:21.215126 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 12 18:31:21.283241 master-0 kubenswrapper[29097]: I0312 18:31:21.282944 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 12 18:31:21.287683 master-0 kubenswrapper[29097]: I0312 18:31:21.287474 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-hhnmb" Mar 12 18:31:21.292549 master-0 kubenswrapper[29097]: I0312 18:31:21.292459 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 12 18:31:21.303294 master-0 kubenswrapper[29097]: I0312 18:31:21.303234 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 12 18:31:21.312842 master-0 kubenswrapper[29097]: I0312 18:31:21.312744 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 12 18:31:21.364840 master-0 kubenswrapper[29097]: I0312 18:31:21.364700 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 12 18:31:21.415055 master-0 kubenswrapper[29097]: I0312 18:31:21.414989 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-ssqhn" Mar 12 18:31:21.442480 master-0 kubenswrapper[29097]: I0312 18:31:21.442440 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-g4mx9" Mar 12 18:31:21.494797 master-0 kubenswrapper[29097]: I0312 18:31:21.494745 29097 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 12 18:31:21.499947 master-0 kubenswrapper[29097]: I0312 18:31:21.499908 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 12 18:31:21.515194 master-0 kubenswrapper[29097]: I0312 18:31:21.515145 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 12 18:31:21.529344 master-0 kubenswrapper[29097]: I0312 18:31:21.529295 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-cjzzq" Mar 12 18:31:21.540749 master-0 kubenswrapper[29097]: I0312 18:31:21.540682 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 12 18:31:21.541043 master-0 kubenswrapper[29097]: I0312 18:31:21.541013 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-8nmsp" Mar 12 18:31:21.557807 master-0 kubenswrapper[29097]: I0312 18:31:21.557759 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-6gh5d" Mar 12 18:31:21.592654 master-0 kubenswrapper[29097]: I0312 18:31:21.592508 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 12 18:31:21.610977 master-0 kubenswrapper[29097]: I0312 18:31:21.610926 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 18:31:21.618022 master-0 kubenswrapper[29097]: I0312 18:31:21.617918 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 12 18:31:21.624250 master-0 
kubenswrapper[29097]: I0312 18:31:21.624187 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 12 18:31:21.666570 master-0 kubenswrapper[29097]: I0312 18:31:21.666431 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 12 18:31:21.686941 master-0 kubenswrapper[29097]: I0312 18:31:21.686824 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 12 18:31:21.742609 master-0 kubenswrapper[29097]: I0312 18:31:21.742541 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 12 18:31:21.792630 master-0 kubenswrapper[29097]: I0312 18:31:21.792576 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 12 18:31:21.892023 master-0 kubenswrapper[29097]: I0312 18:31:21.891879 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 12 18:31:22.025504 master-0 kubenswrapper[29097]: I0312 18:31:22.025436 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 12 18:31:22.058176 master-0 kubenswrapper[29097]: I0312 18:31:22.058109 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 12 18:31:22.129295 master-0 kubenswrapper[29097]: I0312 18:31:22.129190 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 12 18:31:22.147212 master-0 kubenswrapper[29097]: I0312 18:31:22.147094 29097 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"metrics-client-ca" Mar 12 18:31:22.276883 master-0 kubenswrapper[29097]: I0312 18:31:22.276825 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 12 18:31:22.347278 master-0 kubenswrapper[29097]: I0312 18:31:22.347233 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-9f7ld" Mar 12 18:31:22.442454 master-0 kubenswrapper[29097]: I0312 18:31:22.442362 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 12 18:31:22.453351 master-0 kubenswrapper[29097]: I0312 18:31:22.453307 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 12 18:31:22.495436 master-0 kubenswrapper[29097]: I0312 18:31:22.495391 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 12 18:31:22.679744 master-0 kubenswrapper[29097]: I0312 18:31:22.679675 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-nv88b" Mar 12 18:31:22.692439 master-0 kubenswrapper[29097]: I0312 18:31:22.692380 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 12 18:31:22.694771 master-0 kubenswrapper[29097]: I0312 18:31:22.694649 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 12 18:31:22.769779 master-0 kubenswrapper[29097]: I0312 18:31:22.769728 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 12 18:31:22.799752 master-0 kubenswrapper[29097]: I0312 18:31:22.799678 29097 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 12 18:31:22.938629 master-0 kubenswrapper[29097]: I0312 18:31:22.938539 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 12 18:31:23.007882 master-0 kubenswrapper[29097]: I0312 18:31:23.007697 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 12 18:31:23.261327 master-0 kubenswrapper[29097]: I0312 18:31:23.261171 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 12 18:31:23.524180 master-0 kubenswrapper[29097]: I0312 18:31:23.523977 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 12 18:31:23.526711 master-0 kubenswrapper[29097]: I0312 18:31:23.526659 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 12 18:31:23.720332 master-0 kubenswrapper[29097]: I0312 18:31:23.720245 29097 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 12 18:31:23.720705 master-0 kubenswrapper[29097]: I0312 18:31:23.720367 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 12 18:31:23.720705 master-0 kubenswrapper[29097]: I0312 18:31:23.720446 29097 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:31:23.721315 master-0 kubenswrapper[29097]: I0312 18:31:23.721248 29097 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"62dd3567f8e7ab9a6f6b7c22887f90bf8f9e191c219728fb147a35e29d0e7d8e"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 12 18:31:23.721554 master-0 kubenswrapper[29097]: I0312 18:31:23.721455 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager" containerID="cri-o://62dd3567f8e7ab9a6f6b7c22887f90bf8f9e191c219728fb147a35e29d0e7d8e" gracePeriod=30 Mar 12 18:31:24.294663 master-0 kubenswrapper[29097]: I0312 18:31:24.294595 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 12 18:31:24.961572 master-0 kubenswrapper[29097]: I0312 18:31:24.961473 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 12 18:31:25.254763 master-0 kubenswrapper[29097]: I0312 18:31:25.254639 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-gzn76" Mar 12 18:31:25.272482 master-0 kubenswrapper[29097]: I0312 18:31:25.272417 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 12 18:31:25.312970 master-0 kubenswrapper[29097]: I0312 18:31:25.312885 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 12 18:31:26.071459 
master-0 kubenswrapper[29097]: I0312 18:31:26.071418 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 12 18:31:27.427064 master-0 kubenswrapper[29097]: I0312 18:31:27.427017 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 12 18:31:28.247062 master-0 kubenswrapper[29097]: I0312 18:31:28.246978 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 12 18:31:29.080856 master-0 kubenswrapper[29097]: I0312 18:31:29.080774 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 12 18:31:29.435391 master-0 kubenswrapper[29097]: I0312 18:31:29.435308 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 12 18:31:29.487281 master-0 kubenswrapper[29097]: I0312 18:31:29.487238 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 12 18:31:29.580021 master-0 kubenswrapper[29097]: I0312 18:31:29.579975 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 12 18:31:31.435006 master-0 kubenswrapper[29097]: I0312 18:31:31.434929 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 12 18:31:31.782376 master-0 kubenswrapper[29097]: I0312 18:31:31.782243 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7hcn2cdka018u" Mar 12 18:31:32.085956 master-0 kubenswrapper[29097]: I0312 18:31:32.085893 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 
12 18:31:32.108984 master-0 kubenswrapper[29097]: I0312 18:31:32.108932 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 12 18:31:32.167042 master-0 kubenswrapper[29097]: I0312 18:31:32.166993 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 12 18:31:32.532362 master-0 kubenswrapper[29097]: I0312 18:31:32.532242 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-pp56m" Mar 12 18:31:32.618492 master-0 kubenswrapper[29097]: I0312 18:31:32.618418 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 12 18:31:33.358119 master-0 kubenswrapper[29097]: I0312 18:31:33.358051 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 12 18:31:34.244507 master-0 kubenswrapper[29097]: I0312 18:31:34.244434 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 12 18:31:34.559140 master-0 kubenswrapper[29097]: I0312 18:31:34.559063 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 12 18:31:34.631372 master-0 kubenswrapper[29097]: I0312 18:31:34.631244 29097 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 12 18:31:34.632321 master-0 kubenswrapper[29097]: I0312 18:31:34.632207 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=47.63218262 podStartE2EDuration="47.63218262s" podCreationTimestamp="2026-03-12 18:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:31:11.208899196 +0000 UTC m=+110.762879333" watchObservedRunningTime="2026-03-12 18:31:34.63218262 +0000 UTC m=+134.186162727" Mar 12 18:31:34.643225 master-0 kubenswrapper[29097]: I0312 18:31:34.643146 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 12 18:31:34.643382 master-0 kubenswrapper[29097]: I0312 18:31:34.643264 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 12 18:31:34.651472 master-0 kubenswrapper[29097]: I0312 18:31:34.651423 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:31:34.674280 master-0 kubenswrapper[29097]: I0312 18:31:34.674165 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=24.674135224 podStartE2EDuration="24.674135224s" podCreationTimestamp="2026-03-12 18:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:31:34.672296457 +0000 UTC m=+134.226276594" watchObservedRunningTime="2026-03-12 18:31:34.674135224 +0000 UTC m=+134.228115361" Mar 12 18:31:35.116135 master-0 kubenswrapper[29097]: I0312 18:31:35.116083 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 12 18:31:36.658972 master-0 kubenswrapper[29097]: I0312 18:31:36.658906 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 12 18:31:36.800767 master-0 kubenswrapper[29097]: I0312 18:31:36.800702 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 12 18:31:37.374437 master-0 
kubenswrapper[29097]: I0312 18:31:37.374366 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 12 18:31:37.462033 master-0 kubenswrapper[29097]: I0312 18:31:37.461969 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 12 18:31:44.355103 master-0 kubenswrapper[29097]: I0312 18:31:44.355020 29097 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 12 18:31:44.356937 master-0 kubenswrapper[29097]: I0312 18:31:44.356886 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="3a18cac8a90d6913a6a0391d805cddc9" containerName="startup-monitor" containerID="cri-o://ecfeb1293f9a1b97113e934bdb74cec11a8b89f956150a594f150e3f38e9f909" gracePeriod=5 Mar 12 18:31:49.490431 master-0 kubenswrapper[29097]: I0312 18:31:49.490335 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_3a18cac8a90d6913a6a0391d805cddc9/startup-monitor/0.log" Mar 12 18:31:49.491563 master-0 kubenswrapper[29097]: I0312 18:31:49.490426 29097 generic.go:334] "Generic (PLEG): container finished" podID="3a18cac8a90d6913a6a0391d805cddc9" containerID="ecfeb1293f9a1b97113e934bdb74cec11a8b89f956150a594f150e3f38e9f909" exitCode=137 Mar 12 18:31:49.959505 master-0 kubenswrapper[29097]: I0312 18:31:49.959411 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_3a18cac8a90d6913a6a0391d805cddc9/startup-monitor/0.log" Mar 12 18:31:49.959776 master-0 kubenswrapper[29097]: I0312 18:31:49.959601 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:31:50.077650 master-0 kubenswrapper[29097]: I0312 18:31:50.077561 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " Mar 12 18:31:50.077981 master-0 kubenswrapper[29097]: I0312 18:31:50.077719 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " Mar 12 18:31:50.077981 master-0 kubenswrapper[29097]: I0312 18:31:50.077759 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " Mar 12 18:31:50.077981 master-0 kubenswrapper[29097]: I0312 18:31:50.077853 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " Mar 12 18:31:50.077981 master-0 kubenswrapper[29097]: I0312 18:31:50.077874 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock" (OuterVolumeSpecName: "var-lock") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:31:50.077981 master-0 kubenswrapper[29097]: I0312 18:31:50.077920 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log" (OuterVolumeSpecName: "var-log") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:31:50.077981 master-0 kubenswrapper[29097]: I0312 18:31:50.077906 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " Mar 12 18:31:50.078577 master-0 kubenswrapper[29097]: I0312 18:31:50.077957 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:31:50.078577 master-0 kubenswrapper[29097]: I0312 18:31:50.077980 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests" (OuterVolumeSpecName: "manifests") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:31:50.079044 master-0 kubenswrapper[29097]: I0312 18:31:50.078974 29097 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") on node \"master-0\" DevicePath \"\"" Mar 12 18:31:50.079044 master-0 kubenswrapper[29097]: I0312 18:31:50.079031 29097 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") on node \"master-0\" DevicePath \"\"" Mar 12 18:31:50.079224 master-0 kubenswrapper[29097]: I0312 18:31:50.079051 29097 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 18:31:50.079224 master-0 kubenswrapper[29097]: I0312 18:31:50.079071 29097 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:31:50.112762 master-0 kubenswrapper[29097]: I0312 18:31:50.112687 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:31:50.180958 master-0 kubenswrapper[29097]: I0312 18:31:50.180843 29097 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:31:50.500941 master-0 kubenswrapper[29097]: I0312 18:31:50.500894 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_3a18cac8a90d6913a6a0391d805cddc9/startup-monitor/0.log" Mar 12 18:31:50.501994 master-0 kubenswrapper[29097]: I0312 18:31:50.501811 29097 scope.go:117] "RemoveContainer" containerID="ecfeb1293f9a1b97113e934bdb74cec11a8b89f956150a594f150e3f38e9f909" Mar 12 18:31:50.501994 master-0 kubenswrapper[29097]: I0312 18:31:50.501880 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:31:50.736341 master-0 kubenswrapper[29097]: I0312 18:31:50.736156 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a18cac8a90d6913a6a0391d805cddc9" path="/var/lib/kubelet/pods/3a18cac8a90d6913a6a0391d805cddc9/volumes" Mar 12 18:31:50.737012 master-0 kubenswrapper[29097]: I0312 18:31:50.736966 29097 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Mar 12 18:31:50.760300 master-0 kubenswrapper[29097]: I0312 18:31:50.760139 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 12 18:31:50.760300 master-0 kubenswrapper[29097]: I0312 18:31:50.760207 29097 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="675ed95d-5f5f-4f03-8641-dda686113fb2" Mar 12 18:31:50.767832 
master-0 kubenswrapper[29097]: I0312 18:31:50.767780 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 12 18:31:50.770497 master-0 kubenswrapper[29097]: I0312 18:31:50.767871 29097 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="675ed95d-5f5f-4f03-8641-dda686113fb2" Mar 12 18:31:54.539646 master-0 kubenswrapper[29097]: I0312 18:31:54.539320 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_49835aec35bdc5feca0d7cf24779b8da/kube-controller-manager/1.log" Mar 12 18:31:54.548428 master-0 kubenswrapper[29097]: I0312 18:31:54.542030 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_49835aec35bdc5feca0d7cf24779b8da/kube-controller-manager/0.log" Mar 12 18:31:54.548428 master-0 kubenswrapper[29097]: I0312 18:31:54.542102 29097 generic.go:334] "Generic (PLEG): container finished" podID="49835aec35bdc5feca0d7cf24779b8da" containerID="62dd3567f8e7ab9a6f6b7c22887f90bf8f9e191c219728fb147a35e29d0e7d8e" exitCode=137 Mar 12 18:31:54.548428 master-0 kubenswrapper[29097]: I0312 18:31:54.542172 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"49835aec35bdc5feca0d7cf24779b8da","Type":"ContainerDied","Data":"62dd3567f8e7ab9a6f6b7c22887f90bf8f9e191c219728fb147a35e29d0e7d8e"} Mar 12 18:31:54.548428 master-0 kubenswrapper[29097]: I0312 18:31:54.542222 29097 scope.go:117] "RemoveContainer" containerID="996ba8fb061459a072fc0dac62d85e8970305954c92459dbfc764353eca2dc98" Mar 12 18:31:55.553941 master-0 kubenswrapper[29097]: I0312 18:31:55.553865 29097 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_49835aec35bdc5feca0d7cf24779b8da/kube-controller-manager/1.log" Mar 12 18:31:55.555870 master-0 kubenswrapper[29097]: I0312 18:31:55.555795 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"49835aec35bdc5feca0d7cf24779b8da","Type":"ContainerStarted","Data":"d8bc1ab80b512a9b34b5a39f82b7bfc61939a83d8fe4158a7181d62b837fd9c1"} Mar 12 18:32:03.242289 master-0 kubenswrapper[29097]: I0312 18:32:03.242208 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:32:03.720807 master-0 kubenswrapper[29097]: I0312 18:32:03.720696 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:32:03.728020 master-0 kubenswrapper[29097]: I0312 18:32:03.727980 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:32:04.635212 master-0 kubenswrapper[29097]: I0312 18:32:04.635132 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:32:12.435728 master-0 kubenswrapper[29097]: I0312 18:32:12.435672 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-jq5c9"] Mar 12 18:32:12.436673 master-0 kubenswrapper[29097]: E0312 18:32:12.435898 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a18cac8a90d6913a6a0391d805cddc9" containerName="startup-monitor" Mar 12 18:32:12.436673 master-0 kubenswrapper[29097]: I0312 18:32:12.435915 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a18cac8a90d6913a6a0391d805cddc9" 
containerName="startup-monitor" Mar 12 18:32:12.436673 master-0 kubenswrapper[29097]: E0312 18:32:12.435934 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36c73bd-cc3f-4226-94b9-885671190812" containerName="installer" Mar 12 18:32:12.436673 master-0 kubenswrapper[29097]: I0312 18:32:12.435944 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36c73bd-cc3f-4226-94b9-885671190812" containerName="installer" Mar 12 18:32:12.436673 master-0 kubenswrapper[29097]: I0312 18:32:12.436069 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a18cac8a90d6913a6a0391d805cddc9" containerName="startup-monitor" Mar 12 18:32:12.436673 master-0 kubenswrapper[29097]: I0312 18:32:12.436103 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="b36c73bd-cc3f-4226-94b9-885671190812" containerName="installer" Mar 12 18:32:12.436673 master-0 kubenswrapper[29097]: I0312 18:32:12.436524 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-jq5c9" Mar 12 18:32:12.439477 master-0 kubenswrapper[29097]: I0312 18:32:12.439095 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 12 18:32:12.439477 master-0 kubenswrapper[29097]: I0312 18:32:12.439254 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 12 18:32:12.441119 master-0 kubenswrapper[29097]: I0312 18:32:12.440136 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-n2hnv" Mar 12 18:32:12.441119 master-0 kubenswrapper[29097]: I0312 18:32:12.440920 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 12 18:32:12.442997 master-0 kubenswrapper[29097]: I0312 18:32:12.442679 29097 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 12 18:32:12.447703 master-0 kubenswrapper[29097]: I0312 18:32:12.447660 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 12 18:32:12.465497 master-0 kubenswrapper[29097]: I0312 18:32:12.465429 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-jq5c9"] Mar 12 18:32:12.529919 master-0 kubenswrapper[29097]: I0312 18:32:12.529880 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cd31e59-6cb6-42b7-8384-56a1d9d8a482-trusted-ca\") pod \"console-operator-6c7fb6b958-jq5c9\" (UID: \"4cd31e59-6cb6-42b7-8384-56a1d9d8a482\") " pod="openshift-console-operator/console-operator-6c7fb6b958-jq5c9" Mar 12 18:32:12.530163 master-0 kubenswrapper[29097]: I0312 18:32:12.530149 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cd31e59-6cb6-42b7-8384-56a1d9d8a482-config\") pod \"console-operator-6c7fb6b958-jq5c9\" (UID: \"4cd31e59-6cb6-42b7-8384-56a1d9d8a482\") " pod="openshift-console-operator/console-operator-6c7fb6b958-jq5c9" Mar 12 18:32:12.530284 master-0 kubenswrapper[29097]: I0312 18:32:12.530271 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqndl\" (UniqueName: \"kubernetes.io/projected/4cd31e59-6cb6-42b7-8384-56a1d9d8a482-kube-api-access-cqndl\") pod \"console-operator-6c7fb6b958-jq5c9\" (UID: \"4cd31e59-6cb6-42b7-8384-56a1d9d8a482\") " pod="openshift-console-operator/console-operator-6c7fb6b958-jq5c9" Mar 12 18:32:12.530394 master-0 kubenswrapper[29097]: I0312 18:32:12.530381 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd31e59-6cb6-42b7-8384-56a1d9d8a482-serving-cert\") pod \"console-operator-6c7fb6b958-jq5c9\" (UID: \"4cd31e59-6cb6-42b7-8384-56a1d9d8a482\") " pod="openshift-console-operator/console-operator-6c7fb6b958-jq5c9" Mar 12 18:32:12.541139 master-0 kubenswrapper[29097]: I0312 18:32:12.541077 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-n2tfc"] Mar 12 18:32:12.542033 master-0 kubenswrapper[29097]: I0312 18:32:12.542002 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-n2tfc" Mar 12 18:32:12.544865 master-0 kubenswrapper[29097]: I0312 18:32:12.544839 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 12 18:32:12.545140 master-0 kubenswrapper[29097]: I0312 18:32:12.545090 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 12 18:32:12.545140 master-0 kubenswrapper[29097]: I0312 18:32:12.545114 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-mbn45" Mar 12 18:32:12.545424 master-0 kubenswrapper[29097]: I0312 18:32:12.545329 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 12 18:32:12.552394 master-0 kubenswrapper[29097]: I0312 18:32:12.552331 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-n2tfc"] Mar 12 18:32:12.631364 master-0 kubenswrapper[29097]: I0312 18:32:12.631314 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqndl\" (UniqueName: \"kubernetes.io/projected/4cd31e59-6cb6-42b7-8384-56a1d9d8a482-kube-api-access-cqndl\") pod \"console-operator-6c7fb6b958-jq5c9\" (UID: \"4cd31e59-6cb6-42b7-8384-56a1d9d8a482\") " 
pod="openshift-console-operator/console-operator-6c7fb6b958-jq5c9" Mar 12 18:32:12.631690 master-0 kubenswrapper[29097]: I0312 18:32:12.631652 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd31e59-6cb6-42b7-8384-56a1d9d8a482-serving-cert\") pod \"console-operator-6c7fb6b958-jq5c9\" (UID: \"4cd31e59-6cb6-42b7-8384-56a1d9d8a482\") " pod="openshift-console-operator/console-operator-6c7fb6b958-jq5c9" Mar 12 18:32:12.631897 master-0 kubenswrapper[29097]: I0312 18:32:12.631875 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cd31e59-6cb6-42b7-8384-56a1d9d8a482-trusted-ca\") pod \"console-operator-6c7fb6b958-jq5c9\" (UID: \"4cd31e59-6cb6-42b7-8384-56a1d9d8a482\") " pod="openshift-console-operator/console-operator-6c7fb6b958-jq5c9" Mar 12 18:32:12.631956 master-0 kubenswrapper[29097]: I0312 18:32:12.631903 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cd31e59-6cb6-42b7-8384-56a1d9d8a482-config\") pod \"console-operator-6c7fb6b958-jq5c9\" (UID: \"4cd31e59-6cb6-42b7-8384-56a1d9d8a482\") " pod="openshift-console-operator/console-operator-6c7fb6b958-jq5c9" Mar 12 18:32:12.633867 master-0 kubenswrapper[29097]: I0312 18:32:12.633770 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cd31e59-6cb6-42b7-8384-56a1d9d8a482-config\") pod \"console-operator-6c7fb6b958-jq5c9\" (UID: \"4cd31e59-6cb6-42b7-8384-56a1d9d8a482\") " pod="openshift-console-operator/console-operator-6c7fb6b958-jq5c9" Mar 12 18:32:12.634600 master-0 kubenswrapper[29097]: I0312 18:32:12.634394 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cd31e59-6cb6-42b7-8384-56a1d9d8a482-trusted-ca\") pod 
\"console-operator-6c7fb6b958-jq5c9\" (UID: \"4cd31e59-6cb6-42b7-8384-56a1d9d8a482\") " pod="openshift-console-operator/console-operator-6c7fb6b958-jq5c9" Mar 12 18:32:12.636770 master-0 kubenswrapper[29097]: I0312 18:32:12.636737 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd31e59-6cb6-42b7-8384-56a1d9d8a482-serving-cert\") pod \"console-operator-6c7fb6b958-jq5c9\" (UID: \"4cd31e59-6cb6-42b7-8384-56a1d9d8a482\") " pod="openshift-console-operator/console-operator-6c7fb6b958-jq5c9" Mar 12 18:32:12.647841 master-0 kubenswrapper[29097]: I0312 18:32:12.647806 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqndl\" (UniqueName: \"kubernetes.io/projected/4cd31e59-6cb6-42b7-8384-56a1d9d8a482-kube-api-access-cqndl\") pod \"console-operator-6c7fb6b958-jq5c9\" (UID: \"4cd31e59-6cb6-42b7-8384-56a1d9d8a482\") " pod="openshift-console-operator/console-operator-6c7fb6b958-jq5c9" Mar 12 18:32:12.733504 master-0 kubenswrapper[29097]: I0312 18:32:12.733367 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0b73b25-16e0-4a96-99fa-c50a127bed68-cert\") pod \"ingress-canary-n2tfc\" (UID: \"a0b73b25-16e0-4a96-99fa-c50a127bed68\") " pod="openshift-ingress-canary/ingress-canary-n2tfc" Mar 12 18:32:12.733728 master-0 kubenswrapper[29097]: I0312 18:32:12.733683 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-779x5\" (UniqueName: \"kubernetes.io/projected/a0b73b25-16e0-4a96-99fa-c50a127bed68-kube-api-access-779x5\") pod \"ingress-canary-n2tfc\" (UID: \"a0b73b25-16e0-4a96-99fa-c50a127bed68\") " pod="openshift-ingress-canary/ingress-canary-n2tfc" Mar 12 18:32:12.750753 master-0 kubenswrapper[29097]: I0312 18:32:12.750708 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-jq5c9" Mar 12 18:32:12.842525 master-0 kubenswrapper[29097]: I0312 18:32:12.837680 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-779x5\" (UniqueName: \"kubernetes.io/projected/a0b73b25-16e0-4a96-99fa-c50a127bed68-kube-api-access-779x5\") pod \"ingress-canary-n2tfc\" (UID: \"a0b73b25-16e0-4a96-99fa-c50a127bed68\") " pod="openshift-ingress-canary/ingress-canary-n2tfc" Mar 12 18:32:12.842525 master-0 kubenswrapper[29097]: I0312 18:32:12.837942 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0b73b25-16e0-4a96-99fa-c50a127bed68-cert\") pod \"ingress-canary-n2tfc\" (UID: \"a0b73b25-16e0-4a96-99fa-c50a127bed68\") " pod="openshift-ingress-canary/ingress-canary-n2tfc" Mar 12 18:32:12.858539 master-0 kubenswrapper[29097]: I0312 18:32:12.850611 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0b73b25-16e0-4a96-99fa-c50a127bed68-cert\") pod \"ingress-canary-n2tfc\" (UID: \"a0b73b25-16e0-4a96-99fa-c50a127bed68\") " pod="openshift-ingress-canary/ingress-canary-n2tfc" Mar 12 18:32:12.878541 master-0 kubenswrapper[29097]: I0312 18:32:12.875365 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-779x5\" (UniqueName: \"kubernetes.io/projected/a0b73b25-16e0-4a96-99fa-c50a127bed68-kube-api-access-779x5\") pod \"ingress-canary-n2tfc\" (UID: \"a0b73b25-16e0-4a96-99fa-c50a127bed68\") " pod="openshift-ingress-canary/ingress-canary-n2tfc" Mar 12 18:32:13.161624 master-0 kubenswrapper[29097]: I0312 18:32:13.161569 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-n2tfc" Mar 12 18:32:13.206962 master-0 kubenswrapper[29097]: I0312 18:32:13.206284 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-jq5c9"] Mar 12 18:32:13.220470 master-0 kubenswrapper[29097]: W0312 18:32:13.220407 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cd31e59_6cb6_42b7_8384_56a1d9d8a482.slice/crio-9960cd457d22b9afe98dcd44e8bf19d624697589c1fb8ebe8020851dbf4042a8 WatchSource:0}: Error finding container 9960cd457d22b9afe98dcd44e8bf19d624697589c1fb8ebe8020851dbf4042a8: Status 404 returned error can't find the container with id 9960cd457d22b9afe98dcd44e8bf19d624697589c1fb8ebe8020851dbf4042a8 Mar 12 18:32:13.618931 master-0 kubenswrapper[29097]: I0312 18:32:13.618851 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-n2tfc"] Mar 12 18:32:13.620619 master-0 kubenswrapper[29097]: W0312 18:32:13.620558 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0b73b25_16e0_4a96_99fa_c50a127bed68.slice/crio-31687c27422860ccd1abe7866f33637706defcae2c70790471d7ef1048da2dcc WatchSource:0}: Error finding container 31687c27422860ccd1abe7866f33637706defcae2c70790471d7ef1048da2dcc: Status 404 returned error can't find the container with id 31687c27422860ccd1abe7866f33637706defcae2c70790471d7ef1048da2dcc Mar 12 18:32:13.693533 master-0 kubenswrapper[29097]: I0312 18:32:13.693243 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-jq5c9" event={"ID":"4cd31e59-6cb6-42b7-8384-56a1d9d8a482","Type":"ContainerStarted","Data":"9960cd457d22b9afe98dcd44e8bf19d624697589c1fb8ebe8020851dbf4042a8"} Mar 12 18:32:13.695995 master-0 kubenswrapper[29097]: I0312 18:32:13.695937 
29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-n2tfc" event={"ID":"a0b73b25-16e0-4a96-99fa-c50a127bed68","Type":"ContainerStarted","Data":"31687c27422860ccd1abe7866f33637706defcae2c70790471d7ef1048da2dcc"}
Mar 12 18:32:14.703721 master-0 kubenswrapper[29097]: I0312 18:32:14.703664 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-n2tfc" event={"ID":"a0b73b25-16e0-4a96-99fa-c50a127bed68","Type":"ContainerStarted","Data":"47b4576ac560833aee5f9500bee2ef66ddcc0021cc95385355350979cc65e962"}
Mar 12 18:32:14.721434 master-0 kubenswrapper[29097]: I0312 18:32:14.721360 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-n2tfc" podStartSLOduration=2.721340236 podStartE2EDuration="2.721340236s" podCreationTimestamp="2026-03-12 18:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:32:14.716027512 +0000 UTC m=+174.270007609" watchObservedRunningTime="2026-03-12 18:32:14.721340236 +0000 UTC m=+174.275320333"
Mar 12 18:32:16.719026 master-0 kubenswrapper[29097]: I0312 18:32:16.718874 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-jq5c9" event={"ID":"4cd31e59-6cb6-42b7-8384-56a1d9d8a482","Type":"ContainerStarted","Data":"76d00ce7ea4b39b8e8893f2e60d4c7eef049c7529b991520384cdc0e70131b6b"}
Mar 12 18:32:16.719751 master-0 kubenswrapper[29097]: I0312 18:32:16.719098 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-6c7fb6b958-jq5c9"
Mar 12 18:32:16.738060 master-0 kubenswrapper[29097]: I0312 18:32:16.737979 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-6c7fb6b958-jq5c9" podStartSLOduration=1.510028443 podStartE2EDuration="4.737956804s" podCreationTimestamp="2026-03-12 18:32:12 +0000 UTC" firstStartedPulling="2026-03-12 18:32:13.225706409 +0000 UTC m=+172.779686516" lastFinishedPulling="2026-03-12 18:32:16.45363476 +0000 UTC m=+176.007614877" observedRunningTime="2026-03-12 18:32:16.734778154 +0000 UTC m=+176.288758271" watchObservedRunningTime="2026-03-12 18:32:16.737956804 +0000 UTC m=+176.291936911"
Mar 12 18:32:17.012348 master-0 kubenswrapper[29097]: I0312 18:32:17.012248 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-6c7fb6b958-jq5c9"
Mar 12 18:32:17.217923 master-0 kubenswrapper[29097]: I0312 18:32:17.217878 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-84f57b9877-lsc92"]
Mar 12 18:32:17.218690 master-0 kubenswrapper[29097]: I0312 18:32:17.218673 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-84f57b9877-lsc92"
Mar 12 18:32:17.220402 master-0 kubenswrapper[29097]: I0312 18:32:17.220364 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-vt5nq"
Mar 12 18:32:17.221142 master-0 kubenswrapper[29097]: I0312 18:32:17.221122 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 12 18:32:17.221264 master-0 kubenswrapper[29097]: I0312 18:32:17.221214 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 12 18:32:17.235690 master-0 kubenswrapper[29097]: I0312 18:32:17.235649 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-84f57b9877-lsc92"]
Mar 12 18:32:17.319788 master-0 kubenswrapper[29097]: I0312 18:32:17.319743 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjph6\" (UniqueName: \"kubernetes.io/projected/60ba51da-3daf-4608-9269-b10211a184e9-kube-api-access-sjph6\") pod \"downloads-84f57b9877-lsc92\" (UID: \"60ba51da-3daf-4608-9269-b10211a184e9\") " pod="openshift-console/downloads-84f57b9877-lsc92"
Mar 12 18:32:17.421239 master-0 kubenswrapper[29097]: I0312 18:32:17.421186 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjph6\" (UniqueName: \"kubernetes.io/projected/60ba51da-3daf-4608-9269-b10211a184e9-kube-api-access-sjph6\") pod \"downloads-84f57b9877-lsc92\" (UID: \"60ba51da-3daf-4608-9269-b10211a184e9\") " pod="openshift-console/downloads-84f57b9877-lsc92"
Mar 12 18:32:17.435584 master-0 kubenswrapper[29097]: I0312 18:32:17.435547 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjph6\" (UniqueName: \"kubernetes.io/projected/60ba51da-3daf-4608-9269-b10211a184e9-kube-api-access-sjph6\") pod \"downloads-84f57b9877-lsc92\" (UID: \"60ba51da-3daf-4608-9269-b10211a184e9\") " pod="openshift-console/downloads-84f57b9877-lsc92"
Mar 12 18:32:17.509577 master-0 kubenswrapper[29097]: I0312 18:32:17.508578 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-mhzdw"]
Mar 12 18:32:17.513653 master-0 kubenswrapper[29097]: I0312 18:32:17.513278 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cbd49d755-mhzdw"
Mar 12 18:32:17.522248 master-0 kubenswrapper[29097]: I0312 18:32:17.522157 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Mar 12 18:32:17.523182 master-0 kubenswrapper[29097]: I0312 18:32:17.522696 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 12 18:32:17.529949 master-0 kubenswrapper[29097]: I0312 18:32:17.529903 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-mhzdw"]
Mar 12 18:32:17.542650 master-0 kubenswrapper[29097]: I0312 18:32:17.538295 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-84f57b9877-lsc92"
Mar 12 18:32:17.624044 master-0 kubenswrapper[29097]: I0312 18:32:17.623873 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/063142d4-eff6-4421-91c9-28225ddbbbf2-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-mhzdw\" (UID: \"063142d4-eff6-4421-91c9-28225ddbbbf2\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-mhzdw"
Mar 12 18:32:17.624044 master-0 kubenswrapper[29097]: I0312 18:32:17.623958 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/063142d4-eff6-4421-91c9-28225ddbbbf2-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-mhzdw\" (UID: \"063142d4-eff6-4421-91c9-28225ddbbbf2\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-mhzdw"
Mar 12 18:32:17.725397 master-0 kubenswrapper[29097]: I0312 18:32:17.725352 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/063142d4-eff6-4421-91c9-28225ddbbbf2-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-mhzdw\" (UID: \"063142d4-eff6-4421-91c9-28225ddbbbf2\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-mhzdw"
Mar 12 18:32:17.725928 master-0 kubenswrapper[29097]: I0312 18:32:17.725431 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/063142d4-eff6-4421-91c9-28225ddbbbf2-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-mhzdw\" (UID: \"063142d4-eff6-4421-91c9-28225ddbbbf2\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-mhzdw"
Mar 12 18:32:17.725928 master-0 kubenswrapper[29097]: E0312 18:32:17.725550 29097 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Mar 12 18:32:17.725928 master-0 kubenswrapper[29097]: E0312 18:32:17.725611 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/063142d4-eff6-4421-91c9-28225ddbbbf2-networking-console-plugin-cert podName:063142d4-eff6-4421-91c9-28225ddbbbf2 nodeName:}" failed. No retries permitted until 2026-03-12 18:32:18.225591358 +0000 UTC m=+177.779571455 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/063142d4-eff6-4421-91c9-28225ddbbbf2-networking-console-plugin-cert") pod "networking-console-plugin-5cbd49d755-mhzdw" (UID: "063142d4-eff6-4421-91c9-28225ddbbbf2") : secret "networking-console-plugin-cert" not found
Mar 12 18:32:17.726768 master-0 kubenswrapper[29097]: I0312 18:32:17.726738 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/063142d4-eff6-4421-91c9-28225ddbbbf2-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-mhzdw\" (UID: \"063142d4-eff6-4421-91c9-28225ddbbbf2\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-mhzdw"
Mar 12 18:32:17.919049 master-0 kubenswrapper[29097]: I0312 18:32:17.918964 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-84f57b9877-lsc92"]
Mar 12 18:32:17.923409 master-0 kubenswrapper[29097]: W0312 18:32:17.923376 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60ba51da_3daf_4608_9269_b10211a184e9.slice/crio-1e8143b39e3a2c83e1dbdac26ec16334b751ba4b11018a88ceb917d804a8053c WatchSource:0}: Error finding container 1e8143b39e3a2c83e1dbdac26ec16334b751ba4b11018a88ceb917d804a8053c: Status 404 returned error can't find the container with id 1e8143b39e3a2c83e1dbdac26ec16334b751ba4b11018a88ceb917d804a8053c
Mar 12 18:32:18.230544 master-0 kubenswrapper[29097]: I0312 18:32:18.230424 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/063142d4-eff6-4421-91c9-28225ddbbbf2-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-mhzdw\" (UID: \"063142d4-eff6-4421-91c9-28225ddbbbf2\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-mhzdw"
Mar 12 18:32:18.230710 master-0 kubenswrapper[29097]: E0312 18:32:18.230605 29097 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Mar 12 18:32:18.230710 master-0 kubenswrapper[29097]: E0312 18:32:18.230654 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/063142d4-eff6-4421-91c9-28225ddbbbf2-networking-console-plugin-cert podName:063142d4-eff6-4421-91c9-28225ddbbbf2 nodeName:}" failed. No retries permitted until 2026-03-12 18:32:19.230640167 +0000 UTC m=+178.784620264 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/063142d4-eff6-4421-91c9-28225ddbbbf2-networking-console-plugin-cert") pod "networking-console-plugin-5cbd49d755-mhzdw" (UID: "063142d4-eff6-4421-91c9-28225ddbbbf2") : secret "networking-console-plugin-cert" not found
Mar 12 18:32:18.731774 master-0 kubenswrapper[29097]: I0312 18:32:18.731720 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-84f57b9877-lsc92" event={"ID":"60ba51da-3daf-4608-9269-b10211a184e9","Type":"ContainerStarted","Data":"1e8143b39e3a2c83e1dbdac26ec16334b751ba4b11018a88ceb917d804a8053c"}
Mar 12 18:32:19.243732 master-0 kubenswrapper[29097]: I0312 18:32:19.243676 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/063142d4-eff6-4421-91c9-28225ddbbbf2-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-mhzdw\" (UID: \"063142d4-eff6-4421-91c9-28225ddbbbf2\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-mhzdw"
Mar 12 18:32:19.243977 master-0 kubenswrapper[29097]: E0312 18:32:19.243843 29097 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Mar 12 18:32:19.243977 master-0 kubenswrapper[29097]: E0312 18:32:19.243908 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/063142d4-eff6-4421-91c9-28225ddbbbf2-networking-console-plugin-cert podName:063142d4-eff6-4421-91c9-28225ddbbbf2 nodeName:}" failed. No retries permitted until 2026-03-12 18:32:21.243894705 +0000 UTC m=+180.797874802 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/063142d4-eff6-4421-91c9-28225ddbbbf2-networking-console-plugin-cert") pod "networking-console-plugin-5cbd49d755-mhzdw" (UID: "063142d4-eff6-4421-91c9-28225ddbbbf2") : secret "networking-console-plugin-cert" not found
Mar 12 18:32:21.195606 master-0 kubenswrapper[29097]: E0312 18:32:21.195194 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547
Mar 12 18:32:21.281541 master-0 kubenswrapper[29097]: I0312 18:32:21.281448 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/063142d4-eff6-4421-91c9-28225ddbbbf2-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-mhzdw\" (UID: \"063142d4-eff6-4421-91c9-28225ddbbbf2\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-mhzdw"
Mar 12 18:32:21.281729 master-0 kubenswrapper[29097]: E0312 18:32:21.281632 29097 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Mar 12 18:32:21.281729 master-0 kubenswrapper[29097]: E0312 18:32:21.281695 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/063142d4-eff6-4421-91c9-28225ddbbbf2-networking-console-plugin-cert podName:063142d4-eff6-4421-91c9-28225ddbbbf2 nodeName:}" failed. No retries permitted until 2026-03-12 18:32:25.281678854 +0000 UTC m=+184.835658951 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/063142d4-eff6-4421-91c9-28225ddbbbf2-networking-console-plugin-cert") pod "networking-console-plugin-5cbd49d755-mhzdw" (UID: "063142d4-eff6-4421-91c9-28225ddbbbf2") : secret "networking-console-plugin-cert" not found
Mar 12 18:32:23.127835 master-0 kubenswrapper[29097]: I0312 18:32:23.125692 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-55f4db4f7b-mxwpx"]
Mar 12 18:32:23.127835 master-0 kubenswrapper[29097]: I0312 18:32:23.127273 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-55f4db4f7b-mxwpx"
Mar 12 18:32:23.130188 master-0 kubenswrapper[29097]: I0312 18:32:23.130099 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 12 18:32:23.130820 master-0 kubenswrapper[29097]: I0312 18:32:23.130349 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 12 18:32:23.130820 master-0 kubenswrapper[29097]: I0312 18:32:23.130498 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-fsl62"
Mar 12 18:32:23.130820 master-0 kubenswrapper[29097]: I0312 18:32:23.130672 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 12 18:32:23.131188 master-0 kubenswrapper[29097]: I0312 18:32:23.131047 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55f4db4f7b-mxwpx"]
Mar 12 18:32:23.131314 master-0 kubenswrapper[29097]: I0312 18:32:23.131289 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Mar 12 18:32:23.131574 master-0 kubenswrapper[29097]: I0312 18:32:23.131478 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 12 18:32:23.307005 master-0 kubenswrapper[29097]: I0312 18:32:23.306906 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-config\") pod \"console-55f4db4f7b-mxwpx\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " pod="openshift-console/console-55f4db4f7b-mxwpx"
Mar 12 18:32:23.307397 master-0 kubenswrapper[29097]: I0312 18:32:23.307347 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-service-ca\") pod \"console-55f4db4f7b-mxwpx\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " pod="openshift-console/console-55f4db4f7b-mxwpx"
Mar 12 18:32:23.307493 master-0 kubenswrapper[29097]: I0312 18:32:23.307396 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsw2t\" (UniqueName: \"kubernetes.io/projected/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-kube-api-access-zsw2t\") pod \"console-55f4db4f7b-mxwpx\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " pod="openshift-console/console-55f4db4f7b-mxwpx"
Mar 12 18:32:23.307493 master-0 kubenswrapper[29097]: I0312 18:32:23.307443 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-serving-cert\") pod \"console-55f4db4f7b-mxwpx\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " pod="openshift-console/console-55f4db4f7b-mxwpx"
Mar 12 18:32:23.307669 master-0 kubenswrapper[29097]: I0312 18:32:23.307500 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-oauth-config\") pod \"console-55f4db4f7b-mxwpx\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " pod="openshift-console/console-55f4db4f7b-mxwpx"
Mar 12 18:32:23.307669 master-0 kubenswrapper[29097]: I0312 18:32:23.307592 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-oauth-serving-cert\") pod \"console-55f4db4f7b-mxwpx\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " pod="openshift-console/console-55f4db4f7b-mxwpx"
Mar 12 18:32:23.409270 master-0 kubenswrapper[29097]: I0312 18:32:23.409150 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-config\") pod \"console-55f4db4f7b-mxwpx\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " pod="openshift-console/console-55f4db4f7b-mxwpx"
Mar 12 18:32:23.409270 master-0 kubenswrapper[29097]: I0312 18:32:23.409206 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-service-ca\") pod \"console-55f4db4f7b-mxwpx\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " pod="openshift-console/console-55f4db4f7b-mxwpx"
Mar 12 18:32:23.409270 master-0 kubenswrapper[29097]: I0312 18:32:23.409232 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsw2t\" (UniqueName: \"kubernetes.io/projected/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-kube-api-access-zsw2t\") pod \"console-55f4db4f7b-mxwpx\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " pod="openshift-console/console-55f4db4f7b-mxwpx"
Mar 12 18:32:23.409270 master-0 kubenswrapper[29097]: I0312 18:32:23.409265 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-serving-cert\") pod \"console-55f4db4f7b-mxwpx\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " pod="openshift-console/console-55f4db4f7b-mxwpx"
Mar 12 18:32:23.409635 master-0 kubenswrapper[29097]: I0312 18:32:23.409284 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-oauth-config\") pod \"console-55f4db4f7b-mxwpx\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " pod="openshift-console/console-55f4db4f7b-mxwpx"
Mar 12 18:32:23.409635 master-0 kubenswrapper[29097]: I0312 18:32:23.409328 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-oauth-serving-cert\") pod \"console-55f4db4f7b-mxwpx\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " pod="openshift-console/console-55f4db4f7b-mxwpx"
Mar 12 18:32:23.410035 master-0 kubenswrapper[29097]: I0312 18:32:23.410010 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-service-ca\") pod \"console-55f4db4f7b-mxwpx\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " pod="openshift-console/console-55f4db4f7b-mxwpx"
Mar 12 18:32:23.410487 master-0 kubenswrapper[29097]: I0312 18:32:23.410444 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-config\") pod \"console-55f4db4f7b-mxwpx\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " pod="openshift-console/console-55f4db4f7b-mxwpx"
Mar 12 18:32:23.410614 master-0 kubenswrapper[29097]: I0312 18:32:23.410581 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-oauth-serving-cert\") pod \"console-55f4db4f7b-mxwpx\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " pod="openshift-console/console-55f4db4f7b-mxwpx"
Mar 12 18:32:23.410765 master-0 kubenswrapper[29097]: E0312 18:32:23.410729 29097 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Mar 12 18:32:23.410765 master-0 kubenswrapper[29097]: E0312 18:32:23.410770 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-serving-cert podName:6964176f-5e1b-48ef-8e78-c2a9dbec41c7 nodeName:}" failed. No retries permitted until 2026-03-12 18:32:23.910756668 +0000 UTC m=+183.464736755 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-serving-cert") pod "console-55f4db4f7b-mxwpx" (UID: "6964176f-5e1b-48ef-8e78-c2a9dbec41c7") : secret "console-serving-cert" not found
Mar 12 18:32:23.413905 master-0 kubenswrapper[29097]: I0312 18:32:23.413723 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-oauth-config\") pod \"console-55f4db4f7b-mxwpx\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " pod="openshift-console/console-55f4db4f7b-mxwpx"
Mar 12 18:32:23.499535 master-0 kubenswrapper[29097]: I0312 18:32:23.499143 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsw2t\" (UniqueName: \"kubernetes.io/projected/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-kube-api-access-zsw2t\") pod \"console-55f4db4f7b-mxwpx\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " pod="openshift-console/console-55f4db4f7b-mxwpx"
Mar 12 18:32:23.916721 master-0 kubenswrapper[29097]: I0312 18:32:23.916640 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-serving-cert\") pod \"console-55f4db4f7b-mxwpx\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " pod="openshift-console/console-55f4db4f7b-mxwpx"
Mar 12 18:32:23.916951 master-0 kubenswrapper[29097]: E0312 18:32:23.916872 29097 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Mar 12 18:32:23.917008 master-0 kubenswrapper[29097]: E0312 18:32:23.916964 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-serving-cert podName:6964176f-5e1b-48ef-8e78-c2a9dbec41c7 nodeName:}" failed. No retries permitted until 2026-03-12 18:32:24.916939866 +0000 UTC m=+184.470919973 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-serving-cert") pod "console-55f4db4f7b-mxwpx" (UID: "6964176f-5e1b-48ef-8e78-c2a9dbec41c7") : secret "console-serving-cert" not found
Mar 12 18:32:24.930582 master-0 kubenswrapper[29097]: I0312 18:32:24.930480 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-serving-cert\") pod \"console-55f4db4f7b-mxwpx\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " pod="openshift-console/console-55f4db4f7b-mxwpx"
Mar 12 18:32:24.934259 master-0 kubenswrapper[29097]: I0312 18:32:24.934204 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-serving-cert\") pod \"console-55f4db4f7b-mxwpx\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " pod="openshift-console/console-55f4db4f7b-mxwpx"
Mar 12 18:32:24.945072 master-0 kubenswrapper[29097]: I0312 18:32:24.944992 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-55f4db4f7b-mxwpx"
Mar 12 18:32:25.335787 master-0 kubenswrapper[29097]: I0312 18:32:25.335702 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/063142d4-eff6-4421-91c9-28225ddbbbf2-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-mhzdw\" (UID: \"063142d4-eff6-4421-91c9-28225ddbbbf2\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-mhzdw"
Mar 12 18:32:25.347650 master-0 kubenswrapper[29097]: I0312 18:32:25.339963 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/063142d4-eff6-4421-91c9-28225ddbbbf2-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-mhzdw\" (UID: \"063142d4-eff6-4421-91c9-28225ddbbbf2\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-mhzdw"
Mar 12 18:32:25.384369 master-0 kubenswrapper[29097]: I0312 18:32:25.384309 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55f4db4f7b-mxwpx"]
Mar 12 18:32:25.385856 master-0 kubenswrapper[29097]: I0312 18:32:25.385816 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cbd49d755-mhzdw"
Mar 12 18:32:25.399802 master-0 kubenswrapper[29097]: W0312 18:32:25.399525 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6964176f_5e1b_48ef_8e78_c2a9dbec41c7.slice/crio-c495aa171b451a8e6d054d3a4ff74b78943d1ac1ea0a676ba7cb71d613038a3c WatchSource:0}: Error finding container c495aa171b451a8e6d054d3a4ff74b78943d1ac1ea0a676ba7cb71d613038a3c: Status 404 returned error can't find the container with id c495aa171b451a8e6d054d3a4ff74b78943d1ac1ea0a676ba7cb71d613038a3c
Mar 12 18:32:25.793792 master-0 kubenswrapper[29097]: I0312 18:32:25.793745 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55f4db4f7b-mxwpx" event={"ID":"6964176f-5e1b-48ef-8e78-c2a9dbec41c7","Type":"ContainerStarted","Data":"c495aa171b451a8e6d054d3a4ff74b78943d1ac1ea0a676ba7cb71d613038a3c"}
Mar 12 18:32:25.902155 master-0 kubenswrapper[29097]: I0312 18:32:25.902075 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-mhzdw"]
Mar 12 18:32:25.910805 master-0 kubenswrapper[29097]: W0312 18:32:25.910734 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod063142d4_eff6_4421_91c9_28225ddbbbf2.slice/crio-2aa6e568a40185a84717f286dabc4323b5d09a758482ed28c82c4ae9ecf54fc7 WatchSource:0}: Error finding container 2aa6e568a40185a84717f286dabc4323b5d09a758482ed28c82c4ae9ecf54fc7: Status 404 returned error can't find the container with id 2aa6e568a40185a84717f286dabc4323b5d09a758482ed28c82c4ae9ecf54fc7
Mar 12 18:32:26.801934 master-0 kubenswrapper[29097]: I0312 18:32:26.801868 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cbd49d755-mhzdw" event={"ID":"063142d4-eff6-4421-91c9-28225ddbbbf2","Type":"ContainerStarted","Data":"2aa6e568a40185a84717f286dabc4323b5d09a758482ed28c82c4ae9ecf54fc7"}
Mar 12 18:32:27.818294 master-0 kubenswrapper[29097]: I0312 18:32:27.818225 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cbd49d755-mhzdw" event={"ID":"063142d4-eff6-4421-91c9-28225ddbbbf2","Type":"ContainerStarted","Data":"0abc2994861a88eb40e658b17c901a3f8fe7c2dfa9f078e796fe65b323a717fa"}
Mar 12 18:32:27.837493 master-0 kubenswrapper[29097]: I0312 18:32:27.837332 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-5cbd49d755-mhzdw" podStartSLOduration=9.291391873 podStartE2EDuration="10.837295254s" podCreationTimestamp="2026-03-12 18:32:17 +0000 UTC" firstStartedPulling="2026-03-12 18:32:25.913742985 +0000 UTC m=+185.467723082" lastFinishedPulling="2026-03-12 18:32:27.459646366 +0000 UTC m=+187.013626463" observedRunningTime="2026-03-12 18:32:27.837071668 +0000 UTC m=+187.391051765" watchObservedRunningTime="2026-03-12 18:32:27.837295254 +0000 UTC m=+187.391275351"
Mar 12 18:32:29.838296 master-0 kubenswrapper[29097]: I0312 18:32:29.838234 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55f4db4f7b-mxwpx" event={"ID":"6964176f-5e1b-48ef-8e78-c2a9dbec41c7","Type":"ContainerStarted","Data":"dfc09a1058499eca45ac5db2b0c08d2f55021485d4f6f60ab532e35826c76d36"}
Mar 12 18:32:30.850763 master-0 kubenswrapper[29097]: I0312 18:32:30.849948 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-55f4db4f7b-mxwpx_6964176f-5e1b-48ef-8e78-c2a9dbec41c7/console/0.log"
Mar 12 18:32:30.850763 master-0 kubenswrapper[29097]: I0312 18:32:30.850052 29097 generic.go:334] "Generic (PLEG): container finished" podID="6964176f-5e1b-48ef-8e78-c2a9dbec41c7" containerID="dfc09a1058499eca45ac5db2b0c08d2f55021485d4f6f60ab532e35826c76d36" exitCode=255
Mar 12 18:32:30.850763 master-0 kubenswrapper[29097]: I0312 18:32:30.850091 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55f4db4f7b-mxwpx" event={"ID":"6964176f-5e1b-48ef-8e78-c2a9dbec41c7","Type":"ContainerDied","Data":"dfc09a1058499eca45ac5db2b0c08d2f55021485d4f6f60ab532e35826c76d36"}
Mar 12 18:32:30.850763 master-0 kubenswrapper[29097]: I0312 18:32:30.850620 29097 scope.go:117] "RemoveContainer" containerID="dfc09a1058499eca45ac5db2b0c08d2f55021485d4f6f60ab532e35826c76d36"
Mar 12 18:32:31.858678 master-0 kubenswrapper[29097]: I0312 18:32:31.857425 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-55f4db4f7b-mxwpx_6964176f-5e1b-48ef-8e78-c2a9dbec41c7/console/1.log"
Mar 12 18:32:31.858678 master-0 kubenswrapper[29097]: I0312 18:32:31.858046 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-55f4db4f7b-mxwpx_6964176f-5e1b-48ef-8e78-c2a9dbec41c7/console/0.log"
Mar 12 18:32:31.858678 master-0 kubenswrapper[29097]: I0312 18:32:31.858083 29097 generic.go:334] "Generic (PLEG): container finished" podID="6964176f-5e1b-48ef-8e78-c2a9dbec41c7" containerID="d267b4868c6d754bb6486f47b195e4410aa62164b0979678d635819011fb6a67" exitCode=255
Mar 12 18:32:31.858678 master-0 kubenswrapper[29097]: I0312 18:32:31.858119 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55f4db4f7b-mxwpx" event={"ID":"6964176f-5e1b-48ef-8e78-c2a9dbec41c7","Type":"ContainerDied","Data":"d267b4868c6d754bb6486f47b195e4410aa62164b0979678d635819011fb6a67"}
Mar 12 18:32:31.858678 master-0 kubenswrapper[29097]: I0312 18:32:31.858156 29097 scope.go:117] "RemoveContainer" containerID="dfc09a1058499eca45ac5db2b0c08d2f55021485d4f6f60ab532e35826c76d36"
Mar 12 18:32:31.859739 master-0 kubenswrapper[29097]: I0312 18:32:31.858795 29097 scope.go:117] "RemoveContainer" containerID="d267b4868c6d754bb6486f47b195e4410aa62164b0979678d635819011fb6a67"
Mar 12 18:32:31.859739 master-0 kubenswrapper[29097]: E0312 18:32:31.858987 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console pod=console-55f4db4f7b-mxwpx_openshift-console(6964176f-5e1b-48ef-8e78-c2a9dbec41c7)\"" pod="openshift-console/console-55f4db4f7b-mxwpx" podUID="6964176f-5e1b-48ef-8e78-c2a9dbec41c7"
Mar 12 18:32:31.927077 master-0 kubenswrapper[29097]: I0312 18:32:31.926902 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-76849948fc-cxrj8"]
Mar 12 18:32:31.929505 master-0 kubenswrapper[29097]: I0312 18:32:31.928806 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76849948fc-cxrj8"
Mar 12 18:32:31.943318 master-0 kubenswrapper[29097]: I0312 18:32:31.941386 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76849948fc-cxrj8"]
Mar 12 18:32:31.943318 master-0 kubenswrapper[29097]: I0312 18:32:31.941672 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 12 18:32:32.040558 master-0 kubenswrapper[29097]: I0312 18:32:32.040407 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgdtp\" (UniqueName: \"kubernetes.io/projected/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-kube-api-access-jgdtp\") pod \"console-76849948fc-cxrj8\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " pod="openshift-console/console-76849948fc-cxrj8"
Mar 12 18:32:32.041048 master-0 kubenswrapper[29097]: I0312 18:32:32.040975 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-oauth-serving-cert\") pod \"console-76849948fc-cxrj8\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " pod="openshift-console/console-76849948fc-cxrj8"
Mar 12 18:32:32.041197 master-0 kubenswrapper[29097]: I0312 18:32:32.041168 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-console-config\") pod \"console-76849948fc-cxrj8\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " pod="openshift-console/console-76849948fc-cxrj8"
Mar 12 18:32:32.043222 master-0 kubenswrapper[29097]: I0312 18:32:32.042853 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-service-ca\") pod \"console-76849948fc-cxrj8\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " pod="openshift-console/console-76849948fc-cxrj8"
Mar 12 18:32:32.043222 master-0 kubenswrapper[29097]: I0312 18:32:32.043194 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-console-serving-cert\") pod \"console-76849948fc-cxrj8\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " pod="openshift-console/console-76849948fc-cxrj8"
Mar 12 18:32:32.043754 master-0 kubenswrapper[29097]: I0312 18:32:32.043506 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-console-oauth-config\") pod \"console-76849948fc-cxrj8\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " pod="openshift-console/console-76849948fc-cxrj8"
Mar 12 18:32:32.043861 master-0 kubenswrapper[29097]: I0312 18:32:32.043803 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-trusted-ca-bundle\") pod \"console-76849948fc-cxrj8\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " pod="openshift-console/console-76849948fc-cxrj8"
Mar 12 18:32:32.150067 master-0 kubenswrapper[29097]: I0312 18:32:32.149278 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-service-ca\") pod \"console-76849948fc-cxrj8\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " pod="openshift-console/console-76849948fc-cxrj8"
Mar 12 18:32:32.150067 master-0 kubenswrapper[29097]: I0312 18:32:32.149348 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-console-serving-cert\") pod \"console-76849948fc-cxrj8\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " pod="openshift-console/console-76849948fc-cxrj8"
Mar 12 18:32:32.150067 master-0 kubenswrapper[29097]: I0312 18:32:32.149411 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-console-oauth-config\") pod \"console-76849948fc-cxrj8\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " pod="openshift-console/console-76849948fc-cxrj8"
Mar 12 18:32:32.150067 master-0 kubenswrapper[29097]: I0312 18:32:32.149448 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-trusted-ca-bundle\") pod \"console-76849948fc-cxrj8\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " pod="openshift-console/console-76849948fc-cxrj8"
Mar 12 18:32:32.150067
master-0 kubenswrapper[29097]: I0312 18:32:32.149494 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgdtp\" (UniqueName: \"kubernetes.io/projected/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-kube-api-access-jgdtp\") pod \"console-76849948fc-cxrj8\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " pod="openshift-console/console-76849948fc-cxrj8" Mar 12 18:32:32.150067 master-0 kubenswrapper[29097]: I0312 18:32:32.149624 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-oauth-serving-cert\") pod \"console-76849948fc-cxrj8\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " pod="openshift-console/console-76849948fc-cxrj8" Mar 12 18:32:32.150067 master-0 kubenswrapper[29097]: I0312 18:32:32.149651 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-console-config\") pod \"console-76849948fc-cxrj8\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " pod="openshift-console/console-76849948fc-cxrj8" Mar 12 18:32:32.150908 master-0 kubenswrapper[29097]: I0312 18:32:32.150723 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-console-config\") pod \"console-76849948fc-cxrj8\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " pod="openshift-console/console-76849948fc-cxrj8" Mar 12 18:32:32.151177 master-0 kubenswrapper[29097]: I0312 18:32:32.151120 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-service-ca\") pod \"console-76849948fc-cxrj8\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " pod="openshift-console/console-76849948fc-cxrj8" Mar 12 
18:32:32.152442 master-0 kubenswrapper[29097]: I0312 18:32:32.152338 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-oauth-serving-cert\") pod \"console-76849948fc-cxrj8\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " pod="openshift-console/console-76849948fc-cxrj8" Mar 12 18:32:32.152690 master-0 kubenswrapper[29097]: I0312 18:32:32.152486 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-trusted-ca-bundle\") pod \"console-76849948fc-cxrj8\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " pod="openshift-console/console-76849948fc-cxrj8" Mar 12 18:32:32.154904 master-0 kubenswrapper[29097]: I0312 18:32:32.154815 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-console-oauth-config\") pod \"console-76849948fc-cxrj8\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " pod="openshift-console/console-76849948fc-cxrj8" Mar 12 18:32:32.156472 master-0 kubenswrapper[29097]: I0312 18:32:32.156316 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-console-serving-cert\") pod \"console-76849948fc-cxrj8\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " pod="openshift-console/console-76849948fc-cxrj8" Mar 12 18:32:32.169838 master-0 kubenswrapper[29097]: I0312 18:32:32.169780 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgdtp\" (UniqueName: \"kubernetes.io/projected/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-kube-api-access-jgdtp\") pod \"console-76849948fc-cxrj8\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " 
pod="openshift-console/console-76849948fc-cxrj8" Mar 12 18:32:32.244966 master-0 kubenswrapper[29097]: I0312 18:32:32.244670 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76849948fc-cxrj8" Mar 12 18:32:32.677065 master-0 kubenswrapper[29097]: I0312 18:32:32.676978 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76849948fc-cxrj8"] Mar 12 18:32:32.870001 master-0 kubenswrapper[29097]: I0312 18:32:32.869928 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76849948fc-cxrj8" event={"ID":"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e","Type":"ContainerStarted","Data":"845e04bdb8e41ff4848edda2b77c7487ff66ba92fb2a57df0b98abcb66fe7a92"} Mar 12 18:32:32.870495 master-0 kubenswrapper[29097]: I0312 18:32:32.870007 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76849948fc-cxrj8" event={"ID":"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e","Type":"ContainerStarted","Data":"dc69780e36aafc28db96b0b9cc22b36808c549323f580ffad1662055dfa68b47"} Mar 12 18:32:32.873336 master-0 kubenswrapper[29097]: I0312 18:32:32.873306 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-55f4db4f7b-mxwpx_6964176f-5e1b-48ef-8e78-c2a9dbec41c7/console/1.log" Mar 12 18:32:32.873702 master-0 kubenswrapper[29097]: I0312 18:32:32.873677 29097 scope.go:117] "RemoveContainer" containerID="d267b4868c6d754bb6486f47b195e4410aa62164b0979678d635819011fb6a67" Mar 12 18:32:32.873870 master-0 kubenswrapper[29097]: E0312 18:32:32.873848 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console pod=console-55f4db4f7b-mxwpx_openshift-console(6964176f-5e1b-48ef-8e78-c2a9dbec41c7)\"" pod="openshift-console/console-55f4db4f7b-mxwpx" podUID="6964176f-5e1b-48ef-8e78-c2a9dbec41c7" Mar 12 18:32:32.891334 master-0 
kubenswrapper[29097]: I0312 18:32:32.891283 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-76849948fc-cxrj8" podStartSLOduration=1.8912640349999998 podStartE2EDuration="1.891264035s" podCreationTimestamp="2026-03-12 18:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:32:32.891200384 +0000 UTC m=+192.445180491" watchObservedRunningTime="2026-03-12 18:32:32.891264035 +0000 UTC m=+192.445244142" Mar 12 18:32:33.913853 master-0 kubenswrapper[29097]: I0312 18:32:33.913817 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76849948fc-cxrj8_ee109d34-abfd-4bb8-93e6-9e9c28ccec0e/console/0.log" Mar 12 18:32:33.914476 master-0 kubenswrapper[29097]: I0312 18:32:33.913867 29097 generic.go:334] "Generic (PLEG): container finished" podID="ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" containerID="845e04bdb8e41ff4848edda2b77c7487ff66ba92fb2a57df0b98abcb66fe7a92" exitCode=255 Mar 12 18:32:33.914476 master-0 kubenswrapper[29097]: I0312 18:32:33.913908 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76849948fc-cxrj8" event={"ID":"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e","Type":"ContainerDied","Data":"845e04bdb8e41ff4848edda2b77c7487ff66ba92fb2a57df0b98abcb66fe7a92"} Mar 12 18:32:33.914476 master-0 kubenswrapper[29097]: I0312 18:32:33.914370 29097 scope.go:117] "RemoveContainer" containerID="845e04bdb8e41ff4848edda2b77c7487ff66ba92fb2a57df0b98abcb66fe7a92" Mar 12 18:32:34.922902 master-0 kubenswrapper[29097]: I0312 18:32:34.922829 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76849948fc-cxrj8_ee109d34-abfd-4bb8-93e6-9e9c28ccec0e/console/1.log" Mar 12 18:32:34.923784 master-0 kubenswrapper[29097]: I0312 18:32:34.923323 29097 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-76849948fc-cxrj8_ee109d34-abfd-4bb8-93e6-9e9c28ccec0e/console/0.log" Mar 12 18:32:34.923784 master-0 kubenswrapper[29097]: I0312 18:32:34.923366 29097 generic.go:334] "Generic (PLEG): container finished" podID="ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" containerID="94d8f6ff9e10caa5d7657d4f9070aec6d8cf9e794f65572dc75b1ee126fd32c9" exitCode=255 Mar 12 18:32:34.923784 master-0 kubenswrapper[29097]: I0312 18:32:34.923391 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76849948fc-cxrj8" event={"ID":"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e","Type":"ContainerDied","Data":"94d8f6ff9e10caa5d7657d4f9070aec6d8cf9e794f65572dc75b1ee126fd32c9"} Mar 12 18:32:34.923784 master-0 kubenswrapper[29097]: I0312 18:32:34.923421 29097 scope.go:117] "RemoveContainer" containerID="845e04bdb8e41ff4848edda2b77c7487ff66ba92fb2a57df0b98abcb66fe7a92" Mar 12 18:32:34.924033 master-0 kubenswrapper[29097]: I0312 18:32:34.923798 29097 scope.go:117] "RemoveContainer" containerID="94d8f6ff9e10caa5d7657d4f9070aec6d8cf9e794f65572dc75b1ee126fd32c9" Mar 12 18:32:34.924033 master-0 kubenswrapper[29097]: E0312 18:32:34.923954 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console pod=console-76849948fc-cxrj8_openshift-console(ee109d34-abfd-4bb8-93e6-9e9c28ccec0e)\"" pod="openshift-console/console-76849948fc-cxrj8" podUID="ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" Mar 12 18:32:34.946184 master-0 kubenswrapper[29097]: I0312 18:32:34.945325 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-55f4db4f7b-mxwpx" Mar 12 18:32:34.946184 master-0 kubenswrapper[29097]: I0312 18:32:34.945382 29097 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/console-55f4db4f7b-mxwpx" Mar 12 18:32:34.946184 master-0 
kubenswrapper[29097]: I0312 18:32:34.945401 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-55f4db4f7b-mxwpx" Mar 12 18:32:34.946184 master-0 kubenswrapper[29097]: I0312 18:32:34.945412 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-55f4db4f7b-mxwpx" Mar 12 18:32:34.946184 master-0 kubenswrapper[29097]: I0312 18:32:34.946049 29097 scope.go:117] "RemoveContainer" containerID="d267b4868c6d754bb6486f47b195e4410aa62164b0979678d635819011fb6a67" Mar 12 18:32:34.946677 master-0 kubenswrapper[29097]: E0312 18:32:34.946646 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console pod=console-55f4db4f7b-mxwpx_openshift-console(6964176f-5e1b-48ef-8e78-c2a9dbec41c7)\"" pod="openshift-console/console-55f4db4f7b-mxwpx" podUID="6964176f-5e1b-48ef-8e78-c2a9dbec41c7" Mar 12 18:32:35.931681 master-0 kubenswrapper[29097]: I0312 18:32:35.931631 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76849948fc-cxrj8_ee109d34-abfd-4bb8-93e6-9e9c28ccec0e/console/1.log" Mar 12 18:32:35.932270 master-0 kubenswrapper[29097]: I0312 18:32:35.932212 29097 scope.go:117] "RemoveContainer" containerID="94d8f6ff9e10caa5d7657d4f9070aec6d8cf9e794f65572dc75b1ee126fd32c9" Mar 12 18:32:35.932461 master-0 kubenswrapper[29097]: E0312 18:32:35.932434 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console pod=console-76849948fc-cxrj8_openshift-console(ee109d34-abfd-4bb8-93e6-9e9c28ccec0e)\"" pod="openshift-console/console-76849948fc-cxrj8" podUID="ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" Mar 12 18:32:35.932534 master-0 kubenswrapper[29097]: I0312 18:32:35.932492 29097 scope.go:117] "RemoveContainer" 
containerID="d267b4868c6d754bb6486f47b195e4410aa62164b0979678d635819011fb6a67" Mar 12 18:32:35.932803 master-0 kubenswrapper[29097]: E0312 18:32:35.932781 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console pod=console-55f4db4f7b-mxwpx_openshift-console(6964176f-5e1b-48ef-8e78-c2a9dbec41c7)\"" pod="openshift-console/console-55f4db4f7b-mxwpx" podUID="6964176f-5e1b-48ef-8e78-c2a9dbec41c7" Mar 12 18:32:42.245531 master-0 kubenswrapper[29097]: I0312 18:32:42.245444 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-76849948fc-cxrj8" Mar 12 18:32:42.245531 master-0 kubenswrapper[29097]: I0312 18:32:42.245540 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-76849948fc-cxrj8" Mar 12 18:32:42.245531 master-0 kubenswrapper[29097]: I0312 18:32:42.245563 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-76849948fc-cxrj8" Mar 12 18:32:42.245531 master-0 kubenswrapper[29097]: I0312 18:32:42.245576 29097 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/console-76849948fc-cxrj8" Mar 12 18:32:42.246572 master-0 kubenswrapper[29097]: I0312 18:32:42.246333 29097 scope.go:117] "RemoveContainer" containerID="94d8f6ff9e10caa5d7657d4f9070aec6d8cf9e794f65572dc75b1ee126fd32c9" Mar 12 18:32:42.246662 master-0 kubenswrapper[29097]: E0312 18:32:42.246637 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console pod=console-76849948fc-cxrj8_openshift-console(ee109d34-abfd-4bb8-93e6-9e9c28ccec0e)\"" pod="openshift-console/console-76849948fc-cxrj8" podUID="ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" Mar 12 18:32:42.774487 master-0 
kubenswrapper[29097]: I0312 18:32:42.773840 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-55f4db4f7b-mxwpx"] Mar 12 18:32:42.809858 master-0 kubenswrapper[29097]: I0312 18:32:42.809814 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-68799679d4-tcwkt"] Mar 12 18:32:42.810682 master-0 kubenswrapper[29097]: I0312 18:32:42.810646 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:32:42.842704 master-0 kubenswrapper[29097]: I0312 18:32:42.842600 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-68799679d4-tcwkt"] Mar 12 18:32:42.849843 master-0 kubenswrapper[29097]: I0312 18:32:42.849163 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-service-ca\") pod \"console-68799679d4-tcwkt\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") " pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:32:42.849843 master-0 kubenswrapper[29097]: I0312 18:32:42.849202 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-oauth-serving-cert\") pod \"console-68799679d4-tcwkt\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") " pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:32:42.849843 master-0 kubenswrapper[29097]: I0312 18:32:42.849222 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9rfs\" (UniqueName: \"kubernetes.io/projected/2f8180d8-5283-409a-b36e-4786c8483171-kube-api-access-n9rfs\") pod \"console-68799679d4-tcwkt\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") " pod="openshift-console/console-68799679d4-tcwkt" Mar 12 
18:32:42.849843 master-0 kubenswrapper[29097]: I0312 18:32:42.849243 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-trusted-ca-bundle\") pod \"console-68799679d4-tcwkt\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") " pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:32:42.849843 master-0 kubenswrapper[29097]: I0312 18:32:42.849266 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f8180d8-5283-409a-b36e-4786c8483171-console-serving-cert\") pod \"console-68799679d4-tcwkt\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") " pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:32:42.849843 master-0 kubenswrapper[29097]: I0312 18:32:42.849302 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2f8180d8-5283-409a-b36e-4786c8483171-console-oauth-config\") pod \"console-68799679d4-tcwkt\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") " pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:32:42.849843 master-0 kubenswrapper[29097]: I0312 18:32:42.849322 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-console-config\") pod \"console-68799679d4-tcwkt\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") " pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:32:42.951452 master-0 kubenswrapper[29097]: I0312 18:32:42.951388 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-trusted-ca-bundle\") pod 
\"console-68799679d4-tcwkt\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") " pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:32:42.951637 master-0 kubenswrapper[29097]: I0312 18:32:42.951464 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f8180d8-5283-409a-b36e-4786c8483171-console-serving-cert\") pod \"console-68799679d4-tcwkt\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") " pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:32:42.952816 master-0 kubenswrapper[29097]: I0312 18:32:42.951835 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2f8180d8-5283-409a-b36e-4786c8483171-console-oauth-config\") pod \"console-68799679d4-tcwkt\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") " pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:32:42.952816 master-0 kubenswrapper[29097]: I0312 18:32:42.951883 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-console-config\") pod \"console-68799679d4-tcwkt\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") " pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:32:42.952816 master-0 kubenswrapper[29097]: I0312 18:32:42.951949 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-service-ca\") pod \"console-68799679d4-tcwkt\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") " pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:32:42.952816 master-0 kubenswrapper[29097]: I0312 18:32:42.951976 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-oauth-serving-cert\") pod \"console-68799679d4-tcwkt\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") " pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:32:42.952816 master-0 kubenswrapper[29097]: I0312 18:32:42.951996 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9rfs\" (UniqueName: \"kubernetes.io/projected/2f8180d8-5283-409a-b36e-4786c8483171-kube-api-access-n9rfs\") pod \"console-68799679d4-tcwkt\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") " pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:32:42.953287 master-0 kubenswrapper[29097]: I0312 18:32:42.952938 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-console-config\") pod \"console-68799679d4-tcwkt\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") " pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:32:42.953324 master-0 kubenswrapper[29097]: I0312 18:32:42.953293 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-oauth-serving-cert\") pod \"console-68799679d4-tcwkt\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") " pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:32:42.953667 master-0 kubenswrapper[29097]: I0312 18:32:42.953620 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-trusted-ca-bundle\") pod \"console-68799679d4-tcwkt\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") " pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:32:42.957410 master-0 kubenswrapper[29097]: I0312 18:32:42.957382 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f8180d8-5283-409a-b36e-4786c8483171-console-serving-cert\") pod \"console-68799679d4-tcwkt\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") " pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:32:42.958107 master-0 kubenswrapper[29097]: I0312 18:32:42.958057 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-service-ca\") pod \"console-68799679d4-tcwkt\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") " pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:32:42.959019 master-0 kubenswrapper[29097]: I0312 18:32:42.958864 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2f8180d8-5283-409a-b36e-4786c8483171-console-oauth-config\") pod \"console-68799679d4-tcwkt\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") " pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:32:42.969245 master-0 kubenswrapper[29097]: I0312 18:32:42.969218 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9rfs\" (UniqueName: \"kubernetes.io/projected/2f8180d8-5283-409a-b36e-4786c8483171-kube-api-access-n9rfs\") pod \"console-68799679d4-tcwkt\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") " pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:32:43.162577 master-0 kubenswrapper[29097]: I0312 18:32:43.162491 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:32:54.720977 master-0 kubenswrapper[29097]: I0312 18:32:54.720872 29097 scope.go:117] "RemoveContainer" containerID="94d8f6ff9e10caa5d7657d4f9070aec6d8cf9e794f65572dc75b1ee126fd32c9" Mar 12 18:32:54.841348 master-0 kubenswrapper[29097]: I0312 18:32:54.841074 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-55f4db4f7b-mxwpx_6964176f-5e1b-48ef-8e78-c2a9dbec41c7/console/1.log" Mar 12 18:32:54.841895 master-0 kubenswrapper[29097]: I0312 18:32:54.841855 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-55f4db4f7b-mxwpx" Mar 12 18:32:54.939589 master-0 kubenswrapper[29097]: I0312 18:32:54.939478 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-config\") pod \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " Mar 12 18:32:54.939589 master-0 kubenswrapper[29097]: I0312 18:32:54.939580 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsw2t\" (UniqueName: \"kubernetes.io/projected/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-kube-api-access-zsw2t\") pod \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " Mar 12 18:32:54.940010 master-0 kubenswrapper[29097]: I0312 18:32:54.939667 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-serving-cert\") pod \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " Mar 12 18:32:54.940010 master-0 kubenswrapper[29097]: I0312 18:32:54.939695 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-service-ca\") pod \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " Mar 12 18:32:54.940010 master-0 kubenswrapper[29097]: I0312 18:32:54.939727 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-oauth-serving-cert\") pod \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " Mar 12 18:32:54.940010 master-0 kubenswrapper[29097]: I0312 18:32:54.939773 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-oauth-config\") pod \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\" (UID: \"6964176f-5e1b-48ef-8e78-c2a9dbec41c7\") " Mar 12 18:32:54.942465 master-0 kubenswrapper[29097]: I0312 18:32:54.942040 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-service-ca" (OuterVolumeSpecName: "service-ca") pod "6964176f-5e1b-48ef-8e78-c2a9dbec41c7" (UID: "6964176f-5e1b-48ef-8e78-c2a9dbec41c7"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:32:54.942465 master-0 kubenswrapper[29097]: I0312 18:32:54.942252 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-config" (OuterVolumeSpecName: "console-config") pod "6964176f-5e1b-48ef-8e78-c2a9dbec41c7" (UID: "6964176f-5e1b-48ef-8e78-c2a9dbec41c7"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:32:54.942465 master-0 kubenswrapper[29097]: I0312 18:32:54.942428 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6964176f-5e1b-48ef-8e78-c2a9dbec41c7" (UID: "6964176f-5e1b-48ef-8e78-c2a9dbec41c7"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:32:54.948615 master-0 kubenswrapper[29097]: I0312 18:32:54.946137 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6964176f-5e1b-48ef-8e78-c2a9dbec41c7" (UID: "6964176f-5e1b-48ef-8e78-c2a9dbec41c7"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:32:54.950845 master-0 kubenswrapper[29097]: I0312 18:32:54.948935 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6964176f-5e1b-48ef-8e78-c2a9dbec41c7" (UID: "6964176f-5e1b-48ef-8e78-c2a9dbec41c7"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:32:54.951091 master-0 kubenswrapper[29097]: I0312 18:32:54.951053 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-kube-api-access-zsw2t" (OuterVolumeSpecName: "kube-api-access-zsw2t") pod "6964176f-5e1b-48ef-8e78-c2a9dbec41c7" (UID: "6964176f-5e1b-48ef-8e78-c2a9dbec41c7"). InnerVolumeSpecName "kube-api-access-zsw2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:32:55.042271 master-0 kubenswrapper[29097]: I0312 18:32:55.041755 29097 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 18:32:55.042271 master-0 kubenswrapper[29097]: I0312 18:32:55.041791 29097 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 18:32:55.042271 master-0 kubenswrapper[29097]: I0312 18:32:55.041802 29097 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 18:32:55.042271 master-0 kubenswrapper[29097]: I0312 18:32:55.041811 29097 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:32:55.042271 master-0 kubenswrapper[29097]: I0312 18:32:55.041820 29097 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-console-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:32:55.042271 master-0 kubenswrapper[29097]: I0312 18:32:55.041829 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsw2t\" (UniqueName: \"kubernetes.io/projected/6964176f-5e1b-48ef-8e78-c2a9dbec41c7-kube-api-access-zsw2t\") on node \"master-0\" DevicePath \"\"" Mar 12 18:32:55.097952 master-0 kubenswrapper[29097]: I0312 18:32:55.097912 29097 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-55f4db4f7b-mxwpx_6964176f-5e1b-48ef-8e78-c2a9dbec41c7/console/1.log" Mar 12 18:32:55.098092 master-0 kubenswrapper[29097]: I0312 18:32:55.098059 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-55f4db4f7b-mxwpx" Mar 12 18:32:55.098092 master-0 kubenswrapper[29097]: I0312 18:32:55.098046 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55f4db4f7b-mxwpx" event={"ID":"6964176f-5e1b-48ef-8e78-c2a9dbec41c7","Type":"ContainerDied","Data":"c495aa171b451a8e6d054d3a4ff74b78943d1ac1ea0a676ba7cb71d613038a3c"} Mar 12 18:32:55.098194 master-0 kubenswrapper[29097]: I0312 18:32:55.098172 29097 scope.go:117] "RemoveContainer" containerID="d267b4868c6d754bb6486f47b195e4410aa62164b0979678d635819011fb6a67" Mar 12 18:32:55.101895 master-0 kubenswrapper[29097]: I0312 18:32:55.101850 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76849948fc-cxrj8_ee109d34-abfd-4bb8-93e6-9e9c28ccec0e/console/1.log" Mar 12 18:32:55.101989 master-0 kubenswrapper[29097]: I0312 18:32:55.101914 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76849948fc-cxrj8" event={"ID":"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e","Type":"ContainerStarted","Data":"0f35f1e0bdd744c9b268ffd9fa5b41e535ae26cb0595d5c5018d98bc80b9d9d8"} Mar 12 18:32:55.144894 master-0 kubenswrapper[29097]: I0312 18:32:55.144242 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-55f4db4f7b-mxwpx"] Mar 12 18:32:55.151214 master-0 kubenswrapper[29097]: I0312 18:32:55.151159 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-55f4db4f7b-mxwpx"] Mar 12 18:32:55.248327 master-0 kubenswrapper[29097]: I0312 18:32:55.248270 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-68799679d4-tcwkt"] Mar 12 18:32:55.249034 master-0 
kubenswrapper[29097]: W0312 18:32:55.248978 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f8180d8_5283_409a_b36e_4786c8483171.slice/crio-16748130ffd70bebb1c38ebc6df15f662f6ac8e77eb06b99d5842eb185edd374 WatchSource:0}: Error finding container 16748130ffd70bebb1c38ebc6df15f662f6ac8e77eb06b99d5842eb185edd374: Status 404 returned error can't find the container with id 16748130ffd70bebb1c38ebc6df15f662f6ac8e77eb06b99d5842eb185edd374 Mar 12 18:32:56.113347 master-0 kubenswrapper[29097]: I0312 18:32:56.113257 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-84f57b9877-lsc92" event={"ID":"60ba51da-3daf-4608-9269-b10211a184e9","Type":"ContainerStarted","Data":"f93afb5f2b76fc3bb8514bcac56c00bc09236e45db0fa668fc915390e210d107"} Mar 12 18:32:56.113347 master-0 kubenswrapper[29097]: I0312 18:32:56.113345 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-84f57b9877-lsc92" Mar 12 18:32:56.115093 master-0 kubenswrapper[29097]: I0312 18:32:56.115037 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68799679d4-tcwkt" event={"ID":"2f8180d8-5283-409a-b36e-4786c8483171","Type":"ContainerStarted","Data":"92d0d3642838b699586062a9bd0f4f5d1078e1604de639f4aa04e3d8c53b3dfe"} Mar 12 18:32:56.115202 master-0 kubenswrapper[29097]: I0312 18:32:56.115097 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68799679d4-tcwkt" event={"ID":"2f8180d8-5283-409a-b36e-4786c8483171","Type":"ContainerStarted","Data":"16748130ffd70bebb1c38ebc6df15f662f6ac8e77eb06b99d5842eb185edd374"} Mar 12 18:32:56.117032 master-0 kubenswrapper[29097]: I0312 18:32:56.116977 29097 patch_prober.go:28] interesting pod/downloads-84f57b9877-lsc92 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.94:8080/\": dial 
tcp 10.128.0.94:8080: connect: connection refused" start-of-body= Mar 12 18:32:56.117156 master-0 kubenswrapper[29097]: I0312 18:32:56.117048 29097 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-lsc92" podUID="60ba51da-3daf-4608-9269-b10211a184e9" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.94:8080/\": dial tcp 10.128.0.94:8080: connect: connection refused" Mar 12 18:32:56.211002 master-0 kubenswrapper[29097]: I0312 18:32:56.210740 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-84f57b9877-lsc92" podStartSLOduration=2.058364073 podStartE2EDuration="39.210720396s" podCreationTimestamp="2026-03-12 18:32:17 +0000 UTC" firstStartedPulling="2026-03-12 18:32:17.92709917 +0000 UTC m=+177.481079267" lastFinishedPulling="2026-03-12 18:32:55.079455473 +0000 UTC m=+214.633435590" observedRunningTime="2026-03-12 18:32:56.204268695 +0000 UTC m=+215.758248802" watchObservedRunningTime="2026-03-12 18:32:56.210720396 +0000 UTC m=+215.764700503" Mar 12 18:32:56.735051 master-0 kubenswrapper[29097]: I0312 18:32:56.734951 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6964176f-5e1b-48ef-8e78-c2a9dbec41c7" path="/var/lib/kubelet/pods/6964176f-5e1b-48ef-8e78-c2a9dbec41c7/volumes" Mar 12 18:32:56.770899 master-0 kubenswrapper[29097]: I0312 18:32:56.770775 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-68799679d4-tcwkt" podStartSLOduration=14.770735966 podStartE2EDuration="14.770735966s" podCreationTimestamp="2026-03-12 18:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:32:56.76293771 +0000 UTC m=+216.316917827" watchObservedRunningTime="2026-03-12 18:32:56.770735966 +0000 UTC m=+216.324716103" Mar 12 18:32:57.124488 master-0 kubenswrapper[29097]: 
I0312 18:32:57.124407 29097 patch_prober.go:28] interesting pod/downloads-84f57b9877-lsc92 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.94:8080/\": dial tcp 10.128.0.94:8080: connect: connection refused" start-of-body= Mar 12 18:32:57.125103 master-0 kubenswrapper[29097]: I0312 18:32:57.124562 29097 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-lsc92" podUID="60ba51da-3daf-4608-9269-b10211a184e9" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.94:8080/\": dial tcp 10.128.0.94:8080: connect: connection refused" Mar 12 18:32:57.540535 master-0 kubenswrapper[29097]: I0312 18:32:57.539476 29097 patch_prober.go:28] interesting pod/downloads-84f57b9877-lsc92 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.128.0.94:8080/\": dial tcp 10.128.0.94:8080: connect: connection refused" start-of-body= Mar 12 18:32:57.540535 master-0 kubenswrapper[29097]: I0312 18:32:57.539563 29097 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-84f57b9877-lsc92" podUID="60ba51da-3daf-4608-9269-b10211a184e9" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.94:8080/\": dial tcp 10.128.0.94:8080: connect: connection refused" Mar 12 18:32:57.540535 master-0 kubenswrapper[29097]: I0312 18:32:57.539640 29097 patch_prober.go:28] interesting pod/downloads-84f57b9877-lsc92 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.94:8080/\": dial tcp 10.128.0.94:8080: connect: connection refused" start-of-body= Mar 12 18:32:57.540535 master-0 kubenswrapper[29097]: I0312 18:32:57.539655 29097 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-lsc92" podUID="60ba51da-3daf-4608-9269-b10211a184e9" 
containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.94:8080/\": dial tcp 10.128.0.94:8080: connect: connection refused" Mar 12 18:33:02.245391 master-0 kubenswrapper[29097]: I0312 18:33:02.245295 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-76849948fc-cxrj8" Mar 12 18:33:02.246265 master-0 kubenswrapper[29097]: I0312 18:33:02.245679 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-76849948fc-cxrj8" Mar 12 18:33:02.247110 master-0 kubenswrapper[29097]: I0312 18:33:02.247050 29097 patch_prober.go:28] interesting pod/console-76849948fc-cxrj8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 12 18:33:02.247265 master-0 kubenswrapper[29097]: I0312 18:33:02.247149 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76849948fc-cxrj8" podUID="ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 12 18:33:03.165600 master-0 kubenswrapper[29097]: I0312 18:33:03.165010 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:33:03.165600 master-0 kubenswrapper[29097]: I0312 18:33:03.165143 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-68799679d4-tcwkt" Mar 12 18:33:03.177888 master-0 kubenswrapper[29097]: I0312 18:33:03.177794 29097 patch_prober.go:28] interesting pod/console-68799679d4-tcwkt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" 
start-of-body= Mar 12 18:33:03.178218 master-0 kubenswrapper[29097]: I0312 18:33:03.177921 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68799679d4-tcwkt" podUID="2f8180d8-5283-409a-b36e-4786c8483171" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" Mar 12 18:33:07.547428 master-0 kubenswrapper[29097]: I0312 18:33:07.547238 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-84f57b9877-lsc92" Mar 12 18:33:12.246377 master-0 kubenswrapper[29097]: I0312 18:33:12.246296 29097 patch_prober.go:28] interesting pod/console-76849948fc-cxrj8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 12 18:33:12.247234 master-0 kubenswrapper[29097]: I0312 18:33:12.246396 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76849948fc-cxrj8" podUID="ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 12 18:33:13.163905 master-0 kubenswrapper[29097]: I0312 18:33:13.163829 29097 patch_prober.go:28] interesting pod/console-68799679d4-tcwkt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body= Mar 12 18:33:13.164145 master-0 kubenswrapper[29097]: I0312 18:33:13.163910 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68799679d4-tcwkt" podUID="2f8180d8-5283-409a-b36e-4786c8483171" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: 
connect: connection refused" Mar 12 18:33:21.190657 master-0 kubenswrapper[29097]: E0312 18:33:21.190123 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547 Mar 12 18:33:22.245437 master-0 kubenswrapper[29097]: I0312 18:33:22.245375 29097 patch_prober.go:28] interesting pod/console-76849948fc-cxrj8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 12 18:33:22.246045 master-0 kubenswrapper[29097]: I0312 18:33:22.245458 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76849948fc-cxrj8" podUID="ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 12 18:33:23.163972 master-0 kubenswrapper[29097]: I0312 18:33:23.163896 29097 patch_prober.go:28] interesting pod/console-68799679d4-tcwkt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body= Mar 12 18:33:23.163972 master-0 kubenswrapper[29097]: I0312 18:33:23.163958 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68799679d4-tcwkt" podUID="2f8180d8-5283-409a-b36e-4786c8483171" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" Mar 12 
18:33:32.084671 master-0 kubenswrapper[29097]: I0312 18:33:32.084613 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 12 18:33:32.085495 master-0 kubenswrapper[29097]: E0312 18:33:32.084894 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6964176f-5e1b-48ef-8e78-c2a9dbec41c7" containerName="console" Mar 12 18:33:32.085495 master-0 kubenswrapper[29097]: I0312 18:33:32.084910 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="6964176f-5e1b-48ef-8e78-c2a9dbec41c7" containerName="console" Mar 12 18:33:32.085495 master-0 kubenswrapper[29097]: E0312 18:33:32.084939 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6964176f-5e1b-48ef-8e78-c2a9dbec41c7" containerName="console" Mar 12 18:33:32.085495 master-0 kubenswrapper[29097]: I0312 18:33:32.084945 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="6964176f-5e1b-48ef-8e78-c2a9dbec41c7" containerName="console" Mar 12 18:33:32.085495 master-0 kubenswrapper[29097]: I0312 18:33:32.085076 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="6964176f-5e1b-48ef-8e78-c2a9dbec41c7" containerName="console" Mar 12 18:33:32.085495 master-0 kubenswrapper[29097]: I0312 18:33:32.085109 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="6964176f-5e1b-48ef-8e78-c2a9dbec41c7" containerName="console" Mar 12 18:33:32.086816 master-0 kubenswrapper[29097]: I0312 18:33:32.086790 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 12 18:33:32.096231 master-0 kubenswrapper[29097]: I0312 18:33:32.096178 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 12 18:33:32.096579 master-0 kubenswrapper[29097]: I0312 18:33:32.096279 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 12 18:33:32.096579 master-0 kubenswrapper[29097]: I0312 18:33:32.096435 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 12 18:33:32.096668 master-0 kubenswrapper[29097]: I0312 18:33:32.096447 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 12 18:33:32.097124 master-0 kubenswrapper[29097]: I0312 18:33:32.097090 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 12 18:33:32.097242 master-0 kubenswrapper[29097]: I0312 18:33:32.097206 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 12 18:33:32.097532 master-0 kubenswrapper[29097]: I0312 18:33:32.097479 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 12 18:33:32.104067 master-0 kubenswrapper[29097]: I0312 18:33:32.103994 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 12 18:33:32.151797 master-0 kubenswrapper[29097]: I0312 18:33:32.150904 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 12 18:33:32.202884 master-0 kubenswrapper[29097]: I0312 18:33:32.199678 29097 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/221d9bf3-99e3-4397-994b-0bef619f6177-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 18:33:32.202884 master-0 kubenswrapper[29097]: I0312 18:33:32.199741 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/221d9bf3-99e3-4397-994b-0bef619f6177-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 18:33:32.202884 master-0 kubenswrapper[29097]: I0312 18:33:32.199776 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/221d9bf3-99e3-4397-994b-0bef619f6177-web-config\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 18:33:32.202884 master-0 kubenswrapper[29097]: I0312 18:33:32.199809 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/221d9bf3-99e3-4397-994b-0bef619f6177-config-out\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 18:33:32.202884 master-0 kubenswrapper[29097]: I0312 18:33:32.199833 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p7mc\" (UniqueName: \"kubernetes.io/projected/221d9bf3-99e3-4397-994b-0bef619f6177-kube-api-access-7p7mc\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " 
pod="openshift-monitoring/alertmanager-main-0" Mar 12 18:33:32.202884 master-0 kubenswrapper[29097]: I0312 18:33:32.199871 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/221d9bf3-99e3-4397-994b-0bef619f6177-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 18:33:32.202884 master-0 kubenswrapper[29097]: I0312 18:33:32.199898 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/221d9bf3-99e3-4397-994b-0bef619f6177-config-volume\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 18:33:32.202884 master-0 kubenswrapper[29097]: I0312 18:33:32.199927 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/221d9bf3-99e3-4397-994b-0bef619f6177-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 18:33:32.202884 master-0 kubenswrapper[29097]: I0312 18:33:32.199956 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/221d9bf3-99e3-4397-994b-0bef619f6177-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 18:33:32.202884 master-0 kubenswrapper[29097]: I0312 18:33:32.199987 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" 
(UniqueName: \"kubernetes.io/projected/221d9bf3-99e3-4397-994b-0bef619f6177-tls-assets\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 18:33:32.202884 master-0 kubenswrapper[29097]: I0312 18:33:32.200013 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/221d9bf3-99e3-4397-994b-0bef619f6177-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 18:33:32.202884 master-0 kubenswrapper[29097]: I0312 18:33:32.200057 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/221d9bf3-99e3-4397-994b-0bef619f6177-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 18:33:32.246088 master-0 kubenswrapper[29097]: I0312 18:33:32.246044 29097 patch_prober.go:28] interesting pod/console-76849948fc-cxrj8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 12 18:33:32.246363 master-0 kubenswrapper[29097]: I0312 18:33:32.246108 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-76849948fc-cxrj8" podUID="ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 12 18:33:32.301393 master-0 kubenswrapper[29097]: I0312 18:33:32.301335 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/221d9bf3-99e3-4397-994b-0bef619f6177-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 18:33:32.301393 master-0 kubenswrapper[29097]: I0312 18:33:32.301390 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/221d9bf3-99e3-4397-994b-0bef619f6177-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 18:33:32.301677 master-0 kubenswrapper[29097]: I0312 18:33:32.301407 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/221d9bf3-99e3-4397-994b-0bef619f6177-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 18:33:32.301677 master-0 kubenswrapper[29097]: I0312 18:33:32.301426 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/221d9bf3-99e3-4397-994b-0bef619f6177-web-config\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 18:33:32.301677 master-0 kubenswrapper[29097]: I0312 18:33:32.301448 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/221d9bf3-99e3-4397-994b-0bef619f6177-config-out\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 18:33:32.301677 master-0 kubenswrapper[29097]: I0312 18:33:32.301468 29097 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p7mc\" (UniqueName: \"kubernetes.io/projected/221d9bf3-99e3-4397-994b-0bef619f6177-kube-api-access-7p7mc\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 18:33:32.301677 master-0 kubenswrapper[29097]: I0312 18:33:32.301500 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/221d9bf3-99e3-4397-994b-0bef619f6177-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 18:33:32.301677 master-0 kubenswrapper[29097]: I0312 18:33:32.301544 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/221d9bf3-99e3-4397-994b-0bef619f6177-config-volume\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 18:33:32.301677 master-0 kubenswrapper[29097]: I0312 18:33:32.301566 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/221d9bf3-99e3-4397-994b-0bef619f6177-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 18:33:32.301677 master-0 kubenswrapper[29097]: I0312 18:33:32.301587 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/221d9bf3-99e3-4397-994b-0bef619f6177-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " 
pod="openshift-monitoring/alertmanager-main-0"
Mar 12 18:33:32.301677 master-0 kubenswrapper[29097]: I0312 18:33:32.301604 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/221d9bf3-99e3-4397-994b-0bef619f6177-tls-assets\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 18:33:32.301677 master-0 kubenswrapper[29097]: I0312 18:33:32.301619 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/221d9bf3-99e3-4397-994b-0bef619f6177-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 18:33:32.302887 master-0 kubenswrapper[29097]: I0312 18:33:32.302844 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/221d9bf3-99e3-4397-994b-0bef619f6177-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 18:33:32.303813 master-0 kubenswrapper[29097]: I0312 18:33:32.303777 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/221d9bf3-99e3-4397-994b-0bef619f6177-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 18:33:32.304027 master-0 kubenswrapper[29097]: I0312 18:33:32.303992 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/221d9bf3-99e3-4397-994b-0bef619f6177-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 18:33:32.305903 master-0 kubenswrapper[29097]: I0312 18:33:32.305862 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/221d9bf3-99e3-4397-994b-0bef619f6177-tls-assets\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 18:33:32.306694 master-0 kubenswrapper[29097]: I0312 18:33:32.306635 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/221d9bf3-99e3-4397-994b-0bef619f6177-web-config\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 18:33:32.306835 master-0 kubenswrapper[29097]: I0312 18:33:32.306794 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/221d9bf3-99e3-4397-994b-0bef619f6177-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 18:33:32.307648 master-0 kubenswrapper[29097]: I0312 18:33:32.307617 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/221d9bf3-99e3-4397-994b-0bef619f6177-config-out\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 18:33:32.308036 master-0 kubenswrapper[29097]: I0312 18:33:32.307997 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/221d9bf3-99e3-4397-994b-0bef619f6177-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 18:33:32.308477 master-0 kubenswrapper[29097]: I0312 18:33:32.308421 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/221d9bf3-99e3-4397-994b-0bef619f6177-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 18:33:32.308655 master-0 kubenswrapper[29097]: I0312 18:33:32.308616 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/221d9bf3-99e3-4397-994b-0bef619f6177-config-volume\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 18:33:32.308929 master-0 kubenswrapper[29097]: I0312 18:33:32.308885 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/221d9bf3-99e3-4397-994b-0bef619f6177-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 18:33:32.328036 master-0 kubenswrapper[29097]: I0312 18:33:32.326896 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-99547cb8-z9gq2"]
Mar 12 18:33:32.328725 master-0 kubenswrapper[29097]: I0312 18:33:32.328696 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.336641 master-0 kubenswrapper[29097]: I0312 18:33:32.331036 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Mar 12 18:33:32.336641 master-0 kubenswrapper[29097]: I0312 18:33:32.331257 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Mar 12 18:33:32.336641 master-0 kubenswrapper[29097]: I0312 18:33:32.331357 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Mar 12 18:33:32.336641 master-0 kubenswrapper[29097]: I0312 18:33:32.331456 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Mar 12 18:33:32.336641 master-0 kubenswrapper[29097]: I0312 18:33:32.331592 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Mar 12 18:33:32.336641 master-0 kubenswrapper[29097]: I0312 18:33:32.331687 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-4kvgb1ceuoc51"
Mar 12 18:33:32.336641 master-0 kubenswrapper[29097]: I0312 18:33:32.333879 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p7mc\" (UniqueName: \"kubernetes.io/projected/221d9bf3-99e3-4397-994b-0bef619f6177-kube-api-access-7p7mc\") pod \"alertmanager-main-0\" (UID: \"221d9bf3-99e3-4397-994b-0bef619f6177\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 18:33:32.402766 master-0 kubenswrapper[29097]: I0312 18:33:32.402715 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/988abf9f-43bc-4440-865e-b60d248eeaaa-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.403083 master-0 kubenswrapper[29097]: I0312 18:33:32.403060 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/988abf9f-43bc-4440-865e-b60d248eeaaa-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.403189 master-0 kubenswrapper[29097]: I0312 18:33:32.403172 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/988abf9f-43bc-4440-865e-b60d248eeaaa-secret-thanos-querier-tls\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.403494 master-0 kubenswrapper[29097]: I0312 18:33:32.403420 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/988abf9f-43bc-4440-865e-b60d248eeaaa-secret-grpc-tls\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.403616 master-0 kubenswrapper[29097]: I0312 18:33:32.403585 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/988abf9f-43bc-4440-865e-b60d248eeaaa-metrics-client-ca\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.403673 master-0 kubenswrapper[29097]: I0312 18:33:32.403632 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4zwc\" (UniqueName: \"kubernetes.io/projected/988abf9f-43bc-4440-865e-b60d248eeaaa-kube-api-access-k4zwc\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.403760 master-0 kubenswrapper[29097]: I0312 18:33:32.403714 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/988abf9f-43bc-4440-865e-b60d248eeaaa-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.403856 master-0 kubenswrapper[29097]: I0312 18:33:32.403827 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/988abf9f-43bc-4440-865e-b60d248eeaaa-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.404472 master-0 kubenswrapper[29097]: I0312 18:33:32.404421 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-99547cb8-z9gq2"]
Mar 12 18:33:32.417215 master-0 kubenswrapper[29097]: I0312 18:33:32.417170 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 12 18:33:32.508030 master-0 kubenswrapper[29097]: I0312 18:33:32.505416 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/988abf9f-43bc-4440-865e-b60d248eeaaa-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.508030 master-0 kubenswrapper[29097]: I0312 18:33:32.505472 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/988abf9f-43bc-4440-865e-b60d248eeaaa-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.508030 master-0 kubenswrapper[29097]: I0312 18:33:32.505959 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/988abf9f-43bc-4440-865e-b60d248eeaaa-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.508030 master-0 kubenswrapper[29097]: I0312 18:33:32.506186 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/988abf9f-43bc-4440-865e-b60d248eeaaa-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.508030 master-0 kubenswrapper[29097]: I0312 18:33:32.506270 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/988abf9f-43bc-4440-865e-b60d248eeaaa-secret-thanos-querier-tls\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.508030 master-0 kubenswrapper[29097]: I0312 18:33:32.506376 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/988abf9f-43bc-4440-865e-b60d248eeaaa-secret-grpc-tls\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.508030 master-0 kubenswrapper[29097]: I0312 18:33:32.506428 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/988abf9f-43bc-4440-865e-b60d248eeaaa-metrics-client-ca\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.508030 master-0 kubenswrapper[29097]: I0312 18:33:32.506460 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4zwc\" (UniqueName: \"kubernetes.io/projected/988abf9f-43bc-4440-865e-b60d248eeaaa-kube-api-access-k4zwc\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.511925 master-0 kubenswrapper[29097]: I0312 18:33:32.508997 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/988abf9f-43bc-4440-865e-b60d248eeaaa-metrics-client-ca\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.511925 master-0 kubenswrapper[29097]: I0312 18:33:32.510637 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/988abf9f-43bc-4440-865e-b60d248eeaaa-secret-thanos-querier-tls\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.511925 master-0 kubenswrapper[29097]: I0312 18:33:32.510920 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/988abf9f-43bc-4440-865e-b60d248eeaaa-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.511925 master-0 kubenswrapper[29097]: I0312 18:33:32.510999 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/988abf9f-43bc-4440-865e-b60d248eeaaa-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.512526 master-0 kubenswrapper[29097]: I0312 18:33:32.512457 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/988abf9f-43bc-4440-865e-b60d248eeaaa-secret-grpc-tls\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.521542 master-0 kubenswrapper[29097]: I0312 18:33:32.521471 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/988abf9f-43bc-4440-865e-b60d248eeaaa-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.538816 master-0 kubenswrapper[29097]: I0312 18:33:32.536887 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4zwc\" (UniqueName: \"kubernetes.io/projected/988abf9f-43bc-4440-865e-b60d248eeaaa-kube-api-access-k4zwc\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.571130 master-0 kubenswrapper[29097]: I0312 18:33:32.570917 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/988abf9f-43bc-4440-865e-b60d248eeaaa-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-99547cb8-z9gq2\" (UID: \"988abf9f-43bc-4440-865e-b60d248eeaaa\") " pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.679397 master-0 kubenswrapper[29097]: I0312 18:33:32.679269 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:32.874568 master-0 kubenswrapper[29097]: I0312 18:33:32.874268 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 12 18:33:32.877106 master-0 kubenswrapper[29097]: W0312 18:33:32.877002 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod221d9bf3_99e3_4397_994b_0bef619f6177.slice/crio-dc9ea99015e59b710155a5ab1e67e39ca9a4e221b7fef10b8a0045e83ecbd392 WatchSource:0}: Error finding container dc9ea99015e59b710155a5ab1e67e39ca9a4e221b7fef10b8a0045e83ecbd392: Status 404 returned error can't find the container with id dc9ea99015e59b710155a5ab1e67e39ca9a4e221b7fef10b8a0045e83ecbd392
Mar 12 18:33:33.114462 master-0 kubenswrapper[29097]: I0312 18:33:33.114395 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-99547cb8-z9gq2"]
Mar 12 18:33:33.128301 master-0 kubenswrapper[29097]: W0312 18:33:33.128247 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod988abf9f_43bc_4440_865e_b60d248eeaaa.slice/crio-15652ae000a6bd3b979b16dffef8ac80779fdeab52484542e33118de15fac40e WatchSource:0}: Error finding container 15652ae000a6bd3b979b16dffef8ac80779fdeab52484542e33118de15fac40e: Status 404 returned error can't find the container with id 15652ae000a6bd3b979b16dffef8ac80779fdeab52484542e33118de15fac40e
Mar 12 18:33:33.165141 master-0 kubenswrapper[29097]: I0312 18:33:33.164504 29097 patch_prober.go:28] interesting pod/console-68799679d4-tcwkt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body=
Mar 12 18:33:33.165141 master-0 kubenswrapper[29097]: I0312 18:33:33.164628 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68799679d4-tcwkt" podUID="2f8180d8-5283-409a-b36e-4786c8483171" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused"
Mar 12 18:33:33.676105 master-0 kubenswrapper[29097]: I0312 18:33:33.676052 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2" event={"ID":"988abf9f-43bc-4440-865e-b60d248eeaaa","Type":"ContainerStarted","Data":"15652ae000a6bd3b979b16dffef8ac80779fdeab52484542e33118de15fac40e"}
Mar 12 18:33:33.685263 master-0 kubenswrapper[29097]: I0312 18:33:33.685204 29097 generic.go:334] "Generic (PLEG): container finished" podID="221d9bf3-99e3-4397-994b-0bef619f6177" containerID="18267443d47b4d50082e003febeff3438b35d69c9469cb39a348a74d3121fd35" exitCode=0
Mar 12 18:33:33.685263 master-0 kubenswrapper[29097]: I0312 18:33:33.685258 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"221d9bf3-99e3-4397-994b-0bef619f6177","Type":"ContainerDied","Data":"18267443d47b4d50082e003febeff3438b35d69c9469cb39a348a74d3121fd35"}
Mar 12 18:33:33.685499 master-0 kubenswrapper[29097]: I0312 18:33:33.685285 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"221d9bf3-99e3-4397-994b-0bef619f6177","Type":"ContainerStarted","Data":"dc9ea99015e59b710155a5ab1e67e39ca9a4e221b7fef10b8a0045e83ecbd392"}
Mar 12 18:33:33.970389 master-0 kubenswrapper[29097]: I0312 18:33:33.970064 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-799595bb6c-b9xsw"]
Mar 12 18:33:33.974373 master-0 kubenswrapper[29097]: I0312 18:33:33.973986 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.021604 master-0 kubenswrapper[29097]: I0312 18:33:33.978943 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-fvbintakd9ghl"
Mar 12 18:33:34.038690 master-0 kubenswrapper[29097]: I0312 18:33:34.038506 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/c7e1bc75-30e4-418d-a685-70a2a5d80472-secret-metrics-server-tls\") pod \"metrics-server-799595bb6c-b9xsw\" (UID: \"c7e1bc75-30e4-418d-a685-70a2a5d80472\") " pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.038690 master-0 kubenswrapper[29097]: I0312 18:33:34.038622 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv8cz\" (UniqueName: \"kubernetes.io/projected/c7e1bc75-30e4-418d-a685-70a2a5d80472-kube-api-access-rv8cz\") pod \"metrics-server-799595bb6c-b9xsw\" (UID: \"c7e1bc75-30e4-418d-a685-70a2a5d80472\") " pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.038690 master-0 kubenswrapper[29097]: I0312 18:33:34.038654 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c7e1bc75-30e4-418d-a685-70a2a5d80472-secret-metrics-client-certs\") pod \"metrics-server-799595bb6c-b9xsw\" (UID: \"c7e1bc75-30e4-418d-a685-70a2a5d80472\") " pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.038690 master-0 kubenswrapper[29097]: I0312 18:33:34.038685 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/c7e1bc75-30e4-418d-a685-70a2a5d80472-audit-log\") pod \"metrics-server-799595bb6c-b9xsw\" (UID: \"c7e1bc75-30e4-418d-a685-70a2a5d80472\") " pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.039038 master-0 kubenswrapper[29097]: I0312 18:33:34.038742 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7e1bc75-30e4-418d-a685-70a2a5d80472-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-799595bb6c-b9xsw\" (UID: \"c7e1bc75-30e4-418d-a685-70a2a5d80472\") " pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.039038 master-0 kubenswrapper[29097]: I0312 18:33:34.038790 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/c7e1bc75-30e4-418d-a685-70a2a5d80472-metrics-server-audit-profiles\") pod \"metrics-server-799595bb6c-b9xsw\" (UID: \"c7e1bc75-30e4-418d-a685-70a2a5d80472\") " pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.039038 master-0 kubenswrapper[29097]: I0312 18:33:34.038816 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e1bc75-30e4-418d-a685-70a2a5d80472-client-ca-bundle\") pod \"metrics-server-799595bb6c-b9xsw\" (UID: \"c7e1bc75-30e4-418d-a685-70a2a5d80472\") " pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.050253 master-0 kubenswrapper[29097]: I0312 18:33:34.049062 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-5784dff469-l5d64"]
Mar 12 18:33:34.050253 master-0 kubenswrapper[29097]: I0312 18:33:34.049301 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-5784dff469-l5d64" podUID="9f1f60fa-d79d-4f31-b5bf-2ad333151537" containerName="metrics-server" containerID="cri-o://e90331cb678c8e153c33f95cb18612384f7ac4bbc46e3e49ca8de188de41f79a" gracePeriod=170
Mar 12 18:33:34.058395 master-0 kubenswrapper[29097]: I0312 18:33:34.058334 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-799595bb6c-b9xsw"]
Mar 12 18:33:34.140024 master-0 kubenswrapper[29097]: I0312 18:33:34.139978 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/c7e1bc75-30e4-418d-a685-70a2a5d80472-secret-metrics-server-tls\") pod \"metrics-server-799595bb6c-b9xsw\" (UID: \"c7e1bc75-30e4-418d-a685-70a2a5d80472\") " pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.140575 master-0 kubenswrapper[29097]: I0312 18:33:34.140557 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv8cz\" (UniqueName: \"kubernetes.io/projected/c7e1bc75-30e4-418d-a685-70a2a5d80472-kube-api-access-rv8cz\") pod \"metrics-server-799595bb6c-b9xsw\" (UID: \"c7e1bc75-30e4-418d-a685-70a2a5d80472\") " pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.140657 master-0 kubenswrapper[29097]: I0312 18:33:34.140644 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c7e1bc75-30e4-418d-a685-70a2a5d80472-secret-metrics-client-certs\") pod \"metrics-server-799595bb6c-b9xsw\" (UID: \"c7e1bc75-30e4-418d-a685-70a2a5d80472\") " pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.140805 master-0 kubenswrapper[29097]: I0312 18:33:34.140791 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/c7e1bc75-30e4-418d-a685-70a2a5d80472-audit-log\") pod \"metrics-server-799595bb6c-b9xsw\" (UID: \"c7e1bc75-30e4-418d-a685-70a2a5d80472\") " pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.140893 master-0 kubenswrapper[29097]: I0312 18:33:34.140880 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7e1bc75-30e4-418d-a685-70a2a5d80472-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-799595bb6c-b9xsw\" (UID: \"c7e1bc75-30e4-418d-a685-70a2a5d80472\") " pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.140979 master-0 kubenswrapper[29097]: I0312 18:33:34.140967 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/c7e1bc75-30e4-418d-a685-70a2a5d80472-metrics-server-audit-profiles\") pod \"metrics-server-799595bb6c-b9xsw\" (UID: \"c7e1bc75-30e4-418d-a685-70a2a5d80472\") " pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.141404 master-0 kubenswrapper[29097]: I0312 18:33:34.141364 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e1bc75-30e4-418d-a685-70a2a5d80472-client-ca-bundle\") pod \"metrics-server-799595bb6c-b9xsw\" (UID: \"c7e1bc75-30e4-418d-a685-70a2a5d80472\") " pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.141851 master-0 kubenswrapper[29097]: I0312 18:33:34.141807 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/c7e1bc75-30e4-418d-a685-70a2a5d80472-audit-log\") pod \"metrics-server-799595bb6c-b9xsw\" (UID: \"c7e1bc75-30e4-418d-a685-70a2a5d80472\") " pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.141978 master-0 kubenswrapper[29097]: I0312 18:33:34.141942 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7e1bc75-30e4-418d-a685-70a2a5d80472-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-799595bb6c-b9xsw\" (UID: \"c7e1bc75-30e4-418d-a685-70a2a5d80472\") " pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.142681 master-0 kubenswrapper[29097]: I0312 18:33:34.142646 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/c7e1bc75-30e4-418d-a685-70a2a5d80472-metrics-server-audit-profiles\") pod \"metrics-server-799595bb6c-b9xsw\" (UID: \"c7e1bc75-30e4-418d-a685-70a2a5d80472\") " pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.145061 master-0 kubenswrapper[29097]: I0312 18:33:34.145007 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/c7e1bc75-30e4-418d-a685-70a2a5d80472-secret-metrics-server-tls\") pod \"metrics-server-799595bb6c-b9xsw\" (UID: \"c7e1bc75-30e4-418d-a685-70a2a5d80472\") " pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.145540 master-0 kubenswrapper[29097]: I0312 18:33:34.145494 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e1bc75-30e4-418d-a685-70a2a5d80472-client-ca-bundle\") pod \"metrics-server-799595bb6c-b9xsw\" (UID: \"c7e1bc75-30e4-418d-a685-70a2a5d80472\") " pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.147297 master-0 kubenswrapper[29097]: I0312 18:33:34.147252 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c7e1bc75-30e4-418d-a685-70a2a5d80472-secret-metrics-client-certs\") pod \"metrics-server-799595bb6c-b9xsw\" (UID: \"c7e1bc75-30e4-418d-a685-70a2a5d80472\") " pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.159062 master-0 kubenswrapper[29097]: I0312 18:33:34.159020 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv8cz\" (UniqueName: \"kubernetes.io/projected/c7e1bc75-30e4-418d-a685-70a2a5d80472-kube-api-access-rv8cz\") pod \"metrics-server-799595bb6c-b9xsw\" (UID: \"c7e1bc75-30e4-418d-a685-70a2a5d80472\") " pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.349090 master-0 kubenswrapper[29097]: I0312 18:33:34.348622 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:34.564338 master-0 kubenswrapper[29097]: I0312 18:33:34.564268 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-d597fb65b-4ls97"]
Mar 12 18:33:34.566085 master-0 kubenswrapper[29097]: I0312 18:33:34.566055 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97"
Mar 12 18:33:34.568710 master-0 kubenswrapper[29097]: I0312 18:33:34.568680 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls"
Mar 12 18:33:34.568778 master-0 kubenswrapper[29097]: I0312 18:33:34.568714 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Mar 12 18:33:34.568778 master-0 kubenswrapper[29097]: I0312 18:33:34.568734 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Mar 12 18:33:34.568860 master-0 kubenswrapper[29097]: I0312 18:33:34.568844 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Mar 12 18:33:34.569454 master-0 kubenswrapper[29097]: I0312 18:33:34.569435 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Mar 12 18:33:34.572735 master-0 kubenswrapper[29097]: I0312 18:33:34.572708 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-d597fb65b-4ls97"]
Mar 12 18:33:34.578662 master-0 kubenswrapper[29097]: I0312 18:33:34.575930 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Mar 12 18:33:34.660705 master-0 kubenswrapper[29097]: I0312 18:33:34.660585 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/402126ab-fc17-48bc-ab20-7f4d1f6868ee-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97"
Mar 12 18:33:34.660895 master-0 kubenswrapper[29097]: I0312 18:33:34.660721 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/402126ab-fc17-48bc-ab20-7f4d1f6868ee-telemeter-trusted-ca-bundle\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97"
Mar 12 18:33:34.660895 master-0 kubenswrapper[29097]: I0312 18:33:34.660764 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxkff\" (UniqueName: \"kubernetes.io/projected/402126ab-fc17-48bc-ab20-7f4d1f6868ee-kube-api-access-nxkff\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97"
Mar 12 18:33:34.660895 master-0 kubenswrapper[29097]: I0312 18:33:34.660851 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/402126ab-fc17-48bc-ab20-7f4d1f6868ee-telemeter-client-tls\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97"
Mar 12 18:33:34.661104 master-0 kubenswrapper[29097]: I0312 18:33:34.661048 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/402126ab-fc17-48bc-ab20-7f4d1f6868ee-serving-certs-ca-bundle\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97"
Mar 12 18:33:34.661160 master-0 kubenswrapper[29097]: I0312 18:33:34.661106 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/402126ab-fc17-48bc-ab20-7f4d1f6868ee-secret-telemeter-client\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97"
Mar 12 18:33:34.661193 master-0 kubenswrapper[29097]: I0312 18:33:34.661163 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/402126ab-fc17-48bc-ab20-7f4d1f6868ee-metrics-client-ca\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97"
Mar 12 18:33:34.661322 master-0 kubenswrapper[29097]: I0312 18:33:34.661296 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/402126ab-fc17-48bc-ab20-7f4d1f6868ee-federate-client-tls\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97"
Mar 12 18:33:34.762809 master-0 kubenswrapper[29097]: I0312 18:33:34.762755 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/402126ab-fc17-48bc-ab20-7f4d1f6868ee-federate-client-tls\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97"
Mar 12 18:33:34.763055 master-0 kubenswrapper[29097]: I0312 18:33:34.762832 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/402126ab-fc17-48bc-ab20-7f4d1f6868ee-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97"
Mar 12 18:33:34.763055 master-0 kubenswrapper[29097]: I0312 18:33:34.762863 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/402126ab-fc17-48bc-ab20-7f4d1f6868ee-telemeter-trusted-ca-bundle\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97"
Mar 12 18:33:34.763055 master-0 kubenswrapper[29097]: I0312 18:33:34.762887 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxkff\" (UniqueName: \"kubernetes.io/projected/402126ab-fc17-48bc-ab20-7f4d1f6868ee-kube-api-access-nxkff\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") "
pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97" Mar 12 18:33:34.763055 master-0 kubenswrapper[29097]: I0312 18:33:34.762947 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/402126ab-fc17-48bc-ab20-7f4d1f6868ee-telemeter-client-tls\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97" Mar 12 18:33:34.763055 master-0 kubenswrapper[29097]: I0312 18:33:34.763005 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/402126ab-fc17-48bc-ab20-7f4d1f6868ee-serving-certs-ca-bundle\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97" Mar 12 18:33:34.763293 master-0 kubenswrapper[29097]: I0312 18:33:34.763026 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/402126ab-fc17-48bc-ab20-7f4d1f6868ee-secret-telemeter-client\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97" Mar 12 18:33:34.763293 master-0 kubenswrapper[29097]: I0312 18:33:34.763103 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/402126ab-fc17-48bc-ab20-7f4d1f6868ee-metrics-client-ca\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97" Mar 12 18:33:34.764169 master-0 kubenswrapper[29097]: I0312 18:33:34.764144 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/402126ab-fc17-48bc-ab20-7f4d1f6868ee-metrics-client-ca\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97" Mar 12 18:33:34.764296 master-0 kubenswrapper[29097]: I0312 18:33:34.764262 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/402126ab-fc17-48bc-ab20-7f4d1f6868ee-serving-certs-ca-bundle\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97" Mar 12 18:33:34.764993 master-0 kubenswrapper[29097]: I0312 18:33:34.764964 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/402126ab-fc17-48bc-ab20-7f4d1f6868ee-telemeter-trusted-ca-bundle\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97" Mar 12 18:33:34.766918 master-0 kubenswrapper[29097]: I0312 18:33:34.766854 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/402126ab-fc17-48bc-ab20-7f4d1f6868ee-telemeter-client-tls\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97" Mar 12 18:33:34.767819 master-0 kubenswrapper[29097]: I0312 18:33:34.767757 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/402126ab-fc17-48bc-ab20-7f4d1f6868ee-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " 
pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97" Mar 12 18:33:34.768195 master-0 kubenswrapper[29097]: I0312 18:33:34.768159 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/402126ab-fc17-48bc-ab20-7f4d1f6868ee-federate-client-tls\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97" Mar 12 18:33:34.769560 master-0 kubenswrapper[29097]: I0312 18:33:34.769512 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/402126ab-fc17-48bc-ab20-7f4d1f6868ee-secret-telemeter-client\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97" Mar 12 18:33:34.783608 master-0 kubenswrapper[29097]: I0312 18:33:34.783560 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxkff\" (UniqueName: \"kubernetes.io/projected/402126ab-fc17-48bc-ab20-7f4d1f6868ee-kube-api-access-nxkff\") pod \"telemeter-client-d597fb65b-4ls97\" (UID: \"402126ab-fc17-48bc-ab20-7f4d1f6868ee\") " pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97" Mar 12 18:33:34.815378 master-0 kubenswrapper[29097]: I0312 18:33:34.815339 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-799595bb6c-b9xsw"] Mar 12 18:33:34.816126 master-0 kubenswrapper[29097]: W0312 18:33:34.816079 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7e1bc75_30e4_418d_a685_70a2a5d80472.slice/crio-20544741e53fcce04120eacd2e2d26d17c1470a8de3d323778c9986af3335fe7 WatchSource:0}: Error finding container 20544741e53fcce04120eacd2e2d26d17c1470a8de3d323778c9986af3335fe7: Status 404 returned error can't find the 
container with id 20544741e53fcce04120eacd2e2d26d17c1470a8de3d323778c9986af3335fe7 Mar 12 18:33:34.911870 master-0 kubenswrapper[29097]: I0312 18:33:34.911743 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97" Mar 12 18:33:36.202765 master-0 kubenswrapper[29097]: I0312 18:33:36.201971 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw" event={"ID":"c7e1bc75-30e4-418d-a685-70a2a5d80472","Type":"ContainerStarted","Data":"20544741e53fcce04120eacd2e2d26d17c1470a8de3d323778c9986af3335fe7"} Mar 12 18:33:36.504106 master-0 kubenswrapper[29097]: I0312 18:33:36.503998 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-d597fb65b-4ls97"] Mar 12 18:33:36.662723 master-0 kubenswrapper[29097]: W0312 18:33:36.662678 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod402126ab_fc17_48bc_ab20_7f4d1f6868ee.slice/crio-105612e89f3c3f1b59b19777427d60a41efc3ed71baf3e8d18fc662679371ebb WatchSource:0}: Error finding container 105612e89f3c3f1b59b19777427d60a41efc3ed71baf3e8d18fc662679371ebb: Status 404 returned error can't find the container with id 105612e89f3c3f1b59b19777427d60a41efc3ed71baf3e8d18fc662679371ebb Mar 12 18:33:37.214437 master-0 kubenswrapper[29097]: I0312 18:33:37.214363 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97" event={"ID":"402126ab-fc17-48bc-ab20-7f4d1f6868ee","Type":"ContainerStarted","Data":"29f82478b60e954267814664e894ce58c66d20f58a977bc06e20616de1bec466"} Mar 12 18:33:37.214437 master-0 kubenswrapper[29097]: I0312 18:33:37.214428 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97" 
event={"ID":"402126ab-fc17-48bc-ab20-7f4d1f6868ee","Type":"ContainerStarted","Data":"6e734bdd5a8a8688bfa07d70a66e3aa32af347c84a78c8a20e93f28b5a31431e"} Mar 12 18:33:37.214437 master-0 kubenswrapper[29097]: I0312 18:33:37.214442 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97" event={"ID":"402126ab-fc17-48bc-ab20-7f4d1f6868ee","Type":"ContainerStarted","Data":"e15dcb7025c3d9d96a7d781753872514fea729a4bc9d21358a18a55533e1c9c8"} Mar 12 18:33:37.214437 master-0 kubenswrapper[29097]: I0312 18:33:37.214452 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97" event={"ID":"402126ab-fc17-48bc-ab20-7f4d1f6868ee","Type":"ContainerStarted","Data":"105612e89f3c3f1b59b19777427d60a41efc3ed71baf3e8d18fc662679371ebb"} Mar 12 18:33:37.218203 master-0 kubenswrapper[29097]: I0312 18:33:37.218095 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2" event={"ID":"988abf9f-43bc-4440-865e-b60d248eeaaa","Type":"ContainerStarted","Data":"6b856538839f58a0d9db2f5900206d36ebd646ed709dd04fbec76362f9bfaaaa"} Mar 12 18:33:37.218203 master-0 kubenswrapper[29097]: I0312 18:33:37.218138 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2" event={"ID":"988abf9f-43bc-4440-865e-b60d248eeaaa","Type":"ContainerStarted","Data":"dc03c6d8734c371144b0b90a27e73c906c245f96bfa9010cd5870fa4b516f6d5"} Mar 12 18:33:37.218203 master-0 kubenswrapper[29097]: I0312 18:33:37.218147 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2" event={"ID":"988abf9f-43bc-4440-865e-b60d248eeaaa","Type":"ContainerStarted","Data":"76b734e795e560949742973eb45a23da96600944ccd1f49e5986a0bee399b435"} Mar 12 18:33:37.230356 master-0 kubenswrapper[29097]: I0312 18:33:37.230274 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/alertmanager-main-0" event={"ID":"221d9bf3-99e3-4397-994b-0bef619f6177","Type":"ContainerStarted","Data":"5a98c23e1bd22359dc35549837843a1d5a32e71fd095d2addfd76a0ce961eeab"} Mar 12 18:33:37.230356 master-0 kubenswrapper[29097]: I0312 18:33:37.230354 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"221d9bf3-99e3-4397-994b-0bef619f6177","Type":"ContainerStarted","Data":"128e290bfc4110860913f991b1c7f1bc767514413f6777fb85e7336d721d1b20"} Mar 12 18:33:37.230833 master-0 kubenswrapper[29097]: I0312 18:33:37.230369 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"221d9bf3-99e3-4397-994b-0bef619f6177","Type":"ContainerStarted","Data":"775c8aa2d39f98d89cc8c880b179258f56a1524e1ad139f4dcdd15b983912c0b"} Mar 12 18:33:37.237434 master-0 kubenswrapper[29097]: I0312 18:33:37.237342 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw" event={"ID":"c7e1bc75-30e4-418d-a685-70a2a5d80472","Type":"ContainerStarted","Data":"d5ea7b5d019868986017e47fbbe6910075aaaa26be9adf5354a202e1a4f7d12a"} Mar 12 18:33:37.246281 master-0 kubenswrapper[29097]: I0312 18:33:37.245803 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-d597fb65b-4ls97" podStartSLOduration=3.2457789679999998 podStartE2EDuration="3.245778968s" podCreationTimestamp="2026-03-12 18:33:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:33:37.2430445 +0000 UTC m=+256.797024617" watchObservedRunningTime="2026-03-12 18:33:37.245778968 +0000 UTC m=+256.799759065" Mar 12 18:33:37.284209 master-0 kubenswrapper[29097]: I0312 18:33:37.281502 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw" podStartSLOduration=4.281476367 podStartE2EDuration="4.281476367s" podCreationTimestamp="2026-03-12 18:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:33:37.278319909 +0000 UTC m=+256.832300006" watchObservedRunningTime="2026-03-12 18:33:37.281476367 +0000 UTC m=+256.835456474" Mar 12 18:33:37.648165 master-0 kubenswrapper[29097]: I0312 18:33:37.648099 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 12 18:33:37.650612 master-0 kubenswrapper[29097]: I0312 18:33:37.650502 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.653090 master-0 kubenswrapper[29097]: I0312 18:33:37.653016 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 12 18:33:37.653743 master-0 kubenswrapper[29097]: I0312 18:33:37.653711 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 12 18:33:37.653869 master-0 kubenswrapper[29097]: I0312 18:33:37.653834 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 12 18:33:37.653984 master-0 kubenswrapper[29097]: I0312 18:33:37.653951 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 12 18:33:37.655939 master-0 kubenswrapper[29097]: I0312 18:33:37.655863 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 12 18:33:37.656163 master-0 kubenswrapper[29097]: I0312 18:33:37.656112 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" 
Mar 12 18:33:37.656861 master-0 kubenswrapper[29097]: I0312 18:33:37.656814 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-270bttnd3r3m0" Mar 12 18:33:37.656995 master-0 kubenswrapper[29097]: I0312 18:33:37.656888 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 12 18:33:37.656995 master-0 kubenswrapper[29097]: I0312 18:33:37.656948 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 12 18:33:37.658599 master-0 kubenswrapper[29097]: I0312 18:33:37.658573 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 12 18:33:37.668944 master-0 kubenswrapper[29097]: I0312 18:33:37.668873 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 12 18:33:37.671614 master-0 kubenswrapper[29097]: I0312 18:33:37.671569 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 12 18:33:37.674350 master-0 kubenswrapper[29097]: I0312 18:33:37.674292 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 12 18:33:37.795271 master-0 kubenswrapper[29097]: I0312 18:33:37.795206 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.795271 master-0 kubenswrapper[29097]: I0312 18:33:37.795264 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.795926 master-0 kubenswrapper[29097]: I0312 18:33:37.795872 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.795990 master-0 kubenswrapper[29097]: I0312 18:33:37.795934 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.796035 master-0 kubenswrapper[29097]: I0312 18:33:37.796004 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.796079 master-0 kubenswrapper[29097]: I0312 18:33:37.796055 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-config-out\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 
12 18:33:37.796180 master-0 kubenswrapper[29097]: I0312 18:33:37.796126 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.796586 master-0 kubenswrapper[29097]: I0312 18:33:37.796548 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.796800 master-0 kubenswrapper[29097]: I0312 18:33:37.796751 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.796850 master-0 kubenswrapper[29097]: I0312 18:33:37.796805 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.796893 master-0 kubenswrapper[29097]: I0312 18:33:37.796846 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.796893 master-0 kubenswrapper[29097]: I0312 18:33:37.796878 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-config\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.796975 master-0 kubenswrapper[29097]: I0312 18:33:37.796904 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.796975 master-0 kubenswrapper[29097]: I0312 18:33:37.796930 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.796975 master-0 kubenswrapper[29097]: I0312 18:33:37.796951 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-web-config\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.797088 master-0 kubenswrapper[29097]: I0312 18:33:37.796986 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-pwvl2\" (UniqueName: \"kubernetes.io/projected/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-kube-api-access-pwvl2\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.798876 master-0 kubenswrapper[29097]: I0312 18:33:37.798841 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.798980 master-0 kubenswrapper[29097]: I0312 18:33:37.798947 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.901469 master-0 kubenswrapper[29097]: I0312 18:33:37.900894 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.901469 master-0 kubenswrapper[29097]: I0312 18:33:37.900976 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" 
Mar 12 18:33:37.901469 master-0 kubenswrapper[29097]: I0312 18:33:37.901042 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.901469 master-0 kubenswrapper[29097]: I0312 18:33:37.901081 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.901469 master-0 kubenswrapper[29097]: I0312 18:33:37.901121 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.901469 master-0 kubenswrapper[29097]: I0312 18:33:37.901149 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-config-out\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 18:33:37.901469 master-0 kubenswrapper[29097]: I0312 18:33:37.901219 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " 
pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.901469 master-0 kubenswrapper[29097]: I0312 18:33:37.901253 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.901469 master-0 kubenswrapper[29097]: I0312 18:33:37.901313 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.901469 master-0 kubenswrapper[29097]: I0312 18:33:37.901350 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.901469 master-0 kubenswrapper[29097]: I0312 18:33:37.901398 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.901469 master-0 kubenswrapper[29097]: I0312 18:33:37.901434 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-config\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.901469 master-0 kubenswrapper[29097]: I0312 18:33:37.901470 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.904077 master-0 kubenswrapper[29097]: I0312 18:33:37.901534 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.904077 master-0 kubenswrapper[29097]: I0312 18:33:37.901570 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-web-config\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.904077 master-0 kubenswrapper[29097]: I0312 18:33:37.901615 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwvl2\" (UniqueName: \"kubernetes.io/projected/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-kube-api-access-pwvl2\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.904077 master-0 kubenswrapper[29097]: I0312 18:33:37.901673 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.904077 master-0 kubenswrapper[29097]: I0312 18:33:37.901730 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.904077 master-0 kubenswrapper[29097]: I0312 18:33:37.902980 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.905933 master-0 kubenswrapper[29097]: I0312 18:33:37.904369 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.905933 master-0 kubenswrapper[29097]: I0312 18:33:37.905376 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.906470 master-0 kubenswrapper[29097]: I0312 18:33:37.906237 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.909736 master-0 kubenswrapper[29097]: I0312 18:33:37.907788 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.909736 master-0 kubenswrapper[29097]: I0312 18:33:37.908569 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-config-out\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.910158 master-0 kubenswrapper[29097]: I0312 18:33:37.909913 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.910158 master-0 kubenswrapper[29097]: I0312 18:33:37.910297 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.910158 master-0 kubenswrapper[29097]: I0312 18:33:37.910417 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.910158 master-0 kubenswrapper[29097]: I0312 18:33:37.910480 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-web-config\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.910892 master-0 kubenswrapper[29097]: I0312 18:33:37.910838 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.911182 master-0 kubenswrapper[29097]: I0312 18:33:37.911146 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.912446 master-0 kubenswrapper[29097]: I0312 18:33:37.912410 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.914919 master-0 kubenswrapper[29097]: I0312 18:33:37.914017 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.918641 master-0 kubenswrapper[29097]: I0312 18:33:37.918601 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.923625 master-0 kubenswrapper[29097]: I0312 18:33:37.923565 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-config\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.927043 master-0 kubenswrapper[29097]: I0312 18:33:37.924841 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.941694 master-0 kubenswrapper[29097]: I0312 18:33:37.938083 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwvl2\" (UniqueName: \"kubernetes.io/projected/4b7a4ad7-3732-4c75-a6f6-57e83d6db837-kube-api-access-pwvl2\") pod \"prometheus-k8s-0\" (UID: \"4b7a4ad7-3732-4c75-a6f6-57e83d6db837\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:37.996377 master-0 kubenswrapper[29097]: I0312 18:33:37.992693 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76849948fc-cxrj8"]
Mar 12 18:33:38.015031 master-0 kubenswrapper[29097]: I0312 18:33:38.010367 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:38.039985 master-0 kubenswrapper[29097]: I0312 18:33:38.039933 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6b98bc4d-xfxc9"]
Mar 12 18:33:38.040879 master-0 kubenswrapper[29097]: I0312 18:33:38.040854 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.050461 master-0 kubenswrapper[29097]: I0312 18:33:38.050405 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b98bc4d-xfxc9"]
Mar 12 18:33:38.110409 master-0 kubenswrapper[29097]: I0312 18:33:38.110337 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-console-oauth-config\") pod \"console-6b98bc4d-xfxc9\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.110644 master-0 kubenswrapper[29097]: I0312 18:33:38.110604 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-trusted-ca-bundle\") pod \"console-6b98bc4d-xfxc9\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.110753 master-0 kubenswrapper[29097]: I0312 18:33:38.110706 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g94cv\" (UniqueName: \"kubernetes.io/projected/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-kube-api-access-g94cv\") pod \"console-6b98bc4d-xfxc9\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.110901 master-0 kubenswrapper[29097]: I0312 18:33:38.110880 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-console-config\") pod \"console-6b98bc4d-xfxc9\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.111008 master-0 kubenswrapper[29097]: I0312 18:33:38.110974 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-service-ca\") pod \"console-6b98bc4d-xfxc9\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.111246 master-0 kubenswrapper[29097]: I0312 18:33:38.111124 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-console-serving-cert\") pod \"console-6b98bc4d-xfxc9\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.111246 master-0 kubenswrapper[29097]: I0312 18:33:38.111186 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-oauth-serving-cert\") pod \"console-6b98bc4d-xfxc9\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.212858 master-0 kubenswrapper[29097]: I0312 18:33:38.212735 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-console-config\") pod \"console-6b98bc4d-xfxc9\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.212858 master-0 kubenswrapper[29097]: I0312 18:33:38.212802 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-service-ca\") pod \"console-6b98bc4d-xfxc9\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.213119 master-0 kubenswrapper[29097]: I0312 18:33:38.213045 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-console-serving-cert\") pod \"console-6b98bc4d-xfxc9\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.213234 master-0 kubenswrapper[29097]: I0312 18:33:38.213209 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-oauth-serving-cert\") pod \"console-6b98bc4d-xfxc9\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.213303 master-0 kubenswrapper[29097]: I0312 18:33:38.213282 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-console-oauth-config\") pod \"console-6b98bc4d-xfxc9\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.213432 master-0 kubenswrapper[29097]: I0312 18:33:38.213407 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-trusted-ca-bundle\") pod \"console-6b98bc4d-xfxc9\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.213480 master-0 kubenswrapper[29097]: I0312 18:33:38.213455 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g94cv\" (UniqueName: \"kubernetes.io/projected/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-kube-api-access-g94cv\") pod \"console-6b98bc4d-xfxc9\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.213755 master-0 kubenswrapper[29097]: I0312 18:33:38.213720 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-console-config\") pod \"console-6b98bc4d-xfxc9\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.214236 master-0 kubenswrapper[29097]: I0312 18:33:38.214194 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-oauth-serving-cert\") pod \"console-6b98bc4d-xfxc9\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.214546 master-0 kubenswrapper[29097]: I0312 18:33:38.214494 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-service-ca\") pod \"console-6b98bc4d-xfxc9\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.215002 master-0 kubenswrapper[29097]: I0312 18:33:38.214709 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-trusted-ca-bundle\") pod \"console-6b98bc4d-xfxc9\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.216895 master-0 kubenswrapper[29097]: I0312 18:33:38.216856 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-console-oauth-config\") pod \"console-6b98bc4d-xfxc9\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.218920 master-0 kubenswrapper[29097]: I0312 18:33:38.218838 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-console-serving-cert\") pod \"console-6b98bc4d-xfxc9\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.230092 master-0 kubenswrapper[29097]: I0312 18:33:38.230051 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g94cv\" (UniqueName: \"kubernetes.io/projected/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-kube-api-access-g94cv\") pod \"console-6b98bc4d-xfxc9\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.249816 master-0 kubenswrapper[29097]: I0312 18:33:38.249742 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"221d9bf3-99e3-4397-994b-0bef619f6177","Type":"ContainerStarted","Data":"067dee086702b4dc17141653c6bc75cf4f66fadf29428e8d75bacfa1e9d1ba5c"}
Mar 12 18:33:38.249816 master-0 kubenswrapper[29097]: I0312 18:33:38.249806 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"221d9bf3-99e3-4397-994b-0bef619f6177","Type":"ContainerStarted","Data":"85178453bc471d234ba7d61708f560636ce0599c84794e0be9c30806071f9762"}
Mar 12 18:33:38.363157 master-0 kubenswrapper[29097]: I0312 18:33:38.363056 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:38.697709 master-0 kubenswrapper[29097]: I0312 18:33:38.697631 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 12 18:33:38.704207 master-0 kubenswrapper[29097]: W0312 18:33:38.704148 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b7a4ad7_3732_4c75_a6f6_57e83d6db837.slice/crio-b0bc751f929f48cb430fcfe90086ed785a9bd8a0780f4dc2a24c67f4c9317784 WatchSource:0}: Error finding container b0bc751f929f48cb430fcfe90086ed785a9bd8a0780f4dc2a24c67f4c9317784: Status 404 returned error can't find the container with id b0bc751f929f48cb430fcfe90086ed785a9bd8a0780f4dc2a24c67f4c9317784
Mar 12 18:33:38.819290 master-0 kubenswrapper[29097]: I0312 18:33:38.819244 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b98bc4d-xfxc9"]
Mar 12 18:33:38.820792 master-0 kubenswrapper[29097]: W0312 18:33:38.820741 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5dd5dc7_7bc4_4154_8aac_876d2a0ae565.slice/crio-8e244a931ddc4ee85f82052bb7168c2225b1fc274389de87fe03a130410d8991 WatchSource:0}: Error finding container 8e244a931ddc4ee85f82052bb7168c2225b1fc274389de87fe03a130410d8991: Status 404 returned error can't find the container with id 8e244a931ddc4ee85f82052bb7168c2225b1fc274389de87fe03a130410d8991
Mar 12 18:33:39.268147 master-0 kubenswrapper[29097]: I0312 18:33:39.268072 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2" event={"ID":"988abf9f-43bc-4440-865e-b60d248eeaaa","Type":"ContainerStarted","Data":"aaf0ad01511ae20798ab474216e328b3645af82a2f39642d66a4e18e064f0ffb"}
Mar 12 18:33:39.268147 master-0 kubenswrapper[29097]: I0312 18:33:39.268144 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2" event={"ID":"988abf9f-43bc-4440-865e-b60d248eeaaa","Type":"ContainerStarted","Data":"3b6f1d58875a8a3600e1d689a9cc9785f0ab45d75bb04af0d84c0fb30dda3b46"}
Mar 12 18:33:39.269221 master-0 kubenswrapper[29097]: I0312 18:33:39.268164 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2" event={"ID":"988abf9f-43bc-4440-865e-b60d248eeaaa","Type":"ContainerStarted","Data":"f97469d10cd3d8415f3e5ea6874669627c7a93ab25a5b10abb9696043368a59c"}
Mar 12 18:33:39.269221 master-0 kubenswrapper[29097]: I0312 18:33:39.268281 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:39.272355 master-0 kubenswrapper[29097]: I0312 18:33:39.272296 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"221d9bf3-99e3-4397-994b-0bef619f6177","Type":"ContainerStarted","Data":"fd27896d03f92f6a6389f5196488ed32e0a684cdb89df4424b415f2279c49508"}
Mar 12 18:33:39.274817 master-0 kubenswrapper[29097]: I0312 18:33:39.274752 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b98bc4d-xfxc9" event={"ID":"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565","Type":"ContainerStarted","Data":"8b1c5296f22b65b4ce024e0502c4e4b80cc4d40a1ff3097484acb7dcf8d96015"}
Mar 12 18:33:39.274817 master-0 kubenswrapper[29097]: I0312 18:33:39.274807 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b98bc4d-xfxc9" event={"ID":"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565","Type":"ContainerStarted","Data":"8e244a931ddc4ee85f82052bb7168c2225b1fc274389de87fe03a130410d8991"}
Mar 12 18:33:39.276280 master-0 kubenswrapper[29097]: I0312 18:33:39.276217 29097 generic.go:334] "Generic (PLEG): container finished" podID="4b7a4ad7-3732-4c75-a6f6-57e83d6db837" containerID="c522b1e0f6edc44cdbe8d85f5b10ddbc2e83ad99a8ceda9912682743010c4bc4" exitCode=0
Mar 12 18:33:39.276280 master-0 kubenswrapper[29097]: I0312 18:33:39.276265 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4b7a4ad7-3732-4c75-a6f6-57e83d6db837","Type":"ContainerDied","Data":"c522b1e0f6edc44cdbe8d85f5b10ddbc2e83ad99a8ceda9912682743010c4bc4"}
Mar 12 18:33:39.276483 master-0 kubenswrapper[29097]: I0312 18:33:39.276291 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4b7a4ad7-3732-4c75-a6f6-57e83d6db837","Type":"ContainerStarted","Data":"b0bc751f929f48cb430fcfe90086ed785a9bd8a0780f4dc2a24c67f4c9317784"}
Mar 12 18:33:39.291418 master-0 kubenswrapper[29097]: I0312 18:33:39.291036 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2" podStartSLOduration=2.15641585 podStartE2EDuration="7.291013909s" podCreationTimestamp="2026-03-12 18:33:32 +0000 UTC" firstStartedPulling="2026-03-12 18:33:33.130793995 +0000 UTC m=+252.684774092" lastFinishedPulling="2026-03-12 18:33:38.265392064 +0000 UTC m=+257.819372151" observedRunningTime="2026-03-12 18:33:39.289663775 +0000 UTC m=+258.843643892" watchObservedRunningTime="2026-03-12 18:33:39.291013909 +0000 UTC m=+258.844994006"
Mar 12 18:33:39.321502 master-0 kubenswrapper[29097]: I0312 18:33:39.321379 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.741567735 podStartE2EDuration="8.321360525s" podCreationTimestamp="2026-03-12 18:33:31 +0000 UTC" firstStartedPulling="2026-03-12 18:33:33.688547787 +0000 UTC m=+253.242527884" lastFinishedPulling="2026-03-12 18:33:38.268340577 +0000 UTC m=+257.822320674" observedRunningTime="2026-03-12 18:33:39.318169375 +0000 UTC m=+258.872149492" watchObservedRunningTime="2026-03-12 18:33:39.321360525 +0000 UTC m=+258.875340622"
Mar 12 18:33:39.375840 master-0 kubenswrapper[29097]: I0312 18:33:39.375759 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6b98bc4d-xfxc9" podStartSLOduration=2.375732989 podStartE2EDuration="2.375732989s" podCreationTimestamp="2026-03-12 18:33:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:33:39.361063603 +0000 UTC m=+258.915043790" watchObservedRunningTime="2026-03-12 18:33:39.375732989 +0000 UTC m=+258.929713086"
Mar 12 18:33:42.686666 master-0 kubenswrapper[29097]: I0312 18:33:42.686599 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-99547cb8-z9gq2"
Mar 12 18:33:43.163896 master-0 kubenswrapper[29097]: I0312 18:33:43.163831 29097 patch_prober.go:28] interesting pod/console-68799679d4-tcwkt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body=
Mar 12 18:33:43.164106 master-0 kubenswrapper[29097]: I0312 18:33:43.163967 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68799679d4-tcwkt" podUID="2f8180d8-5283-409a-b36e-4786c8483171" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused"
Mar 12 18:33:43.310851 master-0 kubenswrapper[29097]: I0312 18:33:43.310794 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4b7a4ad7-3732-4c75-a6f6-57e83d6db837","Type":"ContainerStarted","Data":"c88f76a70cf27c3b41e18e042478d67ab5c138fcbf9818cbbb39aa8a96ca6090"}
Mar 12 18:33:43.311031 master-0 kubenswrapper[29097]: I0312 18:33:43.310871 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4b7a4ad7-3732-4c75-a6f6-57e83d6db837","Type":"ContainerStarted","Data":"4cacdfbd8360d47bb00ee1ecc6137e06b32bf4595712aea1077bdfe2888ee223"}
Mar 12 18:33:43.311031 master-0 kubenswrapper[29097]: I0312 18:33:43.310887 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4b7a4ad7-3732-4c75-a6f6-57e83d6db837","Type":"ContainerStarted","Data":"475594b714328217b07af0309b67872de006ee551015efe926bcf40cd3adcc4c"}
Mar 12 18:33:43.311031 master-0 kubenswrapper[29097]: I0312 18:33:43.310899 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4b7a4ad7-3732-4c75-a6f6-57e83d6db837","Type":"ContainerStarted","Data":"5a23e14ec85f426caeda659673425e4d4bacfcd3e094299d876db870a55e9fed"}
Mar 12 18:33:43.311031 master-0 kubenswrapper[29097]: I0312 18:33:43.310910 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4b7a4ad7-3732-4c75-a6f6-57e83d6db837","Type":"ContainerStarted","Data":"c3f5699e08fd89e9a09757cbdcd45392f1b1c855b94874d95db5edabea4c11da"}
Mar 12 18:33:44.325378 master-0 kubenswrapper[29097]: I0312 18:33:44.325288 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4b7a4ad7-3732-4c75-a6f6-57e83d6db837","Type":"ContainerStarted","Data":"226f865a2534f7896119aaeb39254d18cc37125c9224424554b34251494678c8"}
Mar 12 18:33:44.402632 master-0 kubenswrapper[29097]: I0312 18:33:44.402467 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.285557806 podStartE2EDuration="7.4024332s" podCreationTimestamp="2026-03-12 18:33:37 +0000 UTC" firstStartedPulling="2026-03-12 18:33:39.277992684 +0000 UTC m=+258.831972781" lastFinishedPulling="2026-03-12 18:33:42.394868078 +0000 UTC m=+261.948848175" observedRunningTime="2026-03-12 18:33:44.396472212 +0000 UTC m=+263.950452349" watchObservedRunningTime="2026-03-12 18:33:44.4024332 +0000 UTC m=+263.956413337"
Mar 12 18:33:48.011300 master-0 kubenswrapper[29097]: I0312 18:33:48.011151 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:33:48.363753 master-0 kubenswrapper[29097]: I0312 18:33:48.363667 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:48.363753 master-0 kubenswrapper[29097]: I0312 18:33:48.363737 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6b98bc4d-xfxc9"
Mar 12 18:33:48.365408 master-0 kubenswrapper[29097]: I0312 18:33:48.365339 29097 patch_prober.go:28] interesting pod/console-6b98bc4d-xfxc9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body=
Mar 12 18:33:48.365642 master-0 kubenswrapper[29097]: I0312 18:33:48.365403 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b98bc4d-xfxc9" podUID="f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused"
Mar 12 18:33:53.163447 master-0 kubenswrapper[29097]: I0312 18:33:53.163361 29097 patch_prober.go:28] interesting pod/console-68799679d4-tcwkt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body=
Mar 12 18:33:53.164239 master-0 kubenswrapper[29097]: I0312 18:33:53.163445 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68799679d4-tcwkt" podUID="2f8180d8-5283-409a-b36e-4786c8483171" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused"
Mar 12 18:33:54.349241 master-0 kubenswrapper[29097]: I0312 18:33:54.349127 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:54.349241 master-0 kubenswrapper[29097]: I0312 18:33:54.349250 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw"
Mar 12 18:33:58.364917 master-0 kubenswrapper[29097]: I0312 18:33:58.364825 29097 patch_prober.go:28] interesting pod/console-6b98bc4d-xfxc9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body=
Mar 12 18:33:58.365637 master-0 kubenswrapper[29097]: I0312 18:33:58.364937 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b98bc4d-xfxc9" podUID="f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused"
Mar 12 18:34:03.039963 master-0 kubenswrapper[29097]: I0312 18:34:03.039840 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-76849948fc-cxrj8" podUID="ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" containerName="console" containerID="cri-o://0f35f1e0bdd744c9b268ffd9fa5b41e535ae26cb0595d5c5018d98bc80b9d9d8" gracePeriod=15
Mar 12 18:34:03.163557 master-0 kubenswrapper[29097]: I0312 18:34:03.163496 29097 patch_prober.go:28] interesting pod/console-68799679d4-tcwkt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body=
Mar 12 18:34:03.163765 master-0 kubenswrapper[29097]: I0312 18:34:03.163574 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68799679d4-tcwkt" podUID="2f8180d8-5283-409a-b36e-4786c8483171" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused"
Mar 12 18:34:03.493690 master-0 kubenswrapper[29097]: I0312 18:34:03.493660 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76849948fc-cxrj8_ee109d34-abfd-4bb8-93e6-9e9c28ccec0e/console/2.log"
Mar 12 18:34:03.494499 master-0 kubenswrapper[29097]: I0312 18:34:03.494474 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76849948fc-cxrj8_ee109d34-abfd-4bb8-93e6-9e9c28ccec0e/console/1.log"
Mar 12 18:34:03.494642 master-0 kubenswrapper[29097]: I0312 18:34:03.494562 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76849948fc-cxrj8"
Mar 12 18:34:03.527434 master-0 kubenswrapper[29097]: I0312 18:34:03.527370 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76849948fc-cxrj8_ee109d34-abfd-4bb8-93e6-9e9c28ccec0e/console/2.log"
Mar 12 18:34:03.528418 master-0 kubenswrapper[29097]: I0312 18:34:03.528397 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76849948fc-cxrj8_ee109d34-abfd-4bb8-93e6-9e9c28ccec0e/console/1.log"
Mar 12 18:34:03.528549 master-0 kubenswrapper[29097]: I0312 18:34:03.528440 29097 generic.go:334] "Generic (PLEG): container finished" podID="ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" containerID="0f35f1e0bdd744c9b268ffd9fa5b41e535ae26cb0595d5c5018d98bc80b9d9d8" exitCode=2
Mar 12 18:34:03.528549 master-0 kubenswrapper[29097]: I0312 18:34:03.528475 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76849948fc-cxrj8" event={"ID":"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e","Type":"ContainerDied","Data":"0f35f1e0bdd744c9b268ffd9fa5b41e535ae26cb0595d5c5018d98bc80b9d9d8"}
Mar 12 18:34:03.528549 master-0 kubenswrapper[29097]: I0312 18:34:03.528508 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76849948fc-cxrj8" event={"ID":"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e","Type":"ContainerDied","Data":"dc69780e36aafc28db96b0b9cc22b36808c549323f580ffad1662055dfa68b47"}
Mar 12 18:34:03.528832 master-0 kubenswrapper[29097]: I0312 18:34:03.528550 29097 scope.go:117] "RemoveContainer" containerID="0f35f1e0bdd744c9b268ffd9fa5b41e535ae26cb0595d5c5018d98bc80b9d9d8"
Mar 12 18:34:03.528832 master-0 kubenswrapper[29097]: I0312 18:34:03.528701 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76849948fc-cxrj8"
Mar 12 18:34:03.546987 master-0 kubenswrapper[29097]: I0312 18:34:03.546951 29097 scope.go:117] "RemoveContainer" containerID="94d8f6ff9e10caa5d7657d4f9070aec6d8cf9e794f65572dc75b1ee126fd32c9"
Mar 12 18:34:03.562586 master-0 kubenswrapper[29097]: I0312 18:34:03.562557 29097 scope.go:117] "RemoveContainer" containerID="0f35f1e0bdd744c9b268ffd9fa5b41e535ae26cb0595d5c5018d98bc80b9d9d8"
Mar 12 18:34:03.563379 master-0 kubenswrapper[29097]: E0312 18:34:03.563337 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f35f1e0bdd744c9b268ffd9fa5b41e535ae26cb0595d5c5018d98bc80b9d9d8\": container with ID starting with 0f35f1e0bdd744c9b268ffd9fa5b41e535ae26cb0595d5c5018d98bc80b9d9d8 not found: ID does not exist" containerID="0f35f1e0bdd744c9b268ffd9fa5b41e535ae26cb0595d5c5018d98bc80b9d9d8"
Mar 12 18:34:03.563451 master-0 kubenswrapper[29097]: I0312 18:34:03.563396 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f35f1e0bdd744c9b268ffd9fa5b41e535ae26cb0595d5c5018d98bc80b9d9d8"} err="failed to get container status \"0f35f1e0bdd744c9b268ffd9fa5b41e535ae26cb0595d5c5018d98bc80b9d9d8\": rpc error: code = NotFound desc = could not find container \"0f35f1e0bdd744c9b268ffd9fa5b41e535ae26cb0595d5c5018d98bc80b9d9d8\": container with ID starting with 0f35f1e0bdd744c9b268ffd9fa5b41e535ae26cb0595d5c5018d98bc80b9d9d8 not found: ID does not exist"
Mar 12 18:34:03.563451 master-0 kubenswrapper[29097]: I0312 18:34:03.563429 29097 scope.go:117] "RemoveContainer" containerID="94d8f6ff9e10caa5d7657d4f9070aec6d8cf9e794f65572dc75b1ee126fd32c9"
Mar 12 18:34:03.563947 master-0 kubenswrapper[29097]: E0312 18:34:03.563915 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container
\"94d8f6ff9e10caa5d7657d4f9070aec6d8cf9e794f65572dc75b1ee126fd32c9\": container with ID starting with 94d8f6ff9e10caa5d7657d4f9070aec6d8cf9e794f65572dc75b1ee126fd32c9 not found: ID does not exist" containerID="94d8f6ff9e10caa5d7657d4f9070aec6d8cf9e794f65572dc75b1ee126fd32c9" Mar 12 18:34:03.564056 master-0 kubenswrapper[29097]: I0312 18:34:03.564026 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94d8f6ff9e10caa5d7657d4f9070aec6d8cf9e794f65572dc75b1ee126fd32c9"} err="failed to get container status \"94d8f6ff9e10caa5d7657d4f9070aec6d8cf9e794f65572dc75b1ee126fd32c9\": rpc error: code = NotFound desc = could not find container \"94d8f6ff9e10caa5d7657d4f9070aec6d8cf9e794f65572dc75b1ee126fd32c9\": container with ID starting with 94d8f6ff9e10caa5d7657d4f9070aec6d8cf9e794f65572dc75b1ee126fd32c9 not found: ID does not exist" Mar 12 18:34:03.601423 master-0 kubenswrapper[29097]: I0312 18:34:03.601393 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-console-oauth-config\") pod \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " Mar 12 18:34:03.601586 master-0 kubenswrapper[29097]: I0312 18:34:03.601449 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-service-ca\") pod \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " Mar 12 18:34:03.601586 master-0 kubenswrapper[29097]: I0312 18:34:03.601485 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-trusted-ca-bundle\") pod \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " Mar 
12 18:34:03.601586 master-0 kubenswrapper[29097]: I0312 18:34:03.601538 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-console-serving-cert\") pod \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " Mar 12 18:34:03.601863 master-0 kubenswrapper[29097]: I0312 18:34:03.601808 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgdtp\" (UniqueName: \"kubernetes.io/projected/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-kube-api-access-jgdtp\") pod \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " Mar 12 18:34:03.601980 master-0 kubenswrapper[29097]: I0312 18:34:03.601954 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-console-config\") pod \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " Mar 12 18:34:03.602100 master-0 kubenswrapper[29097]: I0312 18:34:03.602076 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-oauth-serving-cert\") pod \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\" (UID: \"ee109d34-abfd-4bb8-93e6-9e9c28ccec0e\") " Mar 12 18:34:03.602241 master-0 kubenswrapper[29097]: I0312 18:34:03.602195 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-service-ca" (OuterVolumeSpecName: "service-ca") pod "ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" (UID: "ee109d34-abfd-4bb8-93e6-9e9c28ccec0e"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:34:03.602541 master-0 kubenswrapper[29097]: I0312 18:34:03.602497 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-console-config" (OuterVolumeSpecName: "console-config") pod "ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" (UID: "ee109d34-abfd-4bb8-93e6-9e9c28ccec0e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:34:03.602605 master-0 kubenswrapper[29097]: I0312 18:34:03.602576 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" (UID: "ee109d34-abfd-4bb8-93e6-9e9c28ccec0e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:34:03.602653 master-0 kubenswrapper[29097]: I0312 18:34:03.602610 29097 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 18:34:03.602653 master-0 kubenswrapper[29097]: I0312 18:34:03.602632 29097 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-console-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:34:03.602882 master-0 kubenswrapper[29097]: I0312 18:34:03.602842 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" (UID: "ee109d34-abfd-4bb8-93e6-9e9c28ccec0e"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:34:03.604496 master-0 kubenswrapper[29097]: I0312 18:34:03.604456 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" (UID: "ee109d34-abfd-4bb8-93e6-9e9c28ccec0e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:34:03.605009 master-0 kubenswrapper[29097]: I0312 18:34:03.604954 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" (UID: "ee109d34-abfd-4bb8-93e6-9e9c28ccec0e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:34:03.607183 master-0 kubenswrapper[29097]: I0312 18:34:03.607162 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-kube-api-access-jgdtp" (OuterVolumeSpecName: "kube-api-access-jgdtp") pod "ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" (UID: "ee109d34-abfd-4bb8-93e6-9e9c28ccec0e"). InnerVolumeSpecName "kube-api-access-jgdtp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:34:03.704054 master-0 kubenswrapper[29097]: I0312 18:34:03.703896 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgdtp\" (UniqueName: \"kubernetes.io/projected/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-kube-api-access-jgdtp\") on node \"master-0\" DevicePath \"\"" Mar 12 18:34:03.704054 master-0 kubenswrapper[29097]: I0312 18:34:03.703924 29097 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 18:34:03.704054 master-0 kubenswrapper[29097]: I0312 18:34:03.703933 29097 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:34:03.704054 master-0 kubenswrapper[29097]: I0312 18:34:03.703942 29097 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:34:03.704054 master-0 kubenswrapper[29097]: I0312 18:34:03.703950 29097 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 18:34:03.904048 master-0 kubenswrapper[29097]: I0312 18:34:03.903951 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76849948fc-cxrj8"] Mar 12 18:34:03.913746 master-0 kubenswrapper[29097]: I0312 18:34:03.913696 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-76849948fc-cxrj8"] Mar 12 18:34:04.737468 master-0 kubenswrapper[29097]: I0312 18:34:04.737402 29097 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" path="/var/lib/kubelet/pods/ee109d34-abfd-4bb8-93e6-9e9c28ccec0e/volumes" Mar 12 18:34:08.365034 master-0 kubenswrapper[29097]: I0312 18:34:08.364944 29097 patch_prober.go:28] interesting pod/console-6b98bc4d-xfxc9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 12 18:34:08.365982 master-0 kubenswrapper[29097]: I0312 18:34:08.365202 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b98bc4d-xfxc9" podUID="f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 12 18:34:13.163923 master-0 kubenswrapper[29097]: I0312 18:34:13.163831 29097 patch_prober.go:28] interesting pod/console-68799679d4-tcwkt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body= Mar 12 18:34:13.165000 master-0 kubenswrapper[29097]: I0312 18:34:13.163919 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68799679d4-tcwkt" podUID="2f8180d8-5283-409a-b36e-4786c8483171" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" Mar 12 18:34:14.360196 master-0 kubenswrapper[29097]: I0312 18:34:14.360078 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw" Mar 12 18:34:14.368381 master-0 kubenswrapper[29097]: I0312 18:34:14.368283 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-monitoring/metrics-server-799595bb6c-b9xsw" Mar 12 18:34:18.364756 master-0 kubenswrapper[29097]: I0312 18:34:18.364618 29097 patch_prober.go:28] interesting pod/console-6b98bc4d-xfxc9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 12 18:34:18.364756 master-0 kubenswrapper[29097]: I0312 18:34:18.364708 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b98bc4d-xfxc9" podUID="f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 12 18:34:20.713081 master-0 kubenswrapper[29097]: I0312 18:34:20.712948 29097 kubelet.go:1505] "Image garbage collection succeeded" Mar 12 18:34:20.954856 master-0 kubenswrapper[29097]: I0312 18:34:20.954779 29097 scope.go:117] "RemoveContainer" containerID="feb7a0602e16521ca8f037d98e053563e5dfd7b3fed109ded127b4e56a4c158c" Mar 12 18:34:20.975476 master-0 kubenswrapper[29097]: I0312 18:34:20.975428 29097 scope.go:117] "RemoveContainer" containerID="6013ae8778b6f3db082ecdee07bf998643391f13699e5ddf7a85c9b9ddf833c3" Mar 12 18:34:21.201886 master-0 kubenswrapper[29097]: E0312 18:34:21.201796 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547 Mar 12 18:34:22.131886 master-0 kubenswrapper[29097]: I0312 18:34:22.131823 29097 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication/oauth-openshift-d798d76b8-2crc8"] Mar 12 18:34:22.132604 master-0 kubenswrapper[29097]: E0312 18:34:22.132095 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" containerName="console" Mar 12 18:34:22.132604 master-0 kubenswrapper[29097]: I0312 18:34:22.132108 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" containerName="console" Mar 12 18:34:22.132604 master-0 kubenswrapper[29097]: E0312 18:34:22.132133 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" containerName="console" Mar 12 18:34:22.132604 master-0 kubenswrapper[29097]: I0312 18:34:22.132139 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" containerName="console" Mar 12 18:34:22.132604 master-0 kubenswrapper[29097]: I0312 18:34:22.132299 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" containerName="console" Mar 12 18:34:22.132604 master-0 kubenswrapper[29097]: I0312 18:34:22.132323 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" containerName="console" Mar 12 18:34:22.132880 master-0 kubenswrapper[29097]: I0312 18:34:22.132804 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.141434 master-0 kubenswrapper[29097]: I0312 18:34:22.141369 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 12 18:34:22.141851 master-0 kubenswrapper[29097]: I0312 18:34:22.141821 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 12 18:34:22.141939 master-0 kubenswrapper[29097]: I0312 18:34:22.141859 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-8jt4h" Mar 12 18:34:22.142003 master-0 kubenswrapper[29097]: I0312 18:34:22.141960 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 12 18:34:22.142003 master-0 kubenswrapper[29097]: I0312 18:34:22.141975 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 12 18:34:22.142122 master-0 kubenswrapper[29097]: I0312 18:34:22.142019 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 12 18:34:22.142198 master-0 kubenswrapper[29097]: I0312 18:34:22.142135 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 12 18:34:22.142198 master-0 kubenswrapper[29097]: I0312 18:34:22.142186 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 12 18:34:22.142312 master-0 kubenswrapper[29097]: I0312 18:34:22.142254 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 12 18:34:22.142312 master-0 kubenswrapper[29097]: I0312 18:34:22.142286 29097 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 12 18:34:22.142417 master-0 kubenswrapper[29097]: I0312 18:34:22.142324 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 12 18:34:22.142417 master-0 kubenswrapper[29097]: I0312 18:34:22.142290 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 12 18:34:22.157323 master-0 kubenswrapper[29097]: I0312 18:34:22.157263 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 12 18:34:22.167995 master-0 kubenswrapper[29097]: I0312 18:34:22.161421 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 12 18:34:22.170552 master-0 kubenswrapper[29097]: I0312 18:34:22.170482 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-d798d76b8-2crc8"] Mar 12 18:34:22.245417 master-0 kubenswrapper[29097]: I0312 18:34:22.245368 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-user-template-error\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.246321 master-0 kubenswrapper[29097]: I0312 18:34:22.246298 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: 
\"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.246417 master-0 kubenswrapper[29097]: I0312 18:34:22.246404 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-user-template-login\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.246925 master-0 kubenswrapper[29097]: I0312 18:34:22.246733 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-audit-policies\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.246925 master-0 kubenswrapper[29097]: I0312 18:34:22.246816 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-service-ca\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.246925 master-0 kubenswrapper[29097]: I0312 18:34:22.246863 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbtzl\" (UniqueName: \"kubernetes.io/projected/7bcb667e-8dae-4765-801c-20be827ea2be-kube-api-access-jbtzl\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.246925 master-0 
kubenswrapper[29097]: I0312 18:34:22.246899 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.247406 master-0 kubenswrapper[29097]: I0312 18:34:22.247009 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-session\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.247406 master-0 kubenswrapper[29097]: I0312 18:34:22.247059 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.247406 master-0 kubenswrapper[29097]: I0312 18:34:22.247117 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7bcb667e-8dae-4765-801c-20be827ea2be-audit-dir\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.247406 master-0 kubenswrapper[29097]: I0312 18:34:22.247192 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.247406 master-0 kubenswrapper[29097]: I0312 18:34:22.247217 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-router-certs\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.247406 master-0 kubenswrapper[29097]: I0312 18:34:22.247239 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.349100 master-0 kubenswrapper[29097]: I0312 18:34:22.349057 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-user-template-error\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.349378 master-0 kubenswrapper[29097]: I0312 18:34:22.349363 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.349466 master-0 kubenswrapper[29097]: I0312 18:34:22.349452 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-user-template-login\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.349569 master-0 kubenswrapper[29097]: I0312 18:34:22.349555 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-audit-policies\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.349664 master-0 kubenswrapper[29097]: I0312 18:34:22.349645 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-service-ca\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.349786 master-0 kubenswrapper[29097]: I0312 18:34:22.349766 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbtzl\" (UniqueName: \"kubernetes.io/projected/7bcb667e-8dae-4765-801c-20be827ea2be-kube-api-access-jbtzl\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " 
pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.349903 master-0 kubenswrapper[29097]: I0312 18:34:22.349886 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.350030 master-0 kubenswrapper[29097]: I0312 18:34:22.350011 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-session\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.350153 master-0 kubenswrapper[29097]: E0312 18:34:22.350113 29097 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found Mar 12 18:34:22.350212 master-0 kubenswrapper[29097]: E0312 18:34:22.350200 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-session podName:7bcb667e-8dae-4765-801c-20be827ea2be nodeName:}" failed. No retries permitted until 2026-03-12 18:34:22.850183019 +0000 UTC m=+302.404163116 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-session") pod "oauth-openshift-d798d76b8-2crc8" (UID: "7bcb667e-8dae-4765-801c-20be827ea2be") : secret "v4-0-config-system-session" not found Mar 12 18:34:22.350271 master-0 kubenswrapper[29097]: E0312 18:34:22.350256 29097 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 12 18:34:22.350313 master-0 kubenswrapper[29097]: E0312 18:34:22.350305 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-cliconfig podName:7bcb667e-8dae-4765-801c-20be827ea2be nodeName:}" failed. No retries permitted until 2026-03-12 18:34:22.850293052 +0000 UTC m=+302.404273149 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-cliconfig") pod "oauth-openshift-d798d76b8-2crc8" (UID: "7bcb667e-8dae-4765-801c-20be827ea2be") : configmap "v4-0-config-system-cliconfig" not found Mar 12 18:34:22.350358 master-0 kubenswrapper[29097]: I0312 18:34:22.350332 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-audit-policies\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.350428 master-0 kubenswrapper[29097]: I0312 18:34:22.350407 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.350573 master-0 kubenswrapper[29097]: I0312 18:34:22.350555 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7bcb667e-8dae-4765-801c-20be827ea2be-audit-dir\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.350732 master-0 kubenswrapper[29097]: I0312 18:34:22.350711 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.350852 master-0 kubenswrapper[29097]: I0312 18:34:22.350833 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-router-certs\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.350941 master-0 kubenswrapper[29097]: I0312 18:34:22.350610 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7bcb667e-8dae-4765-801c-20be827ea2be-audit-dir\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.351084 master-0 kubenswrapper[29097]: I0312 18:34:22.351027 29097 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-service-ca\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.351148 master-0 kubenswrapper[29097]: I0312 18:34:22.351041 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.352406 master-0 kubenswrapper[29097]: I0312 18:34:22.352357 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.353167 master-0 kubenswrapper[29097]: I0312 18:34:22.353121 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-user-template-error\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.354210 master-0 kubenswrapper[29097]: I0312 18:34:22.353924 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-user-template-login\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.354210 master-0 kubenswrapper[29097]: I0312 18:34:22.354089 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.354210 master-0 kubenswrapper[29097]: I0312 18:34:22.354170 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-router-certs\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.356145 master-0 kubenswrapper[29097]: I0312 18:34:22.355880 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.356145 master-0 kubenswrapper[29097]: I0312 18:34:22.355971 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: 
\"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.368012 master-0 kubenswrapper[29097]: I0312 18:34:22.367936 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbtzl\" (UniqueName: \"kubernetes.io/projected/7bcb667e-8dae-4765-801c-20be827ea2be-kube-api-access-jbtzl\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.860570 master-0 kubenswrapper[29097]: I0312 18:34:22.860430 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-session\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.860923 master-0 kubenswrapper[29097]: I0312 18:34:22.860620 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:22.860923 master-0 kubenswrapper[29097]: E0312 18:34:22.860647 29097 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found Mar 12 18:34:22.860923 master-0 kubenswrapper[29097]: E0312 18:34:22.860745 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-session podName:7bcb667e-8dae-4765-801c-20be827ea2be nodeName:}" failed. 
No retries permitted until 2026-03-12 18:34:23.860712335 +0000 UTC m=+303.414692472 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-session") pod "oauth-openshift-d798d76b8-2crc8" (UID: "7bcb667e-8dae-4765-801c-20be827ea2be") : secret "v4-0-config-system-session" not found Mar 12 18:34:22.861241 master-0 kubenswrapper[29097]: E0312 18:34:22.860976 29097 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 12 18:34:22.861241 master-0 kubenswrapper[29097]: E0312 18:34:22.861100 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-cliconfig podName:7bcb667e-8dae-4765-801c-20be827ea2be nodeName:}" failed. No retries permitted until 2026-03-12 18:34:23.861072444 +0000 UTC m=+303.415052581 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-cliconfig") pod "oauth-openshift-d798d76b8-2crc8" (UID: "7bcb667e-8dae-4765-801c-20be827ea2be") : configmap "v4-0-config-system-cliconfig" not found Mar 12 18:34:23.163421 master-0 kubenswrapper[29097]: I0312 18:34:23.163256 29097 patch_prober.go:28] interesting pod/console-68799679d4-tcwkt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body= Mar 12 18:34:23.163421 master-0 kubenswrapper[29097]: I0312 18:34:23.163322 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68799679d4-tcwkt" podUID="2f8180d8-5283-409a-b36e-4786c8483171" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" Mar 12 18:34:23.879622 master-0 kubenswrapper[29097]: I0312 18:34:23.879552 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-session\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:23.879927 master-0 kubenswrapper[29097]: I0312 18:34:23.879660 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:23.879927 master-0 kubenswrapper[29097]: E0312 
18:34:23.879762 29097 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 12 18:34:23.879927 master-0 kubenswrapper[29097]: E0312 18:34:23.879833 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-cliconfig podName:7bcb667e-8dae-4765-801c-20be827ea2be nodeName:}" failed. No retries permitted until 2026-03-12 18:34:25.879808108 +0000 UTC m=+305.433788245 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-cliconfig") pod "oauth-openshift-d798d76b8-2crc8" (UID: "7bcb667e-8dae-4765-801c-20be827ea2be") : configmap "v4-0-config-system-cliconfig" not found Mar 12 18:34:23.883911 master-0 kubenswrapper[29097]: I0312 18:34:23.883854 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-session\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:25.915126 master-0 kubenswrapper[29097]: I0312 18:34:25.915049 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d798d76b8-2crc8\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:25.915724 master-0 kubenswrapper[29097]: E0312 18:34:25.915260 29097 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" 
not found Mar 12 18:34:25.915724 master-0 kubenswrapper[29097]: E0312 18:34:25.915401 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-cliconfig podName:7bcb667e-8dae-4765-801c-20be827ea2be nodeName:}" failed. No retries permitted until 2026-03-12 18:34:29.915360098 +0000 UTC m=+309.469340235 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-cliconfig") pod "oauth-openshift-d798d76b8-2crc8" (UID: "7bcb667e-8dae-4765-801c-20be827ea2be") : configmap "v4-0-config-system-cliconfig" not found Mar 12 18:34:28.364164 master-0 kubenswrapper[29097]: I0312 18:34:28.364053 29097 patch_prober.go:28] interesting pod/console-6b98bc4d-xfxc9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 12 18:34:28.364164 master-0 kubenswrapper[29097]: I0312 18:34:28.364141 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b98bc4d-xfxc9" podUID="f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 12 18:34:28.559163 master-0 kubenswrapper[29097]: I0312 18:34:28.559047 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-d798d76b8-2crc8"] Mar 12 18:34:28.559984 master-0 kubenswrapper[29097]: E0312 18:34:28.559902 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[v4-0-config-system-cliconfig], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" 
podUID="7bcb667e-8dae-4765-801c-20be827ea2be" Mar 12 18:34:28.745886 master-0 kubenswrapper[29097]: I0312 18:34:28.745745 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:28.757475 master-0 kubenswrapper[29097]: I0312 18:34:28.757430 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:28.879408 master-0 kubenswrapper[29097]: I0312 18:34:28.879367 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-ocp-branding-template\") pod \"7bcb667e-8dae-4765-801c-20be827ea2be\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " Mar 12 18:34:28.879635 master-0 kubenswrapper[29097]: I0312 18:34:28.879426 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbtzl\" (UniqueName: \"kubernetes.io/projected/7bcb667e-8dae-4765-801c-20be827ea2be-kube-api-access-jbtzl\") pod \"7bcb667e-8dae-4765-801c-20be827ea2be\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " Mar 12 18:34:28.879635 master-0 kubenswrapper[29097]: I0312 18:34:28.879480 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-trusted-ca-bundle\") pod \"7bcb667e-8dae-4765-801c-20be827ea2be\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " Mar 12 18:34:28.879635 master-0 kubenswrapper[29097]: I0312 18:34:28.879528 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-service-ca\") pod 
\"7bcb667e-8dae-4765-801c-20be827ea2be\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " Mar 12 18:34:28.879635 master-0 kubenswrapper[29097]: I0312 18:34:28.879576 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-user-template-provider-selection\") pod \"7bcb667e-8dae-4765-801c-20be827ea2be\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " Mar 12 18:34:28.879635 master-0 kubenswrapper[29097]: I0312 18:34:28.879613 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-user-template-error\") pod \"7bcb667e-8dae-4765-801c-20be827ea2be\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " Mar 12 18:34:28.879876 master-0 kubenswrapper[29097]: I0312 18:34:28.879640 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-session\") pod \"7bcb667e-8dae-4765-801c-20be827ea2be\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " Mar 12 18:34:28.879876 master-0 kubenswrapper[29097]: I0312 18:34:28.879666 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-router-certs\") pod \"7bcb667e-8dae-4765-801c-20be827ea2be\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " Mar 12 18:34:28.879876 master-0 kubenswrapper[29097]: I0312 18:34:28.879693 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-serving-cert\") pod \"7bcb667e-8dae-4765-801c-20be827ea2be\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " Mar 12 18:34:28.879876 master-0 kubenswrapper[29097]: I0312 18:34:28.879723 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7bcb667e-8dae-4765-801c-20be827ea2be-audit-dir\") pod \"7bcb667e-8dae-4765-801c-20be827ea2be\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " Mar 12 18:34:28.879876 master-0 kubenswrapper[29097]: I0312 18:34:28.879764 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-user-template-login\") pod \"7bcb667e-8dae-4765-801c-20be827ea2be\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " Mar 12 18:34:28.879876 master-0 kubenswrapper[29097]: I0312 18:34:28.879796 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-audit-policies\") pod \"7bcb667e-8dae-4765-801c-20be827ea2be\" (UID: \"7bcb667e-8dae-4765-801c-20be827ea2be\") " Mar 12 18:34:28.880700 master-0 kubenswrapper[29097]: I0312 18:34:28.880304 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bcb667e-8dae-4765-801c-20be827ea2be-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "7bcb667e-8dae-4765-801c-20be827ea2be" (UID: "7bcb667e-8dae-4765-801c-20be827ea2be"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:34:28.880700 master-0 kubenswrapper[29097]: I0312 18:34:28.880436 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "7bcb667e-8dae-4765-801c-20be827ea2be" (UID: "7bcb667e-8dae-4765-801c-20be827ea2be"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:34:28.880700 master-0 kubenswrapper[29097]: I0312 18:34:28.880623 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "7bcb667e-8dae-4765-801c-20be827ea2be" (UID: "7bcb667e-8dae-4765-801c-20be827ea2be"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:34:28.880700 master-0 kubenswrapper[29097]: I0312 18:34:28.880552 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "7bcb667e-8dae-4765-801c-20be827ea2be" (UID: "7bcb667e-8dae-4765-801c-20be827ea2be"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:34:28.883566 master-0 kubenswrapper[29097]: I0312 18:34:28.883487 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "7bcb667e-8dae-4765-801c-20be827ea2be" (UID: "7bcb667e-8dae-4765-801c-20be827ea2be"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:34:28.883666 master-0 kubenswrapper[29097]: I0312 18:34:28.883607 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "7bcb667e-8dae-4765-801c-20be827ea2be" (UID: "7bcb667e-8dae-4765-801c-20be827ea2be"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:34:28.883810 master-0 kubenswrapper[29097]: I0312 18:34:28.883768 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bcb667e-8dae-4765-801c-20be827ea2be-kube-api-access-jbtzl" (OuterVolumeSpecName: "kube-api-access-jbtzl") pod "7bcb667e-8dae-4765-801c-20be827ea2be" (UID: "7bcb667e-8dae-4765-801c-20be827ea2be"). InnerVolumeSpecName "kube-api-access-jbtzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:34:28.884004 master-0 kubenswrapper[29097]: I0312 18:34:28.883981 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "7bcb667e-8dae-4765-801c-20be827ea2be" (UID: "7bcb667e-8dae-4765-801c-20be827ea2be"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:34:28.884341 master-0 kubenswrapper[29097]: I0312 18:34:28.884249 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "7bcb667e-8dae-4765-801c-20be827ea2be" (UID: "7bcb667e-8dae-4765-801c-20be827ea2be"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:34:28.884720 master-0 kubenswrapper[29097]: I0312 18:34:28.884671 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "7bcb667e-8dae-4765-801c-20be827ea2be" (UID: "7bcb667e-8dae-4765-801c-20be827ea2be"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:34:28.884901 master-0 kubenswrapper[29097]: I0312 18:34:28.884865 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "7bcb667e-8dae-4765-801c-20be827ea2be" (UID: "7bcb667e-8dae-4765-801c-20be827ea2be"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:34:28.885314 master-0 kubenswrapper[29097]: I0312 18:34:28.885273 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "7bcb667e-8dae-4765-801c-20be827ea2be" (UID: "7bcb667e-8dae-4765-801c-20be827ea2be"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:34:28.981439 master-0 kubenswrapper[29097]: I0312 18:34:28.981237 29097 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Mar 12 18:34:28.981439 master-0 kubenswrapper[29097]: I0312 18:34:28.981278 29097 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 12 18:34:28.981439 master-0 kubenswrapper[29097]: I0312 18:34:28.981293 29097 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 12 18:34:28.981439 master-0 kubenswrapper[29097]: I0312 18:34:28.981307 29097 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 18:34:28.981439 master-0 kubenswrapper[29097]: I0312 18:34:28.981320 29097 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 18:34:28.981439 master-0 kubenswrapper[29097]: I0312 18:34:28.981335 29097 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7bcb667e-8dae-4765-801c-20be827ea2be-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:34:28.981439 master-0 kubenswrapper[29097]: I0312 
18:34:28.981349 29097 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Mar 12 18:34:28.981439 master-0 kubenswrapper[29097]: I0312 18:34:28.981362 29097 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 12 18:34:28.981439 master-0 kubenswrapper[29097]: I0312 18:34:28.981376 29097 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 12 18:34:28.981439 master-0 kubenswrapper[29097]: I0312 18:34:28.981390 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbtzl\" (UniqueName: \"kubernetes.io/projected/7bcb667e-8dae-4765-801c-20be827ea2be-kube-api-access-jbtzl\") on node \"master-0\" DevicePath \"\"" Mar 12 18:34:28.981439 master-0 kubenswrapper[29097]: I0312 18:34:28.981402 29097 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:34:28.981439 master-0 kubenswrapper[29097]: I0312 18:34:28.981416 29097 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 18:34:29.753920 master-0 kubenswrapper[29097]: I0312 18:34:29.753857 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-d798d76b8-2crc8" Mar 12 18:34:29.807946 master-0 kubenswrapper[29097]: I0312 18:34:29.807879 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"] Mar 12 18:34:29.808499 master-0 kubenswrapper[29097]: E0312 18:34:29.808461 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" containerName="console" Mar 12 18:34:29.808619 master-0 kubenswrapper[29097]: I0312 18:34:29.808505 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" containerName="console" Mar 12 18:34:29.809032 master-0 kubenswrapper[29097]: I0312 18:34:29.809002 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee109d34-abfd-4bb8-93e6-9e9c28ccec0e" containerName="console" Mar 12 18:34:29.811215 master-0 kubenswrapper[29097]: I0312 18:34:29.811170 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.815143 master-0 kubenswrapper[29097]: I0312 18:34:29.815092 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 12 18:34:29.815937 master-0 kubenswrapper[29097]: I0312 18:34:29.815901 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 12 18:34:29.816275 master-0 kubenswrapper[29097]: I0312 18:34:29.816241 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 12 18:34:29.816592 master-0 kubenswrapper[29097]: I0312 18:34:29.816553 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-d798d76b8-2crc8"]
Mar 12 18:34:29.821745 master-0 kubenswrapper[29097]: I0312 18:34:29.821705 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 12 18:34:29.822082 master-0 kubenswrapper[29097]: I0312 18:34:29.822059 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 12 18:34:29.822193 master-0 kubenswrapper[29097]: I0312 18:34:29.822177 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 12 18:34:29.822317 master-0 kubenswrapper[29097]: I0312 18:34:29.822300 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 12 18:34:29.822474 master-0 kubenswrapper[29097]: I0312 18:34:29.822457 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 12 18:34:29.822609 master-0 kubenswrapper[29097]: I0312 18:34:29.822593 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 12 18:34:29.822741 master-0 kubenswrapper[29097]: I0312 18:34:29.822722 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 12 18:34:29.823594 master-0 kubenswrapper[29097]: I0312 18:34:29.823566 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-d798d76b8-2crc8"]
Mar 12 18:34:29.823701 master-0 kubenswrapper[29097]: I0312 18:34:29.823682 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 12 18:34:29.824028 master-0 kubenswrapper[29097]: I0312 18:34:29.823986 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-8jt4h"
Mar 12 18:34:29.832079 master-0 kubenswrapper[29097]: I0312 18:34:29.832035 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 12 18:34:29.838624 master-0 kubenswrapper[29097]: I0312 18:34:29.838590 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"]
Mar 12 18:34:29.845945 master-0 kubenswrapper[29097]: I0312 18:34:29.845886 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 12 18:34:29.895874 master-0 kubenswrapper[29097]: I0312 18:34:29.895797 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.895874 master-0 kubenswrapper[29097]: I0312 18:34:29.895872 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-user-template-error\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.896156 master-0 kubenswrapper[29097]: I0312 18:34:29.895905 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-session\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.896156 master-0 kubenswrapper[29097]: I0312 18:34:29.895934 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.896156 master-0 kubenswrapper[29097]: I0312 18:34:29.895963 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.896156 master-0 kubenswrapper[29097]: I0312 18:34:29.895995 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.896156 master-0 kubenswrapper[29097]: I0312 18:34:29.896032 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.896156 master-0 kubenswrapper[29097]: I0312 18:34:29.896079 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-user-template-login\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.896156 master-0 kubenswrapper[29097]: I0312 18:34:29.896099 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv5l6\" (UniqueName: \"kubernetes.io/projected/dd1c9272-702a-4134-b476-91ee66dc43dc-kube-api-access-kv5l6\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.896156 master-0 kubenswrapper[29097]: I0312 18:34:29.896128 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.897266 master-0 kubenswrapper[29097]: I0312 18:34:29.896168 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-audit-policies\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.897266 master-0 kubenswrapper[29097]: I0312 18:34:29.896194 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd1c9272-702a-4134-b476-91ee66dc43dc-audit-dir\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.897266 master-0 kubenswrapper[29097]: I0312 18:34:29.896237 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.897266 master-0 kubenswrapper[29097]: I0312 18:34:29.896287 29097 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7bcb667e-8dae-4765-801c-20be827ea2be-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\""
Mar 12 18:34:29.997233 master-0 kubenswrapper[29097]: I0312 18:34:29.997138 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-session\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.997233 master-0 kubenswrapper[29097]: I0312 18:34:29.997222 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.997680 master-0 kubenswrapper[29097]: I0312 18:34:29.997271 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.997680 master-0 kubenswrapper[29097]: I0312 18:34:29.997314 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.997680 master-0 kubenswrapper[29097]: I0312 18:34:29.997368 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.997680 master-0 kubenswrapper[29097]: I0312 18:34:29.997413 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-user-template-login\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.997680 master-0 kubenswrapper[29097]: I0312 18:34:29.997445 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv5l6\" (UniqueName: \"kubernetes.io/projected/dd1c9272-702a-4134-b476-91ee66dc43dc-kube-api-access-kv5l6\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.997680 master-0 kubenswrapper[29097]: I0312 18:34:29.997487 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.997680 master-0 kubenswrapper[29097]: I0312 18:34:29.997564 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-audit-policies\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.997680 master-0 kubenswrapper[29097]: I0312 18:34:29.997608 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd1c9272-702a-4134-b476-91ee66dc43dc-audit-dir\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.997680 master-0 kubenswrapper[29097]: I0312 18:34:29.997672 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.998301 master-0 kubenswrapper[29097]: I0312 18:34:29.997736 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.998301 master-0 kubenswrapper[29097]: I0312 18:34:29.997784 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-user-template-error\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:29.998301 master-0 kubenswrapper[29097]: I0312 18:34:29.998071 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd1c9272-702a-4134-b476-91ee66dc43dc-audit-dir\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:30.000643 master-0 kubenswrapper[29097]: I0312 18:34:30.000580 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-audit-policies\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:30.001295 master-0 kubenswrapper[29097]: I0312 18:34:30.001228 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:30.002641 master-0 kubenswrapper[29097]: I0312 18:34:30.002145 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:30.002641 master-0 kubenswrapper[29097]: I0312 18:34:30.002329 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:30.002897 master-0 kubenswrapper[29097]: I0312 18:34:30.002845 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-user-template-error\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:30.003795 master-0 kubenswrapper[29097]: I0312 18:34:30.003757 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:30.005338 master-0 kubenswrapper[29097]: I0312 18:34:30.005237 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:30.005625 master-0 kubenswrapper[29097]: I0312 18:34:30.005499 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:30.006297 master-0 kubenswrapper[29097]: I0312 18:34:30.006237 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:30.008493 master-0 kubenswrapper[29097]: I0312 18:34:30.008418 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-user-template-login\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:30.009139 master-0 kubenswrapper[29097]: I0312 18:34:30.009075 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-session\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:30.018466 master-0 kubenswrapper[29097]: I0312 18:34:30.018405 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv5l6\" (UniqueName: \"kubernetes.io/projected/dd1c9272-702a-4134-b476-91ee66dc43dc-kube-api-access-kv5l6\") pod \"oauth-openshift-7c9f57fd64-zpdxn\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:30.142169 master-0 kubenswrapper[29097]: I0312 18:34:30.142076 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:30.601231 master-0 kubenswrapper[29097]: I0312 18:34:30.601180 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"]
Mar 12 18:34:30.609297 master-0 kubenswrapper[29097]: W0312 18:34:30.609240 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd1c9272_702a_4134_b476_91ee66dc43dc.slice/crio-39c1a12a17beecbab75c81b6d4e44c42daad16690939a63c1ab604700c07f752 WatchSource:0}: Error finding container 39c1a12a17beecbab75c81b6d4e44c42daad16690939a63c1ab604700c07f752: Status 404 returned error can't find the container with id 39c1a12a17beecbab75c81b6d4e44c42daad16690939a63c1ab604700c07f752
Mar 12 18:34:30.736341 master-0 kubenswrapper[29097]: I0312 18:34:30.736246 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bcb667e-8dae-4765-801c-20be827ea2be" path="/var/lib/kubelet/pods/7bcb667e-8dae-4765-801c-20be827ea2be/volumes"
Mar 12 18:34:30.761190 master-0 kubenswrapper[29097]: I0312 18:34:30.761128 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn" event={"ID":"dd1c9272-702a-4134-b476-91ee66dc43dc","Type":"ContainerStarted","Data":"39c1a12a17beecbab75c81b6d4e44c42daad16690939a63c1ab604700c07f752"}
Mar 12 18:34:33.163330 master-0 kubenswrapper[29097]: I0312 18:34:33.163234 29097 patch_prober.go:28] interesting pod/console-68799679d4-tcwkt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body=
Mar 12 18:34:33.163330 master-0 kubenswrapper[29097]: I0312 18:34:33.163300 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68799679d4-tcwkt" podUID="2f8180d8-5283-409a-b36e-4786c8483171" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused"
Mar 12 18:34:33.788696 master-0 kubenswrapper[29097]: I0312 18:34:33.788549 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn" event={"ID":"dd1c9272-702a-4134-b476-91ee66dc43dc","Type":"ContainerStarted","Data":"7975f652d90e3f50e0487660fee053da7f9566334eafd0f9ddd007f615d8d957"}
Mar 12 18:34:33.789333 master-0 kubenswrapper[29097]: I0312 18:34:33.789282 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:33.827383 master-0 kubenswrapper[29097]: I0312 18:34:33.827267 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn" podStartSLOduration=3.267672129 podStartE2EDuration="5.82722112s" podCreationTimestamp="2026-03-12 18:34:28 +0000 UTC" firstStartedPulling="2026-03-12 18:34:30.612826719 +0000 UTC m=+310.166806816" lastFinishedPulling="2026-03-12 18:34:33.17237571 +0000 UTC m=+312.726355807" observedRunningTime="2026-03-12 18:34:33.816021991 +0000 UTC m=+313.370002108" watchObservedRunningTime="2026-03-12 18:34:33.82722112 +0000 UTC m=+313.381201227"
Mar 12 18:34:33.934006 master-0 kubenswrapper[29097]: I0312 18:34:33.933953 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"
Mar 12 18:34:38.011130 master-0 kubenswrapper[29097]: I0312 18:34:38.011064 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:34:38.039480 master-0 kubenswrapper[29097]: I0312 18:34:38.039406 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:34:38.364470 master-0 kubenswrapper[29097]: I0312 18:34:38.364376 29097 patch_prober.go:28] interesting pod/console-6b98bc4d-xfxc9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body=
Mar 12 18:34:38.364762 master-0 kubenswrapper[29097]: I0312 18:34:38.364482 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b98bc4d-xfxc9" podUID="f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused"
Mar 12 18:34:38.874924 master-0 kubenswrapper[29097]: I0312 18:34:38.874873 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 18:34:43.163735 master-0 kubenswrapper[29097]: I0312 18:34:43.163649 29097 patch_prober.go:28] interesting pod/console-68799679d4-tcwkt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body=
Mar 12 18:34:43.165377 master-0 kubenswrapper[29097]: I0312 18:34:43.164755 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68799679d4-tcwkt" podUID="2f8180d8-5283-409a-b36e-4786c8483171" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused"
Mar 12 18:34:47.959215 master-0 kubenswrapper[29097]: I0312 18:34:47.959101 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Mar 12 18:34:47.960654 master-0 kubenswrapper[29097]: I0312 18:34:47.960604 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Mar 12 18:34:47.963365 master-0 kubenswrapper[29097]: I0312 18:34:47.963045 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 12 18:34:47.971207 master-0 kubenswrapper[29097]: I0312 18:34:47.970925 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-h72pz"
Mar 12 18:34:48.013353 master-0 kubenswrapper[29097]: I0312 18:34:48.013287 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Mar 12 18:34:48.034606 master-0 kubenswrapper[29097]: I0312 18:34:48.033929 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a514abd-72d2-4281-a679-77d4e6158c9f-kube-api-access\") pod \"installer-5-master-0\" (UID: \"0a514abd-72d2-4281-a679-77d4e6158c9f\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 12 18:34:48.034606 master-0 kubenswrapper[29097]: I0312 18:34:48.034082 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a514abd-72d2-4281-a679-77d4e6158c9f-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"0a514abd-72d2-4281-a679-77d4e6158c9f\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 12 18:34:48.034606 master-0 kubenswrapper[29097]: I0312 18:34:48.034192 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0a514abd-72d2-4281-a679-77d4e6158c9f-var-lock\") pod \"installer-5-master-0\" (UID: \"0a514abd-72d2-4281-a679-77d4e6158c9f\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 12 18:34:48.136003 master-0 kubenswrapper[29097]: I0312 18:34:48.135926 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0a514abd-72d2-4281-a679-77d4e6158c9f-var-lock\") pod \"installer-5-master-0\" (UID: \"0a514abd-72d2-4281-a679-77d4e6158c9f\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 12 18:34:48.136234 master-0 kubenswrapper[29097]: I0312 18:34:48.136035 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0a514abd-72d2-4281-a679-77d4e6158c9f-var-lock\") pod \"installer-5-master-0\" (UID: \"0a514abd-72d2-4281-a679-77d4e6158c9f\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 12 18:34:48.136234 master-0 kubenswrapper[29097]: I0312 18:34:48.136055 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a514abd-72d2-4281-a679-77d4e6158c9f-kube-api-access\") pod \"installer-5-master-0\" (UID: \"0a514abd-72d2-4281-a679-77d4e6158c9f\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 12 18:34:48.136341 master-0 kubenswrapper[29097]: I0312 18:34:48.136280 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a514abd-72d2-4281-a679-77d4e6158c9f-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"0a514abd-72d2-4281-a679-77d4e6158c9f\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 12 18:34:48.136487 master-0 kubenswrapper[29097]: I0312 18:34:48.136415 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a514abd-72d2-4281-a679-77d4e6158c9f-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"0a514abd-72d2-4281-a679-77d4e6158c9f\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 12 18:34:48.160477 master-0 kubenswrapper[29097]: I0312 18:34:48.160399 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a514abd-72d2-4281-a679-77d4e6158c9f-kube-api-access\") pod \"installer-5-master-0\" (UID: \"0a514abd-72d2-4281-a679-77d4e6158c9f\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 12 18:34:48.336686 master-0 kubenswrapper[29097]: I0312 18:34:48.336614 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Mar 12 18:34:48.365860 master-0 kubenswrapper[29097]: I0312 18:34:48.365806 29097 patch_prober.go:28] interesting pod/console-6b98bc4d-xfxc9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body=
Mar 12 18:34:48.365957 master-0 kubenswrapper[29097]: I0312 18:34:48.365881 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b98bc4d-xfxc9" podUID="f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused"
Mar 12 18:34:48.829599 master-0 kubenswrapper[29097]: I0312 18:34:48.829503 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Mar 12 18:34:48.923988 master-0 kubenswrapper[29097]: I0312 18:34:48.923895 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"0a514abd-72d2-4281-a679-77d4e6158c9f","Type":"ContainerStarted","Data":"9b0f3ca5a42544c6cfe6a6d6429d18f20fe2743e2955ad68dcb4476fe176e3ff"}
Mar 12 18:34:49.938777 master-0 kubenswrapper[29097]: I0312 18:34:49.938540 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"0a514abd-72d2-4281-a679-77d4e6158c9f","Type":"ContainerStarted","Data":"f5e2db81b19f6700c2cdac16d21f40471e8bf77eb3d3a8f65c149848f2446d45"}
Mar 12 18:34:49.970233 master-0 kubenswrapper[29097]: I0312 18:34:49.970125 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=2.970100344 podStartE2EDuration="2.970100344s" podCreationTimestamp="2026-03-12 18:34:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:34:49.961284245 +0000 UTC m=+329.515264352" watchObservedRunningTime="2026-03-12 18:34:49.970100344 +0000 UTC m=+329.524080501"
Mar 12 18:34:53.164208 master-0 kubenswrapper[29097]: I0312 18:34:53.164128 29097 patch_prober.go:28] interesting pod/console-68799679d4-tcwkt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body=
Mar 12 18:34:53.164888 master-0 kubenswrapper[29097]: I0312 18:34:53.164224 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68799679d4-tcwkt" podUID="2f8180d8-5283-409a-b36e-4786c8483171" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused"
Mar 12 18:34:58.364857 master-0 kubenswrapper[29097]: I0312 18:34:58.364775 29097 patch_prober.go:28] interesting pod/console-6b98bc4d-xfxc9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body=
Mar 12 18:34:58.365918 master-0 kubenswrapper[29097]: I0312 18:34:58.364884 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b98bc4d-xfxc9" podUID="f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused"
Mar 12 18:35:00.170913 master-0 kubenswrapper[29097]: I0312 18:35:00.170856 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 12 18:35:00.171975 master-0 kubenswrapper[29097]: I0312 18:35:00.171948 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 12 18:35:00.176736 master-0 kubenswrapper[29097]: I0312 18:35:00.176689 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 12 18:35:00.176976 master-0 kubenswrapper[29097]: I0312 18:35:00.176929 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-tr8hr"
Mar 12 18:35:00.190219 master-0 kubenswrapper[29097]: I0312 18:35:00.190156 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 12 18:35:00.263786 master-0 kubenswrapper[29097]: I0312 18:35:00.263718 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa84cbbb-e30c-4630-9d4a-8e64b207d4bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"fa84cbbb-e30c-4630-9d4a-8e64b207d4bd\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 12 18:35:00.263786 master-0 kubenswrapper[29097]: I0312 18:35:00.263780 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fa84cbbb-e30c-4630-9d4a-8e64b207d4bd-var-lock\") pod \"installer-4-master-0\" (UID: \"fa84cbbb-e30c-4630-9d4a-8e64b207d4bd\") "
pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 18:35:00.264024 master-0 kubenswrapper[29097]: I0312 18:35:00.263880 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa84cbbb-e30c-4630-9d4a-8e64b207d4bd-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"fa84cbbb-e30c-4630-9d4a-8e64b207d4bd\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 18:35:00.365657 master-0 kubenswrapper[29097]: I0312 18:35:00.365594 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa84cbbb-e30c-4630-9d4a-8e64b207d4bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"fa84cbbb-e30c-4630-9d4a-8e64b207d4bd\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 18:35:00.365848 master-0 kubenswrapper[29097]: I0312 18:35:00.365689 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fa84cbbb-e30c-4630-9d4a-8e64b207d4bd-var-lock\") pod \"installer-4-master-0\" (UID: \"fa84cbbb-e30c-4630-9d4a-8e64b207d4bd\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 18:35:00.365848 master-0 kubenswrapper[29097]: I0312 18:35:00.365782 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fa84cbbb-e30c-4630-9d4a-8e64b207d4bd-var-lock\") pod \"installer-4-master-0\" (UID: \"fa84cbbb-e30c-4630-9d4a-8e64b207d4bd\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 18:35:00.365974 master-0 kubenswrapper[29097]: I0312 18:35:00.365938 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa84cbbb-e30c-4630-9d4a-8e64b207d4bd-kubelet-dir\") pod \"installer-4-master-0\" (UID: 
\"fa84cbbb-e30c-4630-9d4a-8e64b207d4bd\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 18:35:00.366209 master-0 kubenswrapper[29097]: I0312 18:35:00.366176 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa84cbbb-e30c-4630-9d4a-8e64b207d4bd-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"fa84cbbb-e30c-4630-9d4a-8e64b207d4bd\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 18:35:00.384818 master-0 kubenswrapper[29097]: I0312 18:35:00.384765 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa84cbbb-e30c-4630-9d4a-8e64b207d4bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"fa84cbbb-e30c-4630-9d4a-8e64b207d4bd\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 18:35:00.513208 master-0 kubenswrapper[29097]: I0312 18:35:00.513103 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 18:35:00.939709 master-0 kubenswrapper[29097]: I0312 18:35:00.938653 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 12 18:35:00.941781 master-0 kubenswrapper[29097]: W0312 18:35:00.941736 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfa84cbbb_e30c_4630_9d4a_8e64b207d4bd.slice/crio-578eb61079a92a5b32c2d92d97ffd6d2f96dcb1b31c40587a1d185c5ba27e96e WatchSource:0}: Error finding container 578eb61079a92a5b32c2d92d97ffd6d2f96dcb1b31c40587a1d185c5ba27e96e: Status 404 returned error can't find the container with id 578eb61079a92a5b32c2d92d97ffd6d2f96dcb1b31c40587a1d185c5ba27e96e Mar 12 18:35:01.036620 master-0 kubenswrapper[29097]: I0312 18:35:01.036560 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"fa84cbbb-e30c-4630-9d4a-8e64b207d4bd","Type":"ContainerStarted","Data":"578eb61079a92a5b32c2d92d97ffd6d2f96dcb1b31c40587a1d185c5ba27e96e"} Mar 12 18:35:02.048407 master-0 kubenswrapper[29097]: I0312 18:35:02.048148 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"fa84cbbb-e30c-4630-9d4a-8e64b207d4bd","Type":"ContainerStarted","Data":"587738a3b14acb38f92478918dd11ba02514019556206015b7687a6f4db85543"} Mar 12 18:35:02.089812 master-0 kubenswrapper[29097]: I0312 18:35:02.089712 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=2.089685469 podStartE2EDuration="2.089685469s" podCreationTimestamp="2026-03-12 18:35:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:35:02.076717006 +0000 UTC 
m=+341.630697183" watchObservedRunningTime="2026-03-12 18:35:02.089685469 +0000 UTC m=+341.643665606" Mar 12 18:35:03.163411 master-0 kubenswrapper[29097]: I0312 18:35:03.163326 29097 patch_prober.go:28] interesting pod/console-68799679d4-tcwkt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body= Mar 12 18:35:03.164254 master-0 kubenswrapper[29097]: I0312 18:35:03.163413 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68799679d4-tcwkt" podUID="2f8180d8-5283-409a-b36e-4786c8483171" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" Mar 12 18:35:08.363927 master-0 kubenswrapper[29097]: I0312 18:35:08.363827 29097 patch_prober.go:28] interesting pod/console-6b98bc4d-xfxc9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 12 18:35:08.364562 master-0 kubenswrapper[29097]: I0312 18:35:08.363930 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b98bc4d-xfxc9" podUID="f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 12 18:35:13.163150 master-0 kubenswrapper[29097]: I0312 18:35:13.163081 29097 patch_prober.go:28] interesting pod/console-68799679d4-tcwkt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body= Mar 12 18:35:13.163150 master-0 kubenswrapper[29097]: I0312 18:35:13.163161 29097 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-console/console-68799679d4-tcwkt" podUID="2f8180d8-5283-409a-b36e-4786c8483171" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" Mar 12 18:35:17.949079 master-0 kubenswrapper[29097]: I0312 18:35:17.948995 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-68799679d4-tcwkt"] Mar 12 18:35:18.005566 master-0 kubenswrapper[29097]: I0312 18:35:18.002282 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-fb478b976-xhpmp"] Mar 12 18:35:18.005566 master-0 kubenswrapper[29097]: I0312 18:35:18.003443 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.012976 master-0 kubenswrapper[29097]: I0312 18:35:18.012930 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-fb478b976-xhpmp"] Mar 12 18:35:18.179835 master-0 kubenswrapper[29097]: I0312 18:35:18.179737 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-console-config\") pod \"console-fb478b976-xhpmp\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.180050 master-0 kubenswrapper[29097]: I0312 18:35:18.179883 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-oauth-serving-cert\") pod \"console-fb478b976-xhpmp\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.180050 master-0 kubenswrapper[29097]: I0312 18:35:18.179994 29097 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-console-serving-cert\") pod \"console-fb478b976-xhpmp\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.180123 master-0 kubenswrapper[29097]: I0312 18:35:18.180064 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-trusted-ca-bundle\") pod \"console-fb478b976-xhpmp\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.180161 master-0 kubenswrapper[29097]: I0312 18:35:18.180144 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-service-ca\") pod \"console-fb478b976-xhpmp\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.180203 master-0 kubenswrapper[29097]: I0312 18:35:18.180179 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9tx4\" (UniqueName: \"kubernetes.io/projected/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-kube-api-access-l9tx4\") pod \"console-fb478b976-xhpmp\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.180262 master-0 kubenswrapper[29097]: I0312 18:35:18.180235 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-console-oauth-config\") pod \"console-fb478b976-xhpmp\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " 
pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.282261 master-0 kubenswrapper[29097]: I0312 18:35:18.282109 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-console-serving-cert\") pod \"console-fb478b976-xhpmp\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.282261 master-0 kubenswrapper[29097]: I0312 18:35:18.282208 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-trusted-ca-bundle\") pod \"console-fb478b976-xhpmp\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.282261 master-0 kubenswrapper[29097]: I0312 18:35:18.282257 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-service-ca\") pod \"console-fb478b976-xhpmp\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.282644 master-0 kubenswrapper[29097]: I0312 18:35:18.282301 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9tx4\" (UniqueName: \"kubernetes.io/projected/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-kube-api-access-l9tx4\") pod \"console-fb478b976-xhpmp\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.282644 master-0 kubenswrapper[29097]: I0312 18:35:18.282341 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-console-oauth-config\") pod \"console-fb478b976-xhpmp\" (UID: 
\"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.282644 master-0 kubenswrapper[29097]: I0312 18:35:18.282390 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-console-config\") pod \"console-fb478b976-xhpmp\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.282644 master-0 kubenswrapper[29097]: I0312 18:35:18.282451 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-oauth-serving-cert\") pod \"console-fb478b976-xhpmp\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.283729 master-0 kubenswrapper[29097]: I0312 18:35:18.283688 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-oauth-serving-cert\") pod \"console-fb478b976-xhpmp\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.285363 master-0 kubenswrapper[29097]: I0312 18:35:18.285330 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-console-config\") pod \"console-fb478b976-xhpmp\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.285497 master-0 kubenswrapper[29097]: I0312 18:35:18.285470 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-trusted-ca-bundle\") pod \"console-fb478b976-xhpmp\" 
(UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.285823 master-0 kubenswrapper[29097]: I0312 18:35:18.285780 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-service-ca\") pod \"console-fb478b976-xhpmp\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.289601 master-0 kubenswrapper[29097]: I0312 18:35:18.286911 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-console-serving-cert\") pod \"console-fb478b976-xhpmp\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.289601 master-0 kubenswrapper[29097]: I0312 18:35:18.289165 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-console-oauth-config\") pod \"console-fb478b976-xhpmp\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.306937 master-0 kubenswrapper[29097]: I0312 18:35:18.306879 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9tx4\" (UniqueName: \"kubernetes.io/projected/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-kube-api-access-l9tx4\") pod \"console-fb478b976-xhpmp\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.331290 master-0 kubenswrapper[29097]: I0312 18:35:18.331231 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:18.364157 master-0 kubenswrapper[29097]: I0312 18:35:18.364097 29097 patch_prober.go:28] interesting pod/console-6b98bc4d-xfxc9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 12 18:35:18.364352 master-0 kubenswrapper[29097]: I0312 18:35:18.364171 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b98bc4d-xfxc9" podUID="f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 12 18:35:18.812087 master-0 kubenswrapper[29097]: W0312 18:35:18.812037 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9d8ed6f_1332_471a_891a_9f7f8dbc78b6.slice/crio-c46dc53d677723b7c24e1d6c1784e62b58852f61c48feea210a9ecdcf76ab779 WatchSource:0}: Error finding container c46dc53d677723b7c24e1d6c1784e62b58852f61c48feea210a9ecdcf76ab779: Status 404 returned error can't find the container with id c46dc53d677723b7c24e1d6c1784e62b58852f61c48feea210a9ecdcf76ab779 Mar 12 18:35:18.816567 master-0 kubenswrapper[29097]: I0312 18:35:18.816500 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-fb478b976-xhpmp"] Mar 12 18:35:19.211204 master-0 kubenswrapper[29097]: I0312 18:35:19.211087 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fb478b976-xhpmp" event={"ID":"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6","Type":"ContainerStarted","Data":"718140b4abb58963bee21e1495289d75b397d95cf81fe483bc008b0a0b8126f8"} Mar 12 18:35:19.211709 master-0 kubenswrapper[29097]: I0312 18:35:19.211692 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-fb478b976-xhpmp" event={"ID":"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6","Type":"ContainerStarted","Data":"c46dc53d677723b7c24e1d6c1784e62b58852f61c48feea210a9ecdcf76ab779"} Mar 12 18:35:21.214485 master-0 kubenswrapper[29097]: E0312 18:35:21.214432 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547 Mar 12 18:35:27.435111 master-0 kubenswrapper[29097]: I0312 18:35:27.435065 29097 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 12 18:35:27.438422 master-0 kubenswrapper[29097]: I0312 18:35:27.435947 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:35:27.438422 master-0 kubenswrapper[29097]: I0312 18:35:27.436656 29097 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 12 18:35:27.438422 master-0 kubenswrapper[29097]: I0312 18:35:27.436840 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver" containerID="cri-o://c9bc9e878bed3b90772ae5003d1b2ca4996292289bf7d1cc533124053934668a" gracePeriod=15 Mar 12 18:35:27.438422 master-0 kubenswrapper[29097]: I0312 18:35:27.436880 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e7ce24ab0b9f229716c1a4b1a3fb2207e524cf67808c81bc6c33b4728c2eace5" gracePeriod=15 Mar 12 18:35:27.438422 master-0 kubenswrapper[29097]: I0312 18:35:27.436940 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d875071495799394d0af1424c90324a61a48bf14e6ee5a465b2d494f41651511" gracePeriod=15 Mar 12 18:35:27.438422 master-0 kubenswrapper[29097]: I0312 18:35:27.436955 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-syncer" containerID="cri-o://bf20bfc4d81330e9293a7f1910215e8cf740a716550d0740753717eae110e681" gracePeriod=15 Mar 12 18:35:27.438422 master-0 kubenswrapper[29097]: I0312 18:35:27.436857 29097 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" containerID="cri-o://40c14f8ed3988e0ad05d3439bdeed6169aee5127ecaba660a91f16c1877529e6" gracePeriod=15 Mar 12 18:35:27.439627 master-0 kubenswrapper[29097]: I0312 18:35:27.438580 29097 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 12 18:35:27.439627 master-0 kubenswrapper[29097]: E0312 18:35:27.438762 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-insecure-readyz" Mar 12 18:35:27.439627 master-0 kubenswrapper[29097]: I0312 18:35:27.438775 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-insecure-readyz" Mar 12 18:35:27.439627 master-0 kubenswrapper[29097]: E0312 18:35:27.438862 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" Mar 12 18:35:27.439627 master-0 kubenswrapper[29097]: I0312 18:35:27.438872 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" Mar 12 18:35:27.439627 master-0 kubenswrapper[29097]: E0312 18:35:27.438883 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="setup" Mar 12 18:35:27.439627 master-0 kubenswrapper[29097]: I0312 18:35:27.438889 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="setup" Mar 12 18:35:27.439627 master-0 kubenswrapper[29097]: E0312 18:35:27.438901 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-syncer" Mar 12 
18:35:27.439627 master-0 kubenswrapper[29097]: I0312 18:35:27.438907 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-syncer" Mar 12 18:35:27.439627 master-0 kubenswrapper[29097]: E0312 18:35:27.438914 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 18:35:27.439627 master-0 kubenswrapper[29097]: I0312 18:35:27.438920 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 18:35:27.439627 master-0 kubenswrapper[29097]: E0312 18:35:27.438931 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver" Mar 12 18:35:27.439627 master-0 kubenswrapper[29097]: I0312 18:35:27.438937 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver" Mar 12 18:35:27.439627 master-0 kubenswrapper[29097]: I0312 18:35:27.439072 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 18:35:27.439627 master-0 kubenswrapper[29097]: I0312 18:35:27.439092 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver" Mar 12 18:35:27.439627 master-0 kubenswrapper[29097]: I0312 18:35:27.439106 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" Mar 12 18:35:27.439627 master-0 kubenswrapper[29097]: I0312 18:35:27.439119 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" 
containerName="kube-apiserver-insecure-readyz" Mar 12 18:35:27.439627 master-0 kubenswrapper[29097]: I0312 18:35:27.439133 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-syncer" Mar 12 18:35:27.541642 master-0 kubenswrapper[29097]: E0312 18:35:27.536928 29097 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:35:27.541642 master-0 kubenswrapper[29097]: I0312 18:35:27.541342 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:35:27.541642 master-0 kubenswrapper[29097]: I0312 18:35:27.541389 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:35:27.541642 master-0 kubenswrapper[29097]: I0312 18:35:27.541463 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:35:27.541642 master-0 kubenswrapper[29097]: I0312 18:35:27.541483 29097 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:35:27.541642 master-0 kubenswrapper[29097]: I0312 18:35:27.541501 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:35:27.541642 master-0 kubenswrapper[29097]: I0312 18:35:27.541575 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:35:27.542056 master-0 kubenswrapper[29097]: I0312 18:35:27.541782 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:35:27.542056 master-0 kubenswrapper[29097]: I0312 18:35:27.541837 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: 
\"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:35:27.643831 master-0 kubenswrapper[29097]: I0312 18:35:27.643752 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:35:27.644028 master-0 kubenswrapper[29097]: I0312 18:35:27.643765 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:35:27.644028 master-0 kubenswrapper[29097]: I0312 18:35:27.643931 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:35:27.644028 master-0 kubenswrapper[29097]: I0312 18:35:27.643974 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:35:27.644028 master-0 kubenswrapper[29097]: I0312 18:35:27.644017 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") pod 
\"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:35:27.644289 master-0 kubenswrapper[29097]: I0312 18:35:27.644037 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:35:27.644289 master-0 kubenswrapper[29097]: I0312 18:35:27.644048 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:35:27.644289 master-0 kubenswrapper[29097]: I0312 18:35:27.644092 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:35:27.644289 master-0 kubenswrapper[29097]: I0312 18:35:27.644101 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:35:27.644289 master-0 kubenswrapper[29097]: I0312 18:35:27.644223 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:35:27.644482 master-0 kubenswrapper[29097]: I0312 18:35:27.644300 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:35:27.644482 master-0 kubenswrapper[29097]: I0312 18:35:27.644342 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:35:27.644482 master-0 kubenswrapper[29097]: I0312 18:35:27.644368 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:35:27.644482 master-0 kubenswrapper[29097]: I0312 18:35:27.644418 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:35:27.644482 master-0 kubenswrapper[29097]: I0312 18:35:27.644414 29097 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:35:27.644741 master-0 kubenswrapper[29097]: I0312 18:35:27.644555 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:35:27.838508 master-0 kubenswrapper[29097]: I0312 18:35:27.838443 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:35:27.869119 master-0 kubenswrapper[29097]: E0312 18:35:27.868946 29097 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189c2bd0855d149c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:a814bd60de133d95cf99630a978c017e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:35:27.867794588 +0000 UTC m=+367.421774685,LastTimestamp:2026-03-12 18:35:27.867794588 +0000 UTC 
m=+367.421774685,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 18:35:28.310146 master-0 kubenswrapper[29097]: I0312 18:35:28.310071 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-cert-syncer/0.log" Mar 12 18:35:28.311155 master-0 kubenswrapper[29097]: I0312 18:35:28.311110 29097 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="40c14f8ed3988e0ad05d3439bdeed6169aee5127ecaba660a91f16c1877529e6" exitCode=0 Mar 12 18:35:28.311224 master-0 kubenswrapper[29097]: I0312 18:35:28.311146 29097 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="d875071495799394d0af1424c90324a61a48bf14e6ee5a465b2d494f41651511" exitCode=0 Mar 12 18:35:28.311224 master-0 kubenswrapper[29097]: I0312 18:35:28.311178 29097 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="e7ce24ab0b9f229716c1a4b1a3fb2207e524cf67808c81bc6c33b4728c2eace5" exitCode=0 Mar 12 18:35:28.311224 master-0 kubenswrapper[29097]: I0312 18:35:28.311188 29097 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="bf20bfc4d81330e9293a7f1910215e8cf740a716550d0740753717eae110e681" exitCode=2 Mar 12 18:35:28.313542 master-0 kubenswrapper[29097]: I0312 18:35:28.313488 29097 generic.go:334] "Generic (PLEG): container finished" podID="0a514abd-72d2-4281-a679-77d4e6158c9f" containerID="f5e2db81b19f6700c2cdac16d21f40471e8bf77eb3d3a8f65c149848f2446d45" exitCode=0 Mar 12 18:35:28.313689 master-0 kubenswrapper[29097]: I0312 18:35:28.313628 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" 
event={"ID":"0a514abd-72d2-4281-a679-77d4e6158c9f","Type":"ContainerDied","Data":"f5e2db81b19f6700c2cdac16d21f40471e8bf77eb3d3a8f65c149848f2446d45"} Mar 12 18:35:28.315172 master-0 kubenswrapper[29097]: I0312 18:35:28.315078 29097 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:35:28.315913 master-0 kubenswrapper[29097]: I0312 18:35:28.315847 29097 status_manager.go:851] "Failed to get status for pod" podUID="0a514abd-72d2-4281-a679-77d4e6158c9f" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:35:28.316123 master-0 kubenswrapper[29097]: I0312 18:35:28.316072 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"a814bd60de133d95cf99630a978c017e","Type":"ContainerStarted","Data":"0aca5d3bee881b5f583870ed7c13fc7c245cbcbdac48aa3d637220f53a08e1f4"} Mar 12 18:35:28.316183 master-0 kubenswrapper[29097]: I0312 18:35:28.316143 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"a814bd60de133d95cf99630a978c017e","Type":"ContainerStarted","Data":"e8dfb27d14ac4f4bca8e3f3badf1889a1f8835293c0a9d8306b83fce293d3cdb"} Mar 12 18:35:28.317136 master-0 kubenswrapper[29097]: I0312 18:35:28.317078 29097 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:35:28.317136 master-0 kubenswrapper[29097]: E0312 18:35:28.317110 29097 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:35:28.317887 master-0 kubenswrapper[29097]: I0312 18:35:28.317829 29097 status_manager.go:851] "Failed to get status for pod" podUID="0a514abd-72d2-4281-a679-77d4e6158c9f" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:35:28.331964 master-0 kubenswrapper[29097]: I0312 18:35:28.331926 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:28.332040 master-0 kubenswrapper[29097]: I0312 18:35:28.332015 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:35:28.334818 master-0 kubenswrapper[29097]: I0312 18:35:28.334765 29097 patch_prober.go:28] interesting pod/console-fb478b976-xhpmp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.109:8443/health\": dial tcp 10.128.0.109:8443: connect: connection refused" start-of-body= Mar 12 18:35:28.334900 master-0 kubenswrapper[29097]: I0312 18:35:28.334848 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-fb478b976-xhpmp" podUID="a9d8ed6f-1332-471a-891a-9f7f8dbc78b6" containerName="console" probeResult="failure" output="Get 
\"https://10.128.0.109:8443/health\": dial tcp 10.128.0.109:8443: connect: connection refused" Mar 12 18:35:28.364410 master-0 kubenswrapper[29097]: I0312 18:35:28.364305 29097 patch_prober.go:28] interesting pod/console-6b98bc4d-xfxc9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 12 18:35:28.364476 master-0 kubenswrapper[29097]: I0312 18:35:28.364392 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b98bc4d-xfxc9" podUID="f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 12 18:35:29.329221 master-0 kubenswrapper[29097]: E0312 18:35:29.328557 29097 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 18:35:29.794653 master-0 kubenswrapper[29097]: I0312 18:35:29.794507 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 18:35:29.795377 master-0 kubenswrapper[29097]: I0312 18:35:29.795304 29097 status_manager.go:851] "Failed to get status for pod" podUID="0a514abd-72d2-4281-a679-77d4e6158c9f" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:35:29.889399 master-0 kubenswrapper[29097]: I0312 18:35:29.889287 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a514abd-72d2-4281-a679-77d4e6158c9f-kubelet-dir\") pod \"0a514abd-72d2-4281-a679-77d4e6158c9f\" (UID: \"0a514abd-72d2-4281-a679-77d4e6158c9f\") " Mar 12 18:35:29.889399 master-0 kubenswrapper[29097]: I0312 18:35:29.889341 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a514abd-72d2-4281-a679-77d4e6158c9f-kube-api-access\") pod \"0a514abd-72d2-4281-a679-77d4e6158c9f\" (UID: \"0a514abd-72d2-4281-a679-77d4e6158c9f\") " Mar 12 18:35:29.889623 master-0 kubenswrapper[29097]: I0312 18:35:29.889404 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a514abd-72d2-4281-a679-77d4e6158c9f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0a514abd-72d2-4281-a679-77d4e6158c9f" (UID: "0a514abd-72d2-4281-a679-77d4e6158c9f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:35:29.889678 master-0 kubenswrapper[29097]: I0312 18:35:29.889635 29097 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a514abd-72d2-4281-a679-77d4e6158c9f-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:35:29.892239 master-0 kubenswrapper[29097]: I0312 18:35:29.892178 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a514abd-72d2-4281-a679-77d4e6158c9f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0a514abd-72d2-4281-a679-77d4e6158c9f" (UID: "0a514abd-72d2-4281-a679-77d4e6158c9f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:35:29.990242 master-0 kubenswrapper[29097]: I0312 18:35:29.990183 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0a514abd-72d2-4281-a679-77d4e6158c9f-var-lock\") pod \"0a514abd-72d2-4281-a679-77d4e6158c9f\" (UID: \"0a514abd-72d2-4281-a679-77d4e6158c9f\") " Mar 12 18:35:29.990429 master-0 kubenswrapper[29097]: I0312 18:35:29.990288 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a514abd-72d2-4281-a679-77d4e6158c9f-var-lock" (OuterVolumeSpecName: "var-lock") pod "0a514abd-72d2-4281-a679-77d4e6158c9f" (UID: "0a514abd-72d2-4281-a679-77d4e6158c9f"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:35:29.990551 master-0 kubenswrapper[29097]: I0312 18:35:29.990518 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a514abd-72d2-4281-a679-77d4e6158c9f-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 18:35:29.990616 master-0 kubenswrapper[29097]: I0312 18:35:29.990573 29097 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0a514abd-72d2-4281-a679-77d4e6158c9f-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 18:35:30.340120 master-0 kubenswrapper[29097]: I0312 18:35:30.340051 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-cert-syncer/0.log" Mar 12 18:35:30.341523 master-0 kubenswrapper[29097]: I0312 18:35:30.341457 29097 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="c9bc9e878bed3b90772ae5003d1b2ca4996292289bf7d1cc533124053934668a" exitCode=0 Mar 12 18:35:30.344112 master-0 kubenswrapper[29097]: I0312 18:35:30.344040 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"0a514abd-72d2-4281-a679-77d4e6158c9f","Type":"ContainerDied","Data":"9b0f3ca5a42544c6cfe6a6d6429d18f20fe2743e2955ad68dcb4476fe176e3ff"} Mar 12 18:35:30.344112 master-0 kubenswrapper[29097]: I0312 18:35:30.344094 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b0f3ca5a42544c6cfe6a6d6429d18f20fe2743e2955ad68dcb4476fe176e3ff" Mar 12 18:35:30.344339 master-0 kubenswrapper[29097]: I0312 18:35:30.344170 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 18:35:30.365155 master-0 kubenswrapper[29097]: I0312 18:35:30.365088 29097 status_manager.go:851] "Failed to get status for pod" podUID="0a514abd-72d2-4281-a679-77d4e6158c9f" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:35:30.471612 master-0 kubenswrapper[29097]: I0312 18:35:30.471426 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-cert-syncer/0.log" Mar 12 18:35:30.473340 master-0 kubenswrapper[29097]: I0312 18:35:30.473217 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:35:30.475558 master-0 kubenswrapper[29097]: I0312 18:35:30.475023 29097 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:35:30.477139 master-0 kubenswrapper[29097]: I0312 18:35:30.476987 29097 status_manager.go:851] "Failed to get status for pod" podUID="0a514abd-72d2-4281-a679-77d4e6158c9f" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:35:30.599585 master-0 kubenswrapper[29097]: I0312 18:35:30.599372 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"48512e02022680c9d90092634f0fc146\" (UID: \"48512e02022680c9d90092634f0fc146\") " Mar 12 18:35:30.599585 master-0 kubenswrapper[29097]: I0312 18:35:30.599457 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"48512e02022680c9d90092634f0fc146\" (UID: \"48512e02022680c9d90092634f0fc146\") " Mar 12 18:35:30.599585 master-0 kubenswrapper[29097]: I0312 18:35:30.599520 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"48512e02022680c9d90092634f0fc146\" (UID: \"48512e02022680c9d90092634f0fc146\") " Mar 12 18:35:30.600104 master-0 kubenswrapper[29097]: I0312 18:35:30.599601 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "48512e02022680c9d90092634f0fc146" (UID: "48512e02022680c9d90092634f0fc146"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:35:30.600104 master-0 kubenswrapper[29097]: I0312 18:35:30.599657 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "48512e02022680c9d90092634f0fc146" (UID: "48512e02022680c9d90092634f0fc146"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:35:30.600104 master-0 kubenswrapper[29097]: I0312 18:35:30.599779 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "48512e02022680c9d90092634f0fc146" (UID: "48512e02022680c9d90092634f0fc146"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:35:30.600104 master-0 kubenswrapper[29097]: I0312 18:35:30.599867 29097 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:35:30.600104 master-0 kubenswrapper[29097]: I0312 18:35:30.599881 29097 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:35:30.600104 master-0 kubenswrapper[29097]: I0312 18:35:30.599891 29097 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:35:30.727997 master-0 kubenswrapper[29097]: I0312 18:35:30.727935 29097 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:35:30.729248 master-0 kubenswrapper[29097]: I0312 18:35:30.729061 29097 status_manager.go:851] "Failed to get status for pod" podUID="0a514abd-72d2-4281-a679-77d4e6158c9f" pod="openshift-kube-apiserver/installer-5-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:35:30.730027 master-0 kubenswrapper[29097]: I0312 18:35:30.729984 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48512e02022680c9d90092634f0fc146" path="/var/lib/kubelet/pods/48512e02022680c9d90092634f0fc146/volumes" Mar 12 18:35:31.366737 master-0 kubenswrapper[29097]: I0312 18:35:31.366699 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-cert-syncer/0.log" Mar 12 18:35:31.368012 master-0 kubenswrapper[29097]: I0312 18:35:31.367975 29097 scope.go:117] "RemoveContainer" containerID="40c14f8ed3988e0ad05d3439bdeed6169aee5127ecaba660a91f16c1877529e6" Mar 12 18:35:31.368135 master-0 kubenswrapper[29097]: I0312 18:35:31.368120 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:35:31.371672 master-0 kubenswrapper[29097]: I0312 18:35:31.371475 29097 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:35:31.372770 master-0 kubenswrapper[29097]: I0312 18:35:31.372692 29097 status_manager.go:851] "Failed to get status for pod" podUID="0a514abd-72d2-4281-a679-77d4e6158c9f" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:35:31.373969 master-0 kubenswrapper[29097]: I0312 18:35:31.373894 29097 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:35:31.374972 master-0 kubenswrapper[29097]: I0312 18:35:31.374916 29097 status_manager.go:851] "Failed to get status for pod" podUID="0a514abd-72d2-4281-a679-77d4e6158c9f" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 18:35:31.393036 master-0 kubenswrapper[29097]: I0312 18:35:31.392970 29097 scope.go:117] "RemoveContainer" containerID="d875071495799394d0af1424c90324a61a48bf14e6ee5a465b2d494f41651511" Mar 12 18:35:31.410783 master-0 kubenswrapper[29097]: I0312 
18:35:31.410744 29097 scope.go:117] "RemoveContainer" containerID="e7ce24ab0b9f229716c1a4b1a3fb2207e524cf67808c81bc6c33b4728c2eace5" Mar 12 18:35:31.434094 master-0 kubenswrapper[29097]: I0312 18:35:31.433687 29097 scope.go:117] "RemoveContainer" containerID="bf20bfc4d81330e9293a7f1910215e8cf740a716550d0740753717eae110e681" Mar 12 18:35:31.451153 master-0 kubenswrapper[29097]: I0312 18:35:31.451114 29097 scope.go:117] "RemoveContainer" containerID="c9bc9e878bed3b90772ae5003d1b2ca4996292289bf7d1cc533124053934668a" Mar 12 18:35:31.477383 master-0 kubenswrapper[29097]: I0312 18:35:31.477351 29097 scope.go:117] "RemoveContainer" containerID="fc961e393edb556b8cbec8ab17e54863b3322ecccf97284873c2a4ada171ec46" Mar 12 18:35:32.381623 master-0 kubenswrapper[29097]: I0312 18:35:32.381560 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-4-master-0_fa84cbbb-e30c-4630-9d4a-8e64b207d4bd/installer/0.log" Mar 12 18:35:32.382396 master-0 kubenswrapper[29097]: I0312 18:35:32.381630 29097 generic.go:334] "Generic (PLEG): container finished" podID="fa84cbbb-e30c-4630-9d4a-8e64b207d4bd" containerID="587738a3b14acb38f92478918dd11ba02514019556206015b7687a6f4db85543" exitCode=1 Mar 12 18:35:32.382396 master-0 kubenswrapper[29097]: I0312 18:35:32.381669 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"fa84cbbb-e30c-4630-9d4a-8e64b207d4bd","Type":"ContainerDied","Data":"587738a3b14acb38f92478918dd11ba02514019556206015b7687a6f4db85543"} Mar 12 18:35:32.382636 master-0 kubenswrapper[29097]: I0312 18:35:32.382573 29097 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 
18:35:32.383168 master-0 kubenswrapper[29097]: I0312 18:35:32.383102 29097 status_manager.go:851] "Failed to get status for pod" podUID="0a514abd-72d2-4281-a679-77d4e6158c9f" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 18:35:32.383714 master-0 kubenswrapper[29097]: I0312 18:35:32.383658 29097 status_manager.go:851] "Failed to get status for pod" podUID="fa84cbbb-e30c-4630-9d4a-8e64b207d4bd" pod="openshift-kube-controller-manager/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 18:35:32.918149 master-0 kubenswrapper[29097]: E0312 18:35:32.917999 29097 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189c2bd0855d149c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:a814bd60de133d95cf99630a978c017e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 18:35:27.867794588 +0000 UTC m=+367.421774685,LastTimestamp:2026-03-12 18:35:27.867794588 +0000 UTC m=+367.421774685,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 18:35:33.700806 master-0 kubenswrapper[29097]: I0312 18:35:33.700398 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-4-master-0_fa84cbbb-e30c-4630-9d4a-8e64b207d4bd/installer/0.log"
Mar 12 18:35:33.700806 master-0 kubenswrapper[29097]: I0312 18:35:33.700455 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 12 18:35:33.701237 master-0 kubenswrapper[29097]: I0312 18:35:33.700998 29097 status_manager.go:851] "Failed to get status for pod" podUID="fa84cbbb-e30c-4630-9d4a-8e64b207d4bd" pod="openshift-kube-controller-manager/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 18:35:33.701357 master-0 kubenswrapper[29097]: I0312 18:35:33.701326 29097 status_manager.go:851] "Failed to get status for pod" podUID="0a514abd-72d2-4281-a679-77d4e6158c9f" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 18:35:33.868129 master-0 kubenswrapper[29097]: I0312 18:35:33.868055 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fa84cbbb-e30c-4630-9d4a-8e64b207d4bd-var-lock\") pod \"fa84cbbb-e30c-4630-9d4a-8e64b207d4bd\" (UID: \"fa84cbbb-e30c-4630-9d4a-8e64b207d4bd\") "
Mar 12 18:35:33.868349 master-0 kubenswrapper[29097]: I0312 18:35:33.868228 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa84cbbb-e30c-4630-9d4a-8e64b207d4bd-var-lock" (OuterVolumeSpecName: "var-lock") pod "fa84cbbb-e30c-4630-9d4a-8e64b207d4bd" (UID: "fa84cbbb-e30c-4630-9d4a-8e64b207d4bd"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:35:33.868349 master-0 kubenswrapper[29097]: I0312 18:35:33.868289 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa84cbbb-e30c-4630-9d4a-8e64b207d4bd-kube-api-access\") pod \"fa84cbbb-e30c-4630-9d4a-8e64b207d4bd\" (UID: \"fa84cbbb-e30c-4630-9d4a-8e64b207d4bd\") "
Mar 12 18:35:33.868601 master-0 kubenswrapper[29097]: I0312 18:35:33.868436 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa84cbbb-e30c-4630-9d4a-8e64b207d4bd-kubelet-dir\") pod \"fa84cbbb-e30c-4630-9d4a-8e64b207d4bd\" (UID: \"fa84cbbb-e30c-4630-9d4a-8e64b207d4bd\") "
Mar 12 18:35:33.868601 master-0 kubenswrapper[29097]: I0312 18:35:33.868538 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa84cbbb-e30c-4630-9d4a-8e64b207d4bd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fa84cbbb-e30c-4630-9d4a-8e64b207d4bd" (UID: "fa84cbbb-e30c-4630-9d4a-8e64b207d4bd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:35:33.869258 master-0 kubenswrapper[29097]: I0312 18:35:33.869207 29097 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa84cbbb-e30c-4630-9d4a-8e64b207d4bd-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 18:35:33.869258 master-0 kubenswrapper[29097]: I0312 18:35:33.869253 29097 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fa84cbbb-e30c-4630-9d4a-8e64b207d4bd-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 18:35:33.879882 master-0 kubenswrapper[29097]: I0312 18:35:33.879810 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa84cbbb-e30c-4630-9d4a-8e64b207d4bd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fa84cbbb-e30c-4630-9d4a-8e64b207d4bd" (UID: "fa84cbbb-e30c-4630-9d4a-8e64b207d4bd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:35:33.970536 master-0 kubenswrapper[29097]: I0312 18:35:33.970399 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa84cbbb-e30c-4630-9d4a-8e64b207d4bd-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 12 18:35:34.398267 master-0 kubenswrapper[29097]: I0312 18:35:34.398106 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-4-master-0_fa84cbbb-e30c-4630-9d4a-8e64b207d4bd/installer/0.log"
Mar 12 18:35:34.398267 master-0 kubenswrapper[29097]: I0312 18:35:34.398183 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"fa84cbbb-e30c-4630-9d4a-8e64b207d4bd","Type":"ContainerDied","Data":"578eb61079a92a5b32c2d92d97ffd6d2f96dcb1b31c40587a1d185c5ba27e96e"}
Mar 12 18:35:34.398267 master-0 kubenswrapper[29097]: I0312 18:35:34.398214 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="578eb61079a92a5b32c2d92d97ffd6d2f96dcb1b31c40587a1d185c5ba27e96e"
Mar 12 18:35:34.398267 master-0 kubenswrapper[29097]: I0312 18:35:34.398255 29097 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 12 18:35:34.425947 master-0 kubenswrapper[29097]: I0312 18:35:34.425835 29097 status_manager.go:851] "Failed to get status for pod" podUID="0a514abd-72d2-4281-a679-77d4e6158c9f" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 18:35:34.426869 master-0 kubenswrapper[29097]: I0312 18:35:34.426819 29097 status_manager.go:851] "Failed to get status for pod" podUID="fa84cbbb-e30c-4630-9d4a-8e64b207d4bd" pod="openshift-kube-controller-manager/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 18:35:34.745992 master-0 kubenswrapper[29097]: E0312 18:35:34.745835 29097 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 18:35:34.747085 master-0 kubenswrapper[29097]: E0312 18:35:34.747004 29097 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 18:35:34.748175 master-0 kubenswrapper[29097]: E0312 18:35:34.748115 29097 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 18:35:34.748856 master-0 kubenswrapper[29097]: E0312 18:35:34.748805 29097 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 18:35:34.749345 master-0 kubenswrapper[29097]: E0312 18:35:34.749305 29097 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 18:35:34.749408 master-0 kubenswrapper[29097]: I0312 18:35:34.749336 29097 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 12 18:35:34.750006 master-0 kubenswrapper[29097]: E0312 18:35:34.749948 29097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Mar 12 18:35:34.951181 master-0 kubenswrapper[29097]: E0312 18:35:34.951105 29097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Mar 12 18:35:35.353356 master-0 kubenswrapper[29097]: E0312 18:35:35.353245 29097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Mar 12 18:35:36.155259 master-0 kubenswrapper[29097]: E0312 18:35:36.155187 29097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Mar 12 18:35:37.756912 master-0 kubenswrapper[29097]: E0312 18:35:37.756842 29097 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Mar 12 18:35:38.331970 master-0 kubenswrapper[29097]: I0312 18:35:38.331927 29097 patch_prober.go:28] interesting pod/console-fb478b976-xhpmp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.109:8443/health\": dial tcp 10.128.0.109:8443: connect: connection refused" start-of-body=
Mar 12 18:35:38.332299 master-0 kubenswrapper[29097]: I0312 18:35:38.332267 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-fb478b976-xhpmp" podUID="a9d8ed6f-1332-471a-891a-9f7f8dbc78b6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.109:8443/health\": dial tcp 10.128.0.109:8443: connect: connection refused"
Mar 12 18:35:38.363928 master-0 kubenswrapper[29097]: I0312 18:35:38.363873 29097 patch_prober.go:28] interesting pod/console-6b98bc4d-xfxc9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body=
Mar 12 18:35:38.364102 master-0 kubenswrapper[29097]: I0312 18:35:38.363939 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b98bc4d-xfxc9" podUID="f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused"
Mar 12 18:35:38.720392 master-0 kubenswrapper[29097]: I0312 18:35:38.720281 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:35:38.722553 master-0 kubenswrapper[29097]: I0312 18:35:38.722494 29097 status_manager.go:851] "Failed to get status for pod" podUID="0a514abd-72d2-4281-a679-77d4e6158c9f" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 18:35:38.724084 master-0 kubenswrapper[29097]: I0312 18:35:38.724049 29097 status_manager.go:851] "Failed to get status for pod" podUID="fa84cbbb-e30c-4630-9d4a-8e64b207d4bd" pod="openshift-kube-controller-manager/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 18:35:38.746139 master-0 kubenswrapper[29097]: I0312 18:35:38.746066 29097 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="823203d3-b506-4b3a-87a5-37f5c25eac99"
Mar 12 18:35:38.746139 master-0 kubenswrapper[29097]: I0312 18:35:38.746119 29097 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="823203d3-b506-4b3a-87a5-37f5c25eac99"
Mar 12 18:35:38.747107 master-0 kubenswrapper[29097]: E0312 18:35:38.746995 29097 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:35:38.758685 master-0 kubenswrapper[29097]: I0312 18:35:38.758630 29097 util.go:30] "No sandbox for pod can be
found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:35:39.439246 master-0 kubenswrapper[29097]: I0312 18:35:39.439136 29097 generic.go:334] "Generic (PLEG): container finished" podID="36d4251d3504cdc0ec85144c1379056c" containerID="ff218b375c79caa5b922c7bad1ac00cf0d3ffac4b58338c1395234394d43d7bf" exitCode=0
Mar 12 18:35:39.439568 master-0 kubenswrapper[29097]: I0312 18:35:39.439235 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerDied","Data":"ff218b375c79caa5b922c7bad1ac00cf0d3ffac4b58338c1395234394d43d7bf"}
Mar 12 18:35:39.439670 master-0 kubenswrapper[29097]: I0312 18:35:39.439590 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"e2a57fda6a99349cb11933dfafff52e1dedfcb43d1fe91febbe5d401609ef77c"}
Mar 12 18:35:39.439994 master-0 kubenswrapper[29097]: I0312 18:35:39.439966 29097 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="823203d3-b506-4b3a-87a5-37f5c25eac99"
Mar 12 18:35:39.439994 master-0 kubenswrapper[29097]: I0312 18:35:39.439992 29097 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="823203d3-b506-4b3a-87a5-37f5c25eac99"
Mar 12 18:35:39.440840 master-0 kubenswrapper[29097]: I0312 18:35:39.440774 29097 status_manager.go:851] "Failed to get status for pod" podUID="0a514abd-72d2-4281-a679-77d4e6158c9f" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 18:35:39.440910 master-0 kubenswrapper[29097]: E0312 18:35:39.440836 29097 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:35:39.441591 master-0 kubenswrapper[29097]: I0312 18:35:39.441504 29097 status_manager.go:851] "Failed to get status for pod" podUID="fa84cbbb-e30c-4630-9d4a-8e64b207d4bd" pod="openshift-kube-controller-manager/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 18:35:40.454418 master-0 kubenswrapper[29097]: I0312 18:35:40.452822 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"3d70e6195dc62849c9f6232346e1abe62d989e29c08dc8c745644fa0a848bf73"}
Mar 12 18:35:40.454418 master-0 kubenswrapper[29097]: I0312 18:35:40.452875 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"da85d24c97d936d69a06df6bb2a2d2fb6dd1708ce28597ebcde70e48a6b04156"}
Mar 12 18:35:40.454418 master-0 kubenswrapper[29097]: I0312 18:35:40.452886 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"c281878a85824958ab9be1a5508717464835ad5b43df350cffb335e5aba09a0e"}
Mar 12 18:35:41.463584 master-0 kubenswrapper[29097]: I0312 18:35:41.463500 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"e79981f119f3a9fe5aabae995ee8a00b5aa5e2a57e9cb2089545e673409b183c"}
Mar 12 18:35:41.463584 master-0 kubenswrapper[29097]: I0312 18:35:41.463576 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"c71a69a1d14ee201c8d43bf31412ca18c4624022e9d397810739f8a2ed0de329"}
Mar 12 18:35:41.464155 master-0 kubenswrapper[29097]: I0312 18:35:41.463628 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:35:41.464155 master-0 kubenswrapper[29097]: I0312 18:35:41.463740 29097 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="823203d3-b506-4b3a-87a5-37f5c25eac99"
Mar 12 18:35:41.464155 master-0 kubenswrapper[29097]: I0312 18:35:41.463765 29097 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="823203d3-b506-4b3a-87a5-37f5c25eac99"
Mar 12 18:35:43.000296 master-0 kubenswrapper[29097]: I0312 18:35:43.000205 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-68799679d4-tcwkt" podUID="2f8180d8-5283-409a-b36e-4786c8483171" containerName="console" containerID="cri-o://92d0d3642838b699586062a9bd0f4f5d1078e1604de639f4aa04e3d8c53b3dfe" gracePeriod=15
Mar 12 18:35:43.244183 master-0 kubenswrapper[29097]: I0312 18:35:43.244135 29097 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Mar 12 18:35:43.244451 master-0 kubenswrapper[29097]: I0312 18:35:43.244420 29097 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 12 18:35:43.482927 master-0 kubenswrapper[29097]: I0312 18:35:43.482863 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-68799679d4-tcwkt_2f8180d8-5283-409a-b36e-4786c8483171/console/0.log"
Mar 12 18:35:43.482927 master-0 kubenswrapper[29097]: I0312 18:35:43.482926 29097 generic.go:334] "Generic (PLEG): container finished" podID="2f8180d8-5283-409a-b36e-4786c8483171" containerID="92d0d3642838b699586062a9bd0f4f5d1078e1604de639f4aa04e3d8c53b3dfe" exitCode=2
Mar 12 18:35:43.483244 master-0 kubenswrapper[29097]: I0312 18:35:43.482995 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68799679d4-tcwkt" event={"ID":"2f8180d8-5283-409a-b36e-4786c8483171","Type":"ContainerDied","Data":"92d0d3642838b699586062a9bd0f4f5d1078e1604de639f4aa04e3d8c53b3dfe"}
Mar 12 18:35:43.485286 master-0 kubenswrapper[29097]: I0312 18:35:43.485247 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_49835aec35bdc5feca0d7cf24779b8da/kube-controller-manager/2.log"
Mar 12 18:35:43.486252 master-0 kubenswrapper[29097]: I0312 18:35:43.486212 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_49835aec35bdc5feca0d7cf24779b8da/kube-controller-manager/1.log"
Mar 12 18:35:43.487946 master-0 kubenswrapper[29097]: I0312 18:35:43.487875 29097 generic.go:334] "Generic (PLEG): container finished" podID="49835aec35bdc5feca0d7cf24779b8da" containerID="d8bc1ab80b512a9b34b5a39f82b7bfc61939a83d8fe4158a7181d62b837fd9c1" exitCode=1
Mar 12 18:35:43.487946
master-0 kubenswrapper[29097]: I0312 18:35:43.487926 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"49835aec35bdc5feca0d7cf24779b8da","Type":"ContainerDied","Data":"d8bc1ab80b512a9b34b5a39f82b7bfc61939a83d8fe4158a7181d62b837fd9c1"}
Mar 12 18:35:43.488106 master-0 kubenswrapper[29097]: I0312 18:35:43.487981 29097 scope.go:117] "RemoveContainer" containerID="62dd3567f8e7ab9a6f6b7c22887f90bf8f9e191c219728fb147a35e29d0e7d8e"
Mar 12 18:35:43.489229 master-0 kubenswrapper[29097]: I0312 18:35:43.489184 29097 scope.go:117] "RemoveContainer" containerID="d8bc1ab80b512a9b34b5a39f82b7bfc61939a83d8fe4158a7181d62b837fd9c1"
Mar 12 18:35:43.490331 master-0 kubenswrapper[29097]: E0312 18:35:43.490224 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(49835aec35bdc5feca0d7cf24779b8da)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="49835aec35bdc5feca0d7cf24779b8da"
Mar 12 18:35:43.594076 master-0 kubenswrapper[29097]: I0312 18:35:43.594022 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-68799679d4-tcwkt_2f8180d8-5283-409a-b36e-4786c8483171/console/0.log"
Mar 12 18:35:43.594255 master-0 kubenswrapper[29097]: I0312 18:35:43.594121 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-68799679d4-tcwkt"
Mar 12 18:35:43.720252 master-0 kubenswrapper[29097]: I0312 18:35:43.720194 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 18:35:43.755271 master-0 kubenswrapper[29097]: I0312 18:35:43.755189 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "2f8180d8-5283-409a-b36e-4786c8483171" (UID: "2f8180d8-5283-409a-b36e-4786c8483171"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:35:43.755271 master-0 kubenswrapper[29097]: I0312 18:35:43.754333 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-trusted-ca-bundle\") pod \"2f8180d8-5283-409a-b36e-4786c8483171\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") "
Mar 12 18:35:43.755758 master-0 kubenswrapper[29097]: I0312 18:35:43.755593 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-service-ca\") pod \"2f8180d8-5283-409a-b36e-4786c8483171\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") "
Mar 12 18:35:43.755874 master-0 kubenswrapper[29097]: I0312 18:35:43.755785 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-console-config\") pod \"2f8180d8-5283-409a-b36e-4786c8483171\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") "
Mar 12 18:35:43.755874 master-0 kubenswrapper[29097]: I0312 18:35:43.755833 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f8180d8-5283-409a-b36e-4786c8483171-console-serving-cert\") pod \"2f8180d8-5283-409a-b36e-4786c8483171\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") "
Mar 12 18:35:43.756060 master-0 kubenswrapper[29097]: I0312 18:35:43.755886 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-oauth-serving-cert\") pod \"2f8180d8-5283-409a-b36e-4786c8483171\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") "
Mar 12 18:35:43.756060 master-0 kubenswrapper[29097]: I0312 18:35:43.755946 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9rfs\" (UniqueName: \"kubernetes.io/projected/2f8180d8-5283-409a-b36e-4786c8483171-kube-api-access-n9rfs\") pod \"2f8180d8-5283-409a-b36e-4786c8483171\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") "
Mar 12 18:35:43.756060 master-0 kubenswrapper[29097]: I0312 18:35:43.755979 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2f8180d8-5283-409a-b36e-4786c8483171-console-oauth-config\") pod \"2f8180d8-5283-409a-b36e-4786c8483171\" (UID: \"2f8180d8-5283-409a-b36e-4786c8483171\") "
Mar 12 18:35:43.758270 master-0 kubenswrapper[29097]: I0312 18:35:43.757847 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-console-config" (OuterVolumeSpecName: "console-config") pod "2f8180d8-5283-409a-b36e-4786c8483171" (UID: "2f8180d8-5283-409a-b36e-4786c8483171"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:35:43.759412 master-0 kubenswrapper[29097]: I0312 18:35:43.758733 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:35:43.759412 master-0 kubenswrapper[29097]: I0312 18:35:43.758845 29097 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 18:35:43.759412 master-0 kubenswrapper[29097]: I0312 18:35:43.759061 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:35:43.760250 master-0 kubenswrapper[29097]: I0312 18:35:43.760030 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2f8180d8-5283-409a-b36e-4786c8483171" (UID: "2f8180d8-5283-409a-b36e-4786c8483171"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:35:43.760915 master-0 kubenswrapper[29097]: I0312 18:35:43.760857 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-service-ca" (OuterVolumeSpecName: "service-ca") pod "2f8180d8-5283-409a-b36e-4786c8483171" (UID: "2f8180d8-5283-409a-b36e-4786c8483171"). InnerVolumeSpecName "service-ca".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:35:43.761405 master-0 kubenswrapper[29097]: I0312 18:35:43.761321 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8180d8-5283-409a-b36e-4786c8483171-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2f8180d8-5283-409a-b36e-4786c8483171" (UID: "2f8180d8-5283-409a-b36e-4786c8483171"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:35:43.763724 master-0 kubenswrapper[29097]: I0312 18:35:43.763669 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8180d8-5283-409a-b36e-4786c8483171-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2f8180d8-5283-409a-b36e-4786c8483171" (UID: "2f8180d8-5283-409a-b36e-4786c8483171"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:35:43.765167 master-0 kubenswrapper[29097]: I0312 18:35:43.765122 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f8180d8-5283-409a-b36e-4786c8483171-kube-api-access-n9rfs" (OuterVolumeSpecName: "kube-api-access-n9rfs") pod "2f8180d8-5283-409a-b36e-4786c8483171" (UID: "2f8180d8-5283-409a-b36e-4786c8483171"). InnerVolumeSpecName "kube-api-access-n9rfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:35:43.767716 master-0 kubenswrapper[29097]: I0312 18:35:43.767656 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:35:43.861137 master-0 kubenswrapper[29097]: I0312 18:35:43.860961 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9rfs\" (UniqueName: \"kubernetes.io/projected/2f8180d8-5283-409a-b36e-4786c8483171-kube-api-access-n9rfs\") on node \"master-0\" DevicePath \"\""
Mar 12 18:35:43.861137 master-0 kubenswrapper[29097]: I0312 18:35:43.861038 29097 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2f8180d8-5283-409a-b36e-4786c8483171-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 12 18:35:43.861137 master-0 kubenswrapper[29097]: I0312 18:35:43.861062 29097 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 12 18:35:43.861137 master-0 kubenswrapper[29097]: I0312 18:35:43.861086 29097 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-console-config\") on node \"master-0\" DevicePath \"\""
Mar 12 18:35:43.861137 master-0 kubenswrapper[29097]: I0312 18:35:43.861106 29097 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f8180d8-5283-409a-b36e-4786c8483171-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 18:35:43.861137 master-0 kubenswrapper[29097]: I0312 18:35:43.861129 29097 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2f8180d8-5283-409a-b36e-4786c8483171-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 18:35:44.499669 master-0 kubenswrapper[29097]: I0312 18:35:44.499598 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-68799679d4-tcwkt_2f8180d8-5283-409a-b36e-4786c8483171/console/0.log"
Mar 12 18:35:44.499669 master-0 kubenswrapper[29097]: I0312 18:35:44.499666 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68799679d4-tcwkt" event={"ID":"2f8180d8-5283-409a-b36e-4786c8483171","Type":"ContainerDied","Data":"16748130ffd70bebb1c38ebc6df15f662f6ac8e77eb06b99d5842eb185edd374"}
Mar 12 18:35:44.500647 master-0 kubenswrapper[29097]: I0312 18:35:44.499701 29097 scope.go:117] "RemoveContainer" containerID="92d0d3642838b699586062a9bd0f4f5d1078e1604de639f4aa04e3d8c53b3dfe"
Mar 12 18:35:44.500647 master-0 kubenswrapper[29097]: I0312 18:35:44.499761 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-68799679d4-tcwkt"
Mar 12 18:35:44.502671 master-0 kubenswrapper[29097]: I0312 18:35:44.502635 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_49835aec35bdc5feca0d7cf24779b8da/kube-controller-manager/2.log"
Mar 12 18:35:44.505624 master-0 kubenswrapper[29097]: I0312 18:35:44.505557 29097 scope.go:117] "RemoveContainer" containerID="d8bc1ab80b512a9b34b5a39f82b7bfc61939a83d8fe4158a7181d62b837fd9c1"
Mar 12 18:35:44.506363 master-0 kubenswrapper[29097]: E0312 18:35:44.506296 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(49835aec35bdc5feca0d7cf24779b8da)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="49835aec35bdc5feca0d7cf24779b8da"
Mar 12 18:35:46.477895 master-0 kubenswrapper[29097]: I0312 18:35:46.477850 29097 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:35:46.529094 master-0 kubenswrapper[29097]: I0312 18:35:46.529030 29097 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="823203d3-b506-4b3a-87a5-37f5c25eac99"
Mar 12 18:35:46.529094 master-0 kubenswrapper[29097]: I0312 18:35:46.529068 29097 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="823203d3-b506-4b3a-87a5-37f5c25eac99"
Mar 12 18:35:46.534776 master-0 kubenswrapper[29097]: I0312 18:35:46.534727 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 18:35:46.538220 master-0 kubenswrapper[29097]: I0312 18:35:46.538148 29097 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="36d4251d3504cdc0ec85144c1379056c" podUID="22c9fbb4-7379-4656-a81d-74655888fe75"
Mar 12 18:35:47.539738 master-0 kubenswrapper[29097]: I0312 18:35:47.539682 29097 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="823203d3-b506-4b3a-87a5-37f5c25eac99"
Mar 12 18:35:47.539738 master-0 kubenswrapper[29097]: I0312 18:35:47.539730 29097 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="823203d3-b506-4b3a-87a5-37f5c25eac99"
Mar 12 18:35:48.333021 master-0 kubenswrapper[29097]: I0312 18:35:48.332601 29097 patch_prober.go:28] interesting pod/console-fb478b976-xhpmp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.109:8443/health\": dial tcp 10.128.0.109:8443: connect: connection refused" start-of-body=
Mar 12 18:35:48.333021 master-0 kubenswrapper[29097]: I0312 18:35:48.332658 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-fb478b976-xhpmp" podUID="a9d8ed6f-1332-471a-891a-9f7f8dbc78b6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.109:8443/health\": dial tcp 10.128.0.109:8443: connect: connection refused"
Mar 12 18:35:48.364941 master-0 kubenswrapper[29097]: I0312 18:35:48.364875 29097 patch_prober.go:28] interesting pod/console-6b98bc4d-xfxc9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body=
Mar 12 18:35:48.365179 master-0 kubenswrapper[29097]: I0312 18:35:48.364980 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b98bc4d-xfxc9" podUID="f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused"
Mar 12 18:35:49.142310 master-0 kubenswrapper[29097]: I0312 18:35:49.142266 29097 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 18:35:49.143754 master-0 kubenswrapper[29097]: I0312 18:35:49.143736 29097 scope.go:117] "RemoveContainer" containerID="d8bc1ab80b512a9b34b5a39f82b7bfc61939a83d8fe4158a7181d62b837fd9c1"
Mar 12 18:35:49.144375 master-0 kubenswrapper[29097]: E0312 18:35:49.144352 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(49835aec35bdc5feca0d7cf24779b8da)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="49835aec35bdc5feca0d7cf24779b8da"
Mar 12 18:35:50.737780 master-0
kubenswrapper[29097]: I0312 18:35:50.737689 29097 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="36d4251d3504cdc0ec85144c1379056c" podUID="22c9fbb4-7379-4656-a81d-74655888fe75" Mar 12 18:35:52.544169 master-0 kubenswrapper[29097]: I0312 18:35:52.544096 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 12 18:35:53.036303 master-0 kubenswrapper[29097]: I0312 18:35:53.036130 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 12 18:35:53.242945 master-0 kubenswrapper[29097]: I0312 18:35:53.242858 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:35:53.244024 master-0 kubenswrapper[29097]: I0312 18:35:53.243984 29097 scope.go:117] "RemoveContainer" containerID="d8bc1ab80b512a9b34b5a39f82b7bfc61939a83d8fe4158a7181d62b837fd9c1" Mar 12 18:35:53.245691 master-0 kubenswrapper[29097]: E0312 18:35:53.244929 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(49835aec35bdc5feca0d7cf24779b8da)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="49835aec35bdc5feca0d7cf24779b8da" Mar 12 18:35:53.324621 master-0 kubenswrapper[29097]: I0312 18:35:53.324564 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 12 18:35:53.775865 master-0 kubenswrapper[29097]: I0312 18:35:53.775716 29097 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"trusted-ca" Mar 12 18:35:54.095674 master-0 kubenswrapper[29097]: I0312 18:35:54.095606 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 12 18:35:54.719117 master-0 kubenswrapper[29097]: I0312 18:35:54.719030 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 12 18:35:54.784194 master-0 kubenswrapper[29097]: I0312 18:35:54.784115 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-hm292" Mar 12 18:35:55.078610 master-0 kubenswrapper[29097]: I0312 18:35:55.077645 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-hsjbb" Mar 12 18:35:55.362722 master-0 kubenswrapper[29097]: I0312 18:35:55.362537 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 12 18:35:56.201768 master-0 kubenswrapper[29097]: I0312 18:35:56.201675 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 12 18:35:56.212237 master-0 kubenswrapper[29097]: I0312 18:35:56.212167 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 12 18:35:56.491556 master-0 kubenswrapper[29097]: I0312 18:35:56.491382 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 18:35:56.924191 master-0 kubenswrapper[29097]: I0312 18:35:56.924132 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 12 18:35:57.077606 master-0 kubenswrapper[29097]: I0312 18:35:57.077576 29097 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 12 18:35:57.829076 master-0 kubenswrapper[29097]: I0312 18:35:57.829003 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 12 18:35:58.171249 master-0 kubenswrapper[29097]: I0312 18:35:58.171130 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 12 18:35:58.199790 master-0 kubenswrapper[29097]: I0312 18:35:58.199723 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 12 18:35:58.332004 master-0 kubenswrapper[29097]: I0312 18:35:58.331935 29097 patch_prober.go:28] interesting pod/console-fb478b976-xhpmp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.109:8443/health\": dial tcp 10.128.0.109:8443: connect: connection refused" start-of-body= Mar 12 18:35:58.332256 master-0 kubenswrapper[29097]: I0312 18:35:58.332010 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-fb478b976-xhpmp" podUID="a9d8ed6f-1332-471a-891a-9f7f8dbc78b6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.109:8443/health\": dial tcp 10.128.0.109:8443: connect: connection refused" Mar 12 18:35:58.364804 master-0 kubenswrapper[29097]: I0312 18:35:58.364705 29097 patch_prober.go:28] interesting pod/console-6b98bc4d-xfxc9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 12 18:35:58.365088 master-0 kubenswrapper[29097]: I0312 18:35:58.364847 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b98bc4d-xfxc9" podUID="f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" containerName="console" 
probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 12 18:35:58.504202 master-0 kubenswrapper[29097]: I0312 18:35:58.504041 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-mbtwq" Mar 12 18:35:58.508458 master-0 kubenswrapper[29097]: I0312 18:35:58.508403 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 12 18:35:58.524111 master-0 kubenswrapper[29097]: I0312 18:35:58.524056 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 12 18:35:58.786981 master-0 kubenswrapper[29097]: I0312 18:35:58.786860 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 12 18:35:58.933496 master-0 kubenswrapper[29097]: I0312 18:35:58.933451 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 12 18:35:59.142766 master-0 kubenswrapper[29097]: I0312 18:35:59.142708 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 12 18:35:59.170287 master-0 kubenswrapper[29097]: I0312 18:35:59.170239 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 12 18:35:59.247295 master-0 kubenswrapper[29097]: I0312 18:35:59.246735 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 12 18:35:59.363582 master-0 kubenswrapper[29097]: I0312 18:35:59.363496 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 12 18:35:59.401045 master-0 kubenswrapper[29097]: I0312 18:35:59.400849 29097 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 12 18:35:59.463749 master-0 kubenswrapper[29097]: I0312 18:35:59.463673 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 12 18:35:59.653440 master-0 kubenswrapper[29097]: I0312 18:35:59.653294 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 12 18:36:00.046973 master-0 kubenswrapper[29097]: I0312 18:36:00.046870 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 12 18:36:00.101962 master-0 kubenswrapper[29097]: I0312 18:36:00.101870 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 18:36:00.104610 master-0 kubenswrapper[29097]: I0312 18:36:00.104549 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 12 18:36:00.197003 master-0 kubenswrapper[29097]: I0312 18:36:00.196877 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 12 18:36:00.274793 master-0 kubenswrapper[29097]: I0312 18:36:00.274726 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 12 18:36:00.320742 master-0 kubenswrapper[29097]: I0312 18:36:00.320551 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 12 18:36:00.347121 master-0 kubenswrapper[29097]: I0312 18:36:00.347049 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 12 18:36:00.416197 master-0 kubenswrapper[29097]: I0312 18:36:00.416110 29097 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 12 18:36:00.568846 master-0 kubenswrapper[29097]: I0312 18:36:00.568729 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 12 18:36:00.671608 master-0 kubenswrapper[29097]: I0312 18:36:00.671373 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 12 18:36:00.734009 master-0 kubenswrapper[29097]: I0312 18:36:00.733939 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 12 18:36:00.851780 master-0 kubenswrapper[29097]: I0312 18:36:00.851697 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 12 18:36:00.944643 master-0 kubenswrapper[29097]: I0312 18:36:00.944433 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 12 18:36:01.021417 master-0 kubenswrapper[29097]: I0312 18:36:01.021325 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-270bttnd3r3m0" Mar 12 18:36:01.157038 master-0 kubenswrapper[29097]: I0312 18:36:01.156978 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-fbn8j" Mar 12 18:36:01.241016 master-0 kubenswrapper[29097]: I0312 18:36:01.240901 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 12 18:36:01.314641 master-0 kubenswrapper[29097]: I0312 18:36:01.314564 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 12 18:36:01.364348 master-0 kubenswrapper[29097]: I0312 18:36:01.364279 
29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 12 18:36:01.407534 master-0 kubenswrapper[29097]: I0312 18:36:01.407462 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 18:36:01.493280 master-0 kubenswrapper[29097]: I0312 18:36:01.493153 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 12 18:36:01.513930 master-0 kubenswrapper[29097]: I0312 18:36:01.513889 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 12 18:36:01.541419 master-0 kubenswrapper[29097]: I0312 18:36:01.541356 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 12 18:36:01.636475 master-0 kubenswrapper[29097]: I0312 18:36:01.636411 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 12 18:36:01.647797 master-0 kubenswrapper[29097]: I0312 18:36:01.647761 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 12 18:36:01.649812 master-0 kubenswrapper[29097]: I0312 18:36:01.649774 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 12 18:36:01.739179 master-0 kubenswrapper[29097]: I0312 18:36:01.739117 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 12 18:36:01.851243 master-0 kubenswrapper[29097]: I0312 18:36:01.851175 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 12 18:36:01.923408 
master-0 kubenswrapper[29097]: I0312 18:36:01.923357 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 12 18:36:02.016106 master-0 kubenswrapper[29097]: I0312 18:36:02.016047 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 12 18:36:02.027535 master-0 kubenswrapper[29097]: I0312 18:36:02.027476 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 12 18:36:02.095034 master-0 kubenswrapper[29097]: I0312 18:36:02.094965 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 12 18:36:02.096424 master-0 kubenswrapper[29097]: I0312 18:36:02.096391 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 12 18:36:02.229219 master-0 kubenswrapper[29097]: I0312 18:36:02.229057 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-gzn76" Mar 12 18:36:02.331124 master-0 kubenswrapper[29097]: I0312 18:36:02.331076 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 12 18:36:02.356360 master-0 kubenswrapper[29097]: I0312 18:36:02.356298 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-7k9rb" Mar 12 18:36:02.369940 master-0 kubenswrapper[29097]: I0312 18:36:02.369873 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 12 18:36:02.423373 master-0 kubenswrapper[29097]: I0312 18:36:02.423288 29097 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-session" Mar 12 18:36:02.465599 master-0 kubenswrapper[29097]: I0312 18:36:02.465529 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 12 18:36:02.533909 master-0 kubenswrapper[29097]: I0312 18:36:02.533760 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 12 18:36:02.641608 master-0 kubenswrapper[29097]: I0312 18:36:02.641529 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 12 18:36:02.661032 master-0 kubenswrapper[29097]: I0312 18:36:02.660968 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 12 18:36:02.730612 master-0 kubenswrapper[29097]: I0312 18:36:02.730506 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 12 18:36:02.764139 master-0 kubenswrapper[29097]: I0312 18:36:02.764071 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 12 18:36:02.819383 master-0 kubenswrapper[29097]: I0312 18:36:02.819331 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 12 18:36:02.846434 master-0 kubenswrapper[29097]: I0312 18:36:02.846384 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 12 18:36:02.867023 master-0 kubenswrapper[29097]: I0312 18:36:02.866941 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 12 18:36:02.908414 master-0 kubenswrapper[29097]: I0312 18:36:02.908363 29097 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"config" Mar 12 18:36:02.971038 master-0 kubenswrapper[29097]: I0312 18:36:02.970999 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 12 18:36:03.027875 master-0 kubenswrapper[29097]: I0312 18:36:03.027801 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 12 18:36:03.091633 master-0 kubenswrapper[29097]: I0312 18:36:03.091466 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 12 18:36:03.100943 master-0 kubenswrapper[29097]: I0312 18:36:03.100901 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 12 18:36:03.148289 master-0 kubenswrapper[29097]: I0312 18:36:03.148212 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 12 18:36:03.163609 master-0 kubenswrapper[29097]: I0312 18:36:03.163510 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 12 18:36:03.320980 master-0 kubenswrapper[29097]: I0312 18:36:03.320861 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-vt5nq" Mar 12 18:36:03.409360 master-0 kubenswrapper[29097]: I0312 18:36:03.409217 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 12 18:36:03.470754 master-0 kubenswrapper[29097]: I0312 18:36:03.470625 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 12 18:36:03.472614 master-0 kubenswrapper[29097]: I0312 18:36:03.472509 29097 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 12 18:36:03.584456 master-0 kubenswrapper[29097]: I0312 18:36:03.584376 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 12 18:36:03.629460 master-0 kubenswrapper[29097]: I0312 18:36:03.629380 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 12 18:36:03.690032 master-0 kubenswrapper[29097]: I0312 18:36:03.689867 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 12 18:36:03.752608 master-0 kubenswrapper[29097]: I0312 18:36:03.752500 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-w4bj7" Mar 12 18:36:03.786182 master-0 kubenswrapper[29097]: I0312 18:36:03.786099 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 12 18:36:03.800085 master-0 kubenswrapper[29097]: I0312 18:36:03.800040 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 12 18:36:03.835511 master-0 kubenswrapper[29097]: I0312 18:36:03.835434 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 12 18:36:03.913797 master-0 kubenswrapper[29097]: I0312 18:36:03.913723 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-ldpgf" Mar 12 18:36:03.962771 master-0 kubenswrapper[29097]: I0312 18:36:03.962626 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 12 18:36:04.049120 master-0 kubenswrapper[29097]: I0312 18:36:04.049036 29097 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 12 18:36:04.071419 master-0 kubenswrapper[29097]: I0312 18:36:04.071353 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 12 18:36:04.091256 master-0 kubenswrapper[29097]: I0312 18:36:04.091210 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 12 18:36:04.100743 master-0 kubenswrapper[29097]: I0312 18:36:04.100412 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 12 18:36:04.101999 master-0 kubenswrapper[29097]: I0312 18:36:04.101565 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 12 18:36:04.225369 master-0 kubenswrapper[29097]: I0312 18:36:04.224505 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 12 18:36:04.225369 master-0 kubenswrapper[29097]: I0312 18:36:04.224713 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 12 18:36:04.424065 master-0 kubenswrapper[29097]: I0312 18:36:04.424016 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-vtdm7" Mar 12 18:36:04.527057 master-0 kubenswrapper[29097]: I0312 18:36:04.526930 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 12 18:36:04.547721 master-0 kubenswrapper[29097]: I0312 18:36:04.547659 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-8jt4h" Mar 12 18:36:04.587498 master-0 kubenswrapper[29097]: I0312 18:36:04.587190 29097 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 18:36:04.598077 master-0 kubenswrapper[29097]: I0312 18:36:04.598020 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 12 18:36:04.599400 master-0 kubenswrapper[29097]: I0312 18:36:04.599278 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 12 18:36:04.626555 master-0 kubenswrapper[29097]: I0312 18:36:04.626192 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 12 18:36:04.640021 master-0 kubenswrapper[29097]: I0312 18:36:04.639973 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 12 18:36:04.647681 master-0 kubenswrapper[29097]: I0312 18:36:04.647628 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 12 18:36:04.688236 master-0 kubenswrapper[29097]: I0312 18:36:04.688167 29097 generic.go:334] "Generic (PLEG): container finished" podID="9f1f60fa-d79d-4f31-b5bf-2ad333151537" containerID="e90331cb678c8e153c33f95cb18612384f7ac4bbc46e3e49ca8de188de41f79a" exitCode=0 Mar 12 18:36:04.688434 master-0 kubenswrapper[29097]: I0312 18:36:04.688237 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5784dff469-l5d64" event={"ID":"9f1f60fa-d79d-4f31-b5bf-2ad333151537","Type":"ContainerDied","Data":"e90331cb678c8e153c33f95cb18612384f7ac4bbc46e3e49ca8de188de41f79a"} Mar 12 18:36:04.795012 master-0 kubenswrapper[29097]: I0312 18:36:04.793969 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 12 18:36:04.795445 master-0 kubenswrapper[29097]: I0312 18:36:04.795408 29097 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 12 18:36:04.827237 master-0 kubenswrapper[29097]: I0312 18:36:04.827203 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 12 18:36:04.838953 master-0 kubenswrapper[29097]: I0312 18:36:04.838915 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 12 18:36:04.865975 master-0 kubenswrapper[29097]: I0312 18:36:04.865934 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 12 18:36:04.874359 master-0 kubenswrapper[29097]: I0312 18:36:04.872364 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:36:04.930415 master-0 kubenswrapper[29097]: I0312 18:36:04.930347 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-metrics-server-audit-profiles\") pod \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " Mar 12 18:36:04.930415 master-0 kubenswrapper[29097]: I0312 18:36:04.930417 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-client-ca-bundle\") pod \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " Mar 12 18:36:04.930840 master-0 kubenswrapper[29097]: I0312 18:36:04.930486 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-configmap-kubelet-serving-ca-bundle\") pod 
\"9f1f60fa-d79d-4f31-b5bf-2ad333151537\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " Mar 12 18:36:04.930840 master-0 kubenswrapper[29097]: I0312 18:36:04.930510 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/9f1f60fa-d79d-4f31-b5bf-2ad333151537-audit-log\") pod \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " Mar 12 18:36:04.930840 master-0 kubenswrapper[29097]: I0312 18:36:04.930611 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-client-certs\") pod \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " Mar 12 18:36:04.930840 master-0 kubenswrapper[29097]: I0312 18:36:04.930635 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-server-tls\") pod \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " Mar 12 18:36:04.930840 master-0 kubenswrapper[29097]: I0312 18:36:04.930673 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqlfx\" (UniqueName: \"kubernetes.io/projected/9f1f60fa-d79d-4f31-b5bf-2ad333151537-kube-api-access-hqlfx\") pod \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\" (UID: \"9f1f60fa-d79d-4f31-b5bf-2ad333151537\") " Mar 12 18:36:04.931238 master-0 kubenswrapper[29097]: I0312 18:36:04.931185 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f1f60fa-d79d-4f31-b5bf-2ad333151537-audit-log" (OuterVolumeSpecName: "audit-log") pod "9f1f60fa-d79d-4f31-b5bf-2ad333151537" (UID: "9f1f60fa-d79d-4f31-b5bf-2ad333151537"). InnerVolumeSpecName "audit-log". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:36:04.931764 master-0 kubenswrapper[29097]: I0312 18:36:04.931718 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "9f1f60fa-d79d-4f31-b5bf-2ad333151537" (UID: "9f1f60fa-d79d-4f31-b5bf-2ad333151537"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:36:04.931929 master-0 kubenswrapper[29097]: I0312 18:36:04.931899 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "9f1f60fa-d79d-4f31-b5bf-2ad333151537" (UID: "9f1f60fa-d79d-4f31-b5bf-2ad333151537"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:36:04.933016 master-0 kubenswrapper[29097]: I0312 18:36:04.932971 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "9f1f60fa-d79d-4f31-b5bf-2ad333151537" (UID: "9f1f60fa-d79d-4f31-b5bf-2ad333151537"). InnerVolumeSpecName "client-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:36:04.933825 master-0 kubenswrapper[29097]: I0312 18:36:04.933780 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f1f60fa-d79d-4f31-b5bf-2ad333151537-kube-api-access-hqlfx" (OuterVolumeSpecName: "kube-api-access-hqlfx") pod "9f1f60fa-d79d-4f31-b5bf-2ad333151537" (UID: "9f1f60fa-d79d-4f31-b5bf-2ad333151537"). InnerVolumeSpecName "kube-api-access-hqlfx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:36:04.934218 master-0 kubenswrapper[29097]: I0312 18:36:04.934178 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "9f1f60fa-d79d-4f31-b5bf-2ad333151537" (UID: "9f1f60fa-d79d-4f31-b5bf-2ad333151537"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:36:04.934494 master-0 kubenswrapper[29097]: I0312 18:36:04.934452 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "9f1f60fa-d79d-4f31-b5bf-2ad333151537" (UID: "9f1f60fa-d79d-4f31-b5bf-2ad333151537"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:36:04.962816 master-0 kubenswrapper[29097]: I0312 18:36:04.962766 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 12 18:36:04.972406 master-0 kubenswrapper[29097]: I0312 18:36:04.972316 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 12 18:36:04.987023 master-0 kubenswrapper[29097]: I0312 18:36:04.986987 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 12 18:36:04.987993 master-0 kubenswrapper[29097]: I0312 18:36:04.987958 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 12 18:36:04.995375 master-0 kubenswrapper[29097]: I0312 18:36:04.994687 29097 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 12 18:36:05.005600 master-0 kubenswrapper[29097]: I0312 18:36:05.005574 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 12 18:36:05.031969 master-0 kubenswrapper[29097]: I0312 18:36:05.031936 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqlfx\" (UniqueName: \"kubernetes.io/projected/9f1f60fa-d79d-4f31-b5bf-2ad333151537-kube-api-access-hqlfx\") on node \"master-0\" DevicePath \"\"" Mar 12 18:36:05.031969 master-0 kubenswrapper[29097]: I0312 18:36:05.031964 29097 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\"" Mar 12 18:36:05.032147 master-0 kubenswrapper[29097]: I0312 18:36:05.031974 29097 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-client-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:36:05.032147 master-0 kubenswrapper[29097]: I0312 18:36:05.031986 29097 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/9f1f60fa-d79d-4f31-b5bf-2ad333151537-audit-log\") on node \"master-0\" DevicePath \"\"" Mar 12 18:36:05.032147 master-0 kubenswrapper[29097]: I0312 18:36:05.031995 29097 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f1f60fa-d79d-4f31-b5bf-2ad333151537-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:36:05.032147 master-0 kubenswrapper[29097]: I0312 18:36:05.032004 29097 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 18:36:05.032147 master-0 kubenswrapper[29097]: I0312 18:36:05.032013 29097 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/9f1f60fa-d79d-4f31-b5bf-2ad333151537-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\"" Mar 12 18:36:05.044118 master-0 kubenswrapper[29097]: I0312 18:36:05.044073 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 12 18:36:05.052977 master-0 kubenswrapper[29097]: I0312 18:36:05.052875 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 12 18:36:05.083857 master-0 kubenswrapper[29097]: I0312 18:36:05.083804 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 12 18:36:05.121694 master-0 kubenswrapper[29097]: I0312 18:36:05.121631 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 12 18:36:05.151860 master-0 kubenswrapper[29097]: I0312 18:36:05.151811 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 12 18:36:05.172313 master-0 kubenswrapper[29097]: I0312 18:36:05.172225 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 12 18:36:05.227200 master-0 kubenswrapper[29097]: I0312 18:36:05.227143 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 12 18:36:05.227913 master-0 kubenswrapper[29097]: I0312 18:36:05.227854 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 12 18:36:05.287968 
master-0 kubenswrapper[29097]: I0312 18:36:05.287905 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 12 18:36:05.288485 master-0 kubenswrapper[29097]: I0312 18:36:05.288443 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 12 18:36:05.288873 master-0 kubenswrapper[29097]: I0312 18:36:05.288819 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 12 18:36:05.329474 master-0 kubenswrapper[29097]: I0312 18:36:05.329332 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 12 18:36:05.337173 master-0 kubenswrapper[29097]: I0312 18:36:05.337126 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 12 18:36:05.378437 master-0 kubenswrapper[29097]: I0312 18:36:05.378366 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-6d4tt" Mar 12 18:36:05.465407 master-0 kubenswrapper[29097]: I0312 18:36:05.465328 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 12 18:36:05.508989 master-0 kubenswrapper[29097]: I0312 18:36:05.508916 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 12 18:36:05.530995 master-0 kubenswrapper[29097]: I0312 18:36:05.530920 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 12 18:36:05.546374 master-0 kubenswrapper[29097]: I0312 18:36:05.546308 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 12 18:36:05.592068 master-0 
kubenswrapper[29097]: I0312 18:36:05.591935 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 12 18:36:05.673598 master-0 kubenswrapper[29097]: I0312 18:36:05.673538 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 12 18:36:05.679563 master-0 kubenswrapper[29097]: I0312 18:36:05.679508 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 12 18:36:05.699309 master-0 kubenswrapper[29097]: I0312 18:36:05.699258 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5784dff469-l5d64" event={"ID":"9f1f60fa-d79d-4f31-b5bf-2ad333151537","Type":"ContainerDied","Data":"fa6c2fe81e494b2ba395dd1830ab3075ce81e641a81c84edd4df4d6a6849559f"} Mar 12 18:36:05.699309 master-0 kubenswrapper[29097]: I0312 18:36:05.699311 29097 scope.go:117] "RemoveContainer" containerID="e90331cb678c8e153c33f95cb18612384f7ac4bbc46e3e49ca8de188de41f79a" Mar 12 18:36:05.699735 master-0 kubenswrapper[29097]: I0312 18:36:05.699356 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-5784dff469-l5d64" Mar 12 18:36:05.720844 master-0 kubenswrapper[29097]: I0312 18:36:05.720799 29097 scope.go:117] "RemoveContainer" containerID="d8bc1ab80b512a9b34b5a39f82b7bfc61939a83d8fe4158a7181d62b837fd9c1" Mar 12 18:36:05.830978 master-0 kubenswrapper[29097]: I0312 18:36:05.830779 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 12 18:36:05.881494 master-0 kubenswrapper[29097]: I0312 18:36:05.880617 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-mbn45" Mar 12 18:36:05.915793 master-0 kubenswrapper[29097]: I0312 18:36:05.915681 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 12 18:36:05.956867 master-0 kubenswrapper[29097]: I0312 18:36:05.956750 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 12 18:36:06.017141 master-0 kubenswrapper[29097]: I0312 18:36:06.017113 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 12 18:36:06.040078 master-0 kubenswrapper[29097]: I0312 18:36:06.040040 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 12 18:36:06.113686 master-0 kubenswrapper[29097]: I0312 18:36:06.113641 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 18:36:06.178468 master-0 kubenswrapper[29097]: I0312 18:36:06.177633 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 12 18:36:06.185867 master-0 kubenswrapper[29097]: I0312 18:36:06.185797 29097 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 12 18:36:06.286965 master-0 kubenswrapper[29097]: I0312 18:36:06.286899 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 12 18:36:06.334963 master-0 kubenswrapper[29097]: I0312 18:36:06.334910 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 12 18:36:06.357820 master-0 kubenswrapper[29097]: I0312 18:36:06.357788 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 12 18:36:06.362578 master-0 kubenswrapper[29097]: I0312 18:36:06.362542 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 12 18:36:06.370833 master-0 kubenswrapper[29097]: I0312 18:36:06.370803 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 12 18:36:06.418162 master-0 kubenswrapper[29097]: I0312 18:36:06.418123 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 12 18:36:06.524297 master-0 kubenswrapper[29097]: I0312 18:36:06.524155 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 18:36:06.575270 master-0 kubenswrapper[29097]: I0312 18:36:06.575197 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 12 18:36:06.599244 master-0 kubenswrapper[29097]: I0312 18:36:06.599204 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 12 
18:36:06.603443 master-0 kubenswrapper[29097]: I0312 18:36:06.603412 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 12 18:36:06.616666 master-0 kubenswrapper[29097]: I0312 18:36:06.616611 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 12 18:36:06.708547 master-0 kubenswrapper[29097]: I0312 18:36:06.708477 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 12 18:36:06.710337 master-0 kubenswrapper[29097]: I0312 18:36:06.710305 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_49835aec35bdc5feca0d7cf24779b8da/kube-controller-manager/2.log" Mar 12 18:36:06.711369 master-0 kubenswrapper[29097]: I0312 18:36:06.711333 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"49835aec35bdc5feca0d7cf24779b8da","Type":"ContainerStarted","Data":"27ec01e446898cdb09325e858095825a7ec9b233787886936fcd21a787d5965b"} Mar 12 18:36:06.830402 master-0 kubenswrapper[29097]: I0312 18:36:06.830350 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 12 18:36:06.856395 master-0 kubenswrapper[29097]: I0312 18:36:06.856266 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-8nmsp" Mar 12 18:36:06.863820 master-0 kubenswrapper[29097]: I0312 18:36:06.863769 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 12 18:36:06.879715 master-0 kubenswrapper[29097]: I0312 18:36:06.879681 29097 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 12 18:36:07.015925 master-0 kubenswrapper[29097]: I0312 18:36:07.015856 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 12 18:36:07.029578 master-0 kubenswrapper[29097]: I0312 18:36:07.029546 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 12 18:36:07.310429 master-0 kubenswrapper[29097]: I0312 18:36:07.310378 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-ssqhn" Mar 12 18:36:07.375813 master-0 kubenswrapper[29097]: I0312 18:36:07.375758 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 12 18:36:07.384681 master-0 kubenswrapper[29097]: I0312 18:36:07.384630 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 12 18:36:07.402052 master-0 kubenswrapper[29097]: I0312 18:36:07.402004 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-hhnmb" Mar 12 18:36:07.429006 master-0 kubenswrapper[29097]: I0312 18:36:07.428791 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 12 18:36:07.436419 master-0 kubenswrapper[29097]: I0312 18:36:07.436371 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 12 18:36:07.458270 master-0 kubenswrapper[29097]: I0312 18:36:07.458184 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-9f7ld" Mar 12 18:36:07.569495 master-0 kubenswrapper[29097]: I0312 
18:36:07.569346 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 12 18:36:07.606495 master-0 kubenswrapper[29097]: I0312 18:36:07.606443 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 12 18:36:07.657420 master-0 kubenswrapper[29097]: I0312 18:36:07.657348 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 12 18:36:07.670474 master-0 kubenswrapper[29097]: I0312 18:36:07.670420 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-8275t" Mar 12 18:36:07.681237 master-0 kubenswrapper[29097]: I0312 18:36:07.681178 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 12 18:36:07.710544 master-0 kubenswrapper[29097]: I0312 18:36:07.710425 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 12 18:36:07.730505 master-0 kubenswrapper[29097]: I0312 18:36:07.730467 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 12 18:36:07.750181 master-0 kubenswrapper[29097]: I0312 18:36:07.750122 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 12 18:36:07.817745 master-0 kubenswrapper[29097]: I0312 18:36:07.817677 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 12 18:36:07.942266 master-0 kubenswrapper[29097]: I0312 18:36:07.942145 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 
12 18:36:07.949460 master-0 kubenswrapper[29097]: I0312 18:36:07.949428 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-pp56m" Mar 12 18:36:07.974847 master-0 kubenswrapper[29097]: I0312 18:36:07.974800 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 12 18:36:08.017723 master-0 kubenswrapper[29097]: I0312 18:36:08.017675 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 12 18:36:08.018324 master-0 kubenswrapper[29097]: I0312 18:36:08.018298 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 12 18:36:08.022310 master-0 kubenswrapper[29097]: I0312 18:36:08.022262 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-b88ct" Mar 12 18:36:08.105018 master-0 kubenswrapper[29097]: I0312 18:36:08.104952 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 12 18:36:08.131796 master-0 kubenswrapper[29097]: I0312 18:36:08.131755 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 12 18:36:08.156409 master-0 kubenswrapper[29097]: I0312 18:36:08.156368 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 12 18:36:08.280439 master-0 kubenswrapper[29097]: I0312 18:36:08.280298 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-4kvgb1ceuoc51" Mar 12 18:36:08.290772 master-0 kubenswrapper[29097]: I0312 18:36:08.290710 29097 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"kube-root-ca.crt" Mar 12 18:36:08.306809 master-0 kubenswrapper[29097]: I0312 18:36:08.306735 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 12 18:36:08.327930 master-0 kubenswrapper[29097]: I0312 18:36:08.327881 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-ffmfp" Mar 12 18:36:08.335487 master-0 kubenswrapper[29097]: I0312 18:36:08.335447 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:36:08.343690 master-0 kubenswrapper[29097]: I0312 18:36:08.343649 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:36:08.360479 master-0 kubenswrapper[29097]: I0312 18:36:08.360433 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 12 18:36:08.375084 master-0 kubenswrapper[29097]: I0312 18:36:08.375029 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6b98bc4d-xfxc9" Mar 12 18:36:08.384679 master-0 kubenswrapper[29097]: I0312 18:36:08.384624 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6b98bc4d-xfxc9" Mar 12 18:36:08.390739 master-0 kubenswrapper[29097]: I0312 18:36:08.390695 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 12 18:36:08.399999 master-0 kubenswrapper[29097]: I0312 18:36:08.399950 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 12 18:36:08.406183 master-0 kubenswrapper[29097]: I0312 18:36:08.406141 29097 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 12 18:36:08.441136 master-0 kubenswrapper[29097]: I0312 18:36:08.441073 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 12 18:36:08.594480 master-0 kubenswrapper[29097]: I0312 18:36:08.594424 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 12 18:36:08.596772 master-0 kubenswrapper[29097]: I0312 18:36:08.596751 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 12 18:36:08.612119 master-0 kubenswrapper[29097]: I0312 18:36:08.612078 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-g4mv5" Mar 12 18:36:08.696914 master-0 kubenswrapper[29097]: I0312 18:36:08.696866 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 12 18:36:08.717660 master-0 kubenswrapper[29097]: I0312 18:36:08.717549 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 12 18:36:08.719373 master-0 kubenswrapper[29097]: I0312 18:36:08.719338 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 12 18:36:08.824844 master-0 kubenswrapper[29097]: I0312 18:36:08.824775 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 18:36:08.859901 master-0 kubenswrapper[29097]: I0312 18:36:08.859752 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 18:36:08.922664 master-0 kubenswrapper[29097]: I0312 
18:36:08.922604 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-nv88b" Mar 12 18:36:08.947688 master-0 kubenswrapper[29097]: I0312 18:36:08.947633 29097 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 12 18:36:09.004971 master-0 kubenswrapper[29097]: I0312 18:36:09.004893 29097 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 12 18:36:09.009940 master-0 kubenswrapper[29097]: I0312 18:36:09.009846 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-fb478b976-xhpmp" podStartSLOduration=52.00982313 podStartE2EDuration="52.00982313s" podCreationTimestamp="2026-03-12 18:35:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:35:19.238089687 +0000 UTC m=+358.792069804" watchObservedRunningTime="2026-03-12 18:36:09.00982313 +0000 UTC m=+408.563803247" Mar 12 18:36:09.012406 master-0 kubenswrapper[29097]: I0312 18:36:09.012378 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-5784dff469-l5d64","openshift-kube-apiserver/kube-apiserver-master-0","openshift-console/console-68799679d4-tcwkt"] Mar 12 18:36:09.012488 master-0 kubenswrapper[29097]: I0312 18:36:09.012445 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 12 18:36:09.020733 master-0 kubenswrapper[29097]: I0312 18:36:09.020672 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 18:36:09.037309 master-0 kubenswrapper[29097]: I0312 18:36:09.037232 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
podStartSLOduration=23.037218133 podStartE2EDuration="23.037218133s" podCreationTimestamp="2026-03-12 18:35:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:36:09.033136311 +0000 UTC m=+408.587116498" watchObservedRunningTime="2026-03-12 18:36:09.037218133 +0000 UTC m=+408.591198240" Mar 12 18:36:09.127795 master-0 kubenswrapper[29097]: I0312 18:36:09.127690 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 12 18:36:09.133779 master-0 kubenswrapper[29097]: I0312 18:36:09.133734 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 12 18:36:09.161853 master-0 kubenswrapper[29097]: I0312 18:36:09.161802 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 12 18:36:09.202208 master-0 kubenswrapper[29097]: I0312 18:36:09.202155 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 18:36:09.255019 master-0 kubenswrapper[29097]: I0312 18:36:09.254962 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 12 18:36:09.265924 master-0 kubenswrapper[29097]: I0312 18:36:09.265868 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 12 18:36:09.370001 master-0 kubenswrapper[29097]: I0312 18:36:09.369918 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 12 18:36:09.393414 master-0 kubenswrapper[29097]: I0312 18:36:09.393283 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" 
Mar 12 18:36:09.402202 master-0 kubenswrapper[29097]: I0312 18:36:09.402158 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 12 18:36:09.456396 master-0 kubenswrapper[29097]: I0312 18:36:09.456335 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Mar 12 18:36:09.607960 master-0 kubenswrapper[29097]: I0312 18:36:09.607887 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 12 18:36:09.674401 master-0 kubenswrapper[29097]: I0312 18:36:09.674278 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 12 18:36:09.692875 master-0 kubenswrapper[29097]: I0312 18:36:09.692812 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-q4h9m"
Mar 12 18:36:09.695644 master-0 kubenswrapper[29097]: I0312 18:36:09.695591 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 12 18:36:09.698681 master-0 kubenswrapper[29097]: I0312 18:36:09.698640 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 12 18:36:09.726408 master-0 kubenswrapper[29097]: I0312 18:36:09.726334 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 12 18:36:09.791341 master-0 kubenswrapper[29097]: I0312 18:36:09.791279 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 12 18:36:09.857123 master-0 kubenswrapper[29097]: I0312 18:36:09.857062 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-djr46"
Mar 12 18:36:09.860498 master-0 kubenswrapper[29097]: I0312 18:36:09.860408 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 12 18:36:09.893173 master-0 kubenswrapper[29097]: I0312 18:36:09.893067 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Mar 12 18:36:09.894899 master-0 kubenswrapper[29097]: I0312 18:36:09.894819 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Mar 12 18:36:10.012081 master-0 kubenswrapper[29097]: I0312 18:36:10.011895 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 12 18:36:10.038416 master-0 kubenswrapper[29097]: I0312 18:36:10.038335 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 12 18:36:10.164540 master-0 kubenswrapper[29097]: I0312 18:36:10.164450 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-cjzzq"
Mar 12 18:36:10.174745 master-0 kubenswrapper[29097]: I0312 18:36:10.174694 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Mar 12 18:36:10.190996 master-0 kubenswrapper[29097]: I0312 18:36:10.190946 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-6gh5d"
Mar 12 18:36:10.211470 master-0 kubenswrapper[29097]: I0312 18:36:10.211405 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 12 18:36:10.225917 master-0 kubenswrapper[29097]: I0312 18:36:10.225860 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 12 18:36:10.235358 master-0 kubenswrapper[29097]: I0312 18:36:10.235326 29097 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 12 18:36:10.316702 master-0 kubenswrapper[29097]: I0312 18:36:10.316623 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Mar 12 18:36:10.329253 master-0 kubenswrapper[29097]: I0312 18:36:10.329226 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-fvbintakd9ghl"
Mar 12 18:36:10.337925 master-0 kubenswrapper[29097]: I0312 18:36:10.337864 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 12 18:36:10.341412 master-0 kubenswrapper[29097]: I0312 18:36:10.341369 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 12 18:36:10.367230 master-0 kubenswrapper[29097]: I0312 18:36:10.367195 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 12 18:36:10.381311 master-0 kubenswrapper[29097]: I0312 18:36:10.381198 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 12 18:36:10.540171 master-0 kubenswrapper[29097]: I0312 18:36:10.540116 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 12 18:36:10.576014 master-0 kubenswrapper[29097]: I0312 18:36:10.575890 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 12 18:36:10.680673 master-0 kubenswrapper[29097]: I0312 18:36:10.680613 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-n2hnv"
Mar 12 18:36:10.716075 master-0 kubenswrapper[29097]: I0312 18:36:10.716012 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 12 18:36:10.733780 master-0 kubenswrapper[29097]: I0312 18:36:10.733705 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f8180d8-5283-409a-b36e-4786c8483171" path="/var/lib/kubelet/pods/2f8180d8-5283-409a-b36e-4786c8483171/volumes"
Mar 12 18:36:10.735444 master-0 kubenswrapper[29097]: I0312 18:36:10.735387 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f1f60fa-d79d-4f31-b5bf-2ad333151537" path="/var/lib/kubelet/pods/9f1f60fa-d79d-4f31-b5bf-2ad333151537/volumes"
Mar 12 18:36:10.743693 master-0 kubenswrapper[29097]: I0312 18:36:10.743655 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 12 18:36:10.764475 master-0 kubenswrapper[29097]: I0312 18:36:10.764414 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 12 18:36:10.792296 master-0 kubenswrapper[29097]: I0312 18:36:10.792254 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-g4mx9"
Mar 12 18:36:10.799604 master-0 kubenswrapper[29097]: I0312 18:36:10.799552 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 12 18:36:10.887145 master-0 kubenswrapper[29097]: I0312 18:36:10.887022 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Mar 12 18:36:11.004351 master-0 kubenswrapper[29097]: I0312 18:36:11.004296 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 12 18:36:11.014035 master-0 kubenswrapper[29097]: I0312 18:36:11.013674 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 12 18:36:11.019930 master-0 kubenswrapper[29097]: I0312 18:36:11.019895 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 12 18:36:11.141810 master-0 kubenswrapper[29097]: I0312 18:36:11.141676 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 12 18:36:11.161470 master-0 kubenswrapper[29097]: I0312 18:36:11.161402 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 12 18:36:11.198236 master-0 kubenswrapper[29097]: I0312 18:36:11.198196 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Mar 12 18:36:11.521171 master-0 kubenswrapper[29097]: I0312 18:36:11.521042 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 12 18:36:11.578722 master-0 kubenswrapper[29097]: I0312 18:36:11.578616 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 12 18:36:11.603488 master-0 kubenswrapper[29097]: I0312 18:36:11.603426 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 12 18:36:11.636424 master-0 kubenswrapper[29097]: I0312 18:36:11.636360 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 12 18:36:11.660920 master-0 kubenswrapper[29097]: I0312 18:36:11.660840 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 12 18:36:11.707287 master-0 kubenswrapper[29097]: I0312 18:36:11.707201 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 12 18:36:11.717228 master-0 kubenswrapper[29097]: I0312 18:36:11.717163 29097 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 12 18:36:11.802434 master-0 kubenswrapper[29097]: I0312 18:36:11.802373 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 12 18:36:11.816150 master-0 kubenswrapper[29097]: I0312 18:36:11.816085 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 12 18:36:11.872742 master-0 kubenswrapper[29097]: I0312 18:36:11.872645 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Mar 12 18:36:11.893827 master-0 kubenswrapper[29097]: I0312 18:36:11.893769 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 12 18:36:11.917933 master-0 kubenswrapper[29097]: I0312 18:36:11.917855 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-h5f5n"
Mar 12 18:36:12.003327 master-0 kubenswrapper[29097]: I0312 18:36:12.003225 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 12 18:36:12.020109 master-0 kubenswrapper[29097]: I0312 18:36:12.020024 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 12 18:36:12.081310 master-0 kubenswrapper[29097]: I0312 18:36:12.081123 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 12 18:36:12.084062 master-0 kubenswrapper[29097]: I0312 18:36:12.084016 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Mar 12 18:36:12.156617 master-0 kubenswrapper[29097]: I0312 18:36:12.156551 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 12 18:36:12.259076 master-0 kubenswrapper[29097]: I0312 18:36:12.259005 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 12 18:36:12.305338 master-0 kubenswrapper[29097]: I0312 18:36:12.305261 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 12 18:36:12.362744 master-0 kubenswrapper[29097]: I0312 18:36:12.362616 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 12 18:36:12.468602 master-0 kubenswrapper[29097]: I0312 18:36:12.468496 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 12 18:36:12.515948 master-0 kubenswrapper[29097]: I0312 18:36:12.515886 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 12 18:36:12.569464 master-0 kubenswrapper[29097]: I0312 18:36:12.569417 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 12 18:36:12.777596 master-0 kubenswrapper[29097]: I0312 18:36:12.777465 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-sjkl7"
Mar 12 18:36:12.859505 master-0 kubenswrapper[29097]: I0312 18:36:12.859445 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 12 18:36:12.889736 master-0 kubenswrapper[29097]: I0312 18:36:12.889681 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 12 18:36:12.906691 master-0 kubenswrapper[29097]: I0312 18:36:12.906623 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Mar 12 18:36:12.958001 master-0 kubenswrapper[29097]: I0312 18:36:12.957919 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 12 18:36:12.996056 master-0 kubenswrapper[29097]: I0312 18:36:12.995993 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 12 18:36:13.035548 master-0 kubenswrapper[29097]: I0312 18:36:13.035389 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 12 18:36:13.055021 master-0 kubenswrapper[29097]: I0312 18:36:13.054950 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 12 18:36:13.088888 master-0 kubenswrapper[29097]: I0312 18:36:13.088834 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-cc9lz"
Mar 12 18:36:13.177656 master-0 kubenswrapper[29097]: I0312 18:36:13.177616 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 12 18:36:13.217081 master-0 kubenswrapper[29097]: I0312 18:36:13.217026 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 12 18:36:13.217242 master-0 kubenswrapper[29097]: I0312 18:36:13.217156 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 12 18:36:13.242871 master-0 kubenswrapper[29097]: I0312 18:36:13.242817 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 18:36:13.257738 master-0 kubenswrapper[29097]: I0312 18:36:13.257698 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 12 18:36:13.320166 master-0 kubenswrapper[29097]: I0312 18:36:13.320076 29097 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 12 18:36:13.367891 master-0 kubenswrapper[29097]: I0312 18:36:13.367792 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 12 18:36:13.380331 master-0 kubenswrapper[29097]: I0312 18:36:13.380282 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 12 18:36:13.398103 master-0 kubenswrapper[29097]: I0312 18:36:13.398039 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 12 18:36:13.440771 master-0 kubenswrapper[29097]: I0312 18:36:13.440698 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Mar 12 18:36:13.570496 master-0 kubenswrapper[29097]: I0312 18:36:13.570375 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 12 18:36:13.630091 master-0 kubenswrapper[29097]: I0312 18:36:13.630039 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 12 18:36:13.669533 master-0 kubenswrapper[29097]: I0312 18:36:13.669480 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 12 18:36:13.719968 master-0 kubenswrapper[29097]: I0312 18:36:13.719900 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 18:36:13.720291 master-0 kubenswrapper[29097]: I0312 18:36:13.720127 29097 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Mar 12 18:36:13.720291 master-0 kubenswrapper[29097]: I0312 18:36:13.720173 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 12 18:36:13.748141 master-0 kubenswrapper[29097]: I0312 18:36:13.748023 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 12 18:36:13.824384 master-0 kubenswrapper[29097]: I0312 18:36:13.824239 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 12 18:36:13.998884 master-0 kubenswrapper[29097]: I0312 18:36:13.998813 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 12 18:36:14.108183 master-0 kubenswrapper[29097]: I0312 18:36:14.108035 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 12 18:36:14.335104 master-0 kubenswrapper[29097]: I0312 18:36:14.334995 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 12 18:36:14.674228 master-0 kubenswrapper[29097]: I0312 18:36:14.674150 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 12 18:36:14.839008 master-0 kubenswrapper[29097]: I0312 18:36:14.838938 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-72pgx"
Mar 12 18:36:14.847386 master-0 kubenswrapper[29097]: I0312 18:36:14.847320 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-fsl62"
Mar 12 18:36:14.952768 master-0 kubenswrapper[29097]: I0312 18:36:14.952540 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 12 18:36:14.973959 master-0 kubenswrapper[29097]: I0312 18:36:14.973869 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 12 18:36:15.622838 master-0 kubenswrapper[29097]: I0312 18:36:15.622764 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 12 18:36:17.398764 master-0 kubenswrapper[29097]: I0312 18:36:17.398681 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 12 18:36:20.127090 master-0 kubenswrapper[29097]: I0312 18:36:20.127015 29097 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 12 18:36:20.128040 master-0 kubenswrapper[29097]: I0312 18:36:20.127436 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="a814bd60de133d95cf99630a978c017e" containerName="startup-monitor" containerID="cri-o://0aca5d3bee881b5f583870ed7c13fc7c245cbcbdac48aa3d637220f53a08e1f4" gracePeriod=5
Mar 12 18:36:21.197965 master-0 kubenswrapper[29097]: E0312 18:36:21.197784 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547
Mar 12 18:36:23.720392 master-0 kubenswrapper[29097]: I0312 18:36:23.720296 29097 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Mar 12 18:36:23.721013 master-0 kubenswrapper[29097]: I0312 18:36:23.720389 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 12 18:36:25.734735 master-0 kubenswrapper[29097]: I0312 18:36:25.734692 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_a814bd60de133d95cf99630a978c017e/startup-monitor/0.log"
Mar 12 18:36:25.736284 master-0 kubenswrapper[29097]: I0312 18:36:25.734779 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:36:25.820498 master-0 kubenswrapper[29097]: I0312 18:36:25.820411 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") "
Mar 12 18:36:25.820852 master-0 kubenswrapper[29097]: I0312 18:36:25.820552 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") "
Mar 12 18:36:25.820852 master-0 kubenswrapper[29097]: I0312 18:36:25.820555 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log" (OuterVolumeSpecName: "var-log") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:36:25.820852 master-0 kubenswrapper[29097]: I0312 18:36:25.820674 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests" (OuterVolumeSpecName: "manifests") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:36:25.820852 master-0 kubenswrapper[29097]: I0312 18:36:25.820697 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") "
Mar 12 18:36:25.820852 master-0 kubenswrapper[29097]: I0312 18:36:25.820760 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:36:25.820852 master-0 kubenswrapper[29097]: I0312 18:36:25.820835 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") "
Mar 12 18:36:25.821303 master-0 kubenswrapper[29097]: I0312 18:36:25.820991 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") "
Mar 12 18:36:25.821303 master-0 kubenswrapper[29097]: I0312 18:36:25.821110 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock" (OuterVolumeSpecName: "var-lock") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:36:25.822767 master-0 kubenswrapper[29097]: I0312 18:36:25.822727 29097 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 18:36:25.822767 master-0 kubenswrapper[29097]: I0312 18:36:25.822765 29097 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 18:36:25.823002 master-0 kubenswrapper[29097]: I0312 18:36:25.822785 29097 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") on node \"master-0\" DevicePath \"\""
Mar 12 18:36:25.823002 master-0 kubenswrapper[29097]: I0312 18:36:25.822806 29097 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") on node \"master-0\" DevicePath \"\""
Mar 12 18:36:25.828862 master-0 kubenswrapper[29097]: I0312 18:36:25.828804 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:36:25.885382 master-0 kubenswrapper[29097]: I0312 18:36:25.885308 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_a814bd60de133d95cf99630a978c017e/startup-monitor/0.log"
Mar 12 18:36:25.885680 master-0 kubenswrapper[29097]: I0312 18:36:25.885402 29097 generic.go:334] "Generic (PLEG): container finished" podID="a814bd60de133d95cf99630a978c017e" containerID="0aca5d3bee881b5f583870ed7c13fc7c245cbcbdac48aa3d637220f53a08e1f4" exitCode=137
Mar 12 18:36:25.885680 master-0 kubenswrapper[29097]: I0312 18:36:25.885482 29097 scope.go:117] "RemoveContainer" containerID="0aca5d3bee881b5f583870ed7c13fc7c245cbcbdac48aa3d637220f53a08e1f4"
Mar 12 18:36:25.885680 master-0 kubenswrapper[29097]: I0312 18:36:25.885509 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 18:36:25.919915 master-0 kubenswrapper[29097]: I0312 18:36:25.919839 29097 scope.go:117] "RemoveContainer" containerID="0aca5d3bee881b5f583870ed7c13fc7c245cbcbdac48aa3d637220f53a08e1f4"
Mar 12 18:36:25.920657 master-0 kubenswrapper[29097]: E0312 18:36:25.920588 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0aca5d3bee881b5f583870ed7c13fc7c245cbcbdac48aa3d637220f53a08e1f4\": container with ID starting with 0aca5d3bee881b5f583870ed7c13fc7c245cbcbdac48aa3d637220f53a08e1f4 not found: ID does not exist" containerID="0aca5d3bee881b5f583870ed7c13fc7c245cbcbdac48aa3d637220f53a08e1f4"
Mar 12 18:36:25.920756 master-0 kubenswrapper[29097]: I0312 18:36:25.920666 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aca5d3bee881b5f583870ed7c13fc7c245cbcbdac48aa3d637220f53a08e1f4"} err="failed to get container status \"0aca5d3bee881b5f583870ed7c13fc7c245cbcbdac48aa3d637220f53a08e1f4\": rpc error: code = NotFound desc = could not find container \"0aca5d3bee881b5f583870ed7c13fc7c245cbcbdac48aa3d637220f53a08e1f4\": container with ID starting with 0aca5d3bee881b5f583870ed7c13fc7c245cbcbdac48aa3d637220f53a08e1f4 not found: ID does not exist"
Mar 12 18:36:25.924749 master-0 kubenswrapper[29097]: I0312 18:36:25.924689 29097 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 18:36:26.744279 master-0 kubenswrapper[29097]: I0312 18:36:26.744218 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a814bd60de133d95cf99630a978c017e" path="/var/lib/kubelet/pods/a814bd60de133d95cf99630a978c017e/volumes"
Mar 12 18:36:33.724369 master-0 kubenswrapper[29097]: I0312 18:36:33.724294 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 18:36:33.730052 master-0 kubenswrapper[29097]: I0312 18:36:33.730017 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 18:36:46.599328 master-0 kubenswrapper[29097]: I0312 18:36:46.599236 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-retry-1-master-0"]
Mar 12 18:36:46.600559 master-0 kubenswrapper[29097]: E0312 18:36:46.599721 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa84cbbb-e30c-4630-9d4a-8e64b207d4bd" containerName="installer"
Mar 12 18:36:46.600559 master-0 kubenswrapper[29097]: I0312 18:36:46.599749 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa84cbbb-e30c-4630-9d4a-8e64b207d4bd" containerName="installer"
Mar 12 18:36:46.600559 master-0 kubenswrapper[29097]: E0312 18:36:46.599778 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a814bd60de133d95cf99630a978c017e" containerName="startup-monitor"
Mar 12 18:36:46.600559 master-0 kubenswrapper[29097]: I0312 18:36:46.599790 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="a814bd60de133d95cf99630a978c017e" containerName="startup-monitor"
Mar 12 18:36:46.600559 master-0 kubenswrapper[29097]: E0312 18:36:46.599840 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a514abd-72d2-4281-a679-77d4e6158c9f" containerName="installer"
Mar 12 18:36:46.600559 master-0 kubenswrapper[29097]: I0312 18:36:46.599852 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a514abd-72d2-4281-a679-77d4e6158c9f" containerName="installer"
Mar 12 18:36:46.600559 master-0 kubenswrapper[29097]: E0312 18:36:46.599881 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f8180d8-5283-409a-b36e-4786c8483171" containerName="console"
Mar 12 18:36:46.600559 master-0 kubenswrapper[29097]: I0312 18:36:46.599893 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8180d8-5283-409a-b36e-4786c8483171" containerName="console"
Mar 12 18:36:46.600559 master-0 kubenswrapper[29097]: E0312 18:36:46.599915 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f1f60fa-d79d-4f31-b5bf-2ad333151537" containerName="metrics-server"
Mar 12 18:36:46.600559 master-0 kubenswrapper[29097]: I0312 18:36:46.599926 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f1f60fa-d79d-4f31-b5bf-2ad333151537" containerName="metrics-server"
Mar 12 18:36:46.600559 master-0 kubenswrapper[29097]: I0312 18:36:46.600169 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f8180d8-5283-409a-b36e-4786c8483171" containerName="console"
Mar 12 18:36:46.600559 master-0 kubenswrapper[29097]: I0312 18:36:46.600194 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="a814bd60de133d95cf99630a978c017e" containerName="startup-monitor"
Mar 12 18:36:46.600559 master-0 kubenswrapper[29097]: I0312 18:36:46.600215 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa84cbbb-e30c-4630-9d4a-8e64b207d4bd" containerName="installer"
Mar 12 18:36:46.600559 master-0 kubenswrapper[29097]: I0312 18:36:46.600233 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a514abd-72d2-4281-a679-77d4e6158c9f" containerName="installer"
Mar 12 18:36:46.600559 master-0 kubenswrapper[29097]: I0312 18:36:46.600261 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f1f60fa-d79d-4f31-b5bf-2ad333151537" containerName="metrics-server"
Mar 12 18:36:46.602025 master-0 kubenswrapper[29097]: I0312 18:36:46.600940 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0"
Mar 12 18:36:46.607170 master-0 kubenswrapper[29097]: I0312 18:36:46.607117 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-tr8hr"
Mar 12 18:36:46.608941 master-0 kubenswrapper[29097]: I0312 18:36:46.608808 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 12 18:36:46.624938 master-0 kubenswrapper[29097]: I0312 18:36:46.624827 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-retry-1-master-0"]
Mar 12 18:36:46.682747 master-0 kubenswrapper[29097]: I0312 18:36:46.682658 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/041f65ed-8421-458e-bc4c-ffc7c6bd4660-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"041f65ed-8421-458e-bc4c-ffc7c6bd4660\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0"
Mar 12 18:36:46.683091 master-0 kubenswrapper[29097]: I0312 18:36:46.683037 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/041f65ed-8421-458e-bc4c-ffc7c6bd4660-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"041f65ed-8421-458e-bc4c-ffc7c6bd4660\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0"
Mar 12 18:36:46.683153 master-0 kubenswrapper[29097]: I0312 18:36:46.683121 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/041f65ed-8421-458e-bc4c-ffc7c6bd4660-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"041f65ed-8421-458e-bc4c-ffc7c6bd4660\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0"
Mar 12 18:36:46.784775 master-0 kubenswrapper[29097]: I0312 18:36:46.784704 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/041f65ed-8421-458e-bc4c-ffc7c6bd4660-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"041f65ed-8421-458e-bc4c-ffc7c6bd4660\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0"
Mar 12 18:36:46.784775 master-0 kubenswrapper[29097]: I0312 18:36:46.784779 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/041f65ed-8421-458e-bc4c-ffc7c6bd4660-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"041f65ed-8421-458e-bc4c-ffc7c6bd4660\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0"
Mar 12 18:36:46.785056 master-0 kubenswrapper[29097]: I0312 18:36:46.784839 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/041f65ed-8421-458e-bc4c-ffc7c6bd4660-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"041f65ed-8421-458e-bc4c-ffc7c6bd4660\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0"
Mar 12 18:36:46.785056 master-0 kubenswrapper[29097]: I0312 18:36:46.784840 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/041f65ed-8421-458e-bc4c-ffc7c6bd4660-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"041f65ed-8421-458e-bc4c-ffc7c6bd4660\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0"
Mar 12 18:36:46.785149 master-0 kubenswrapper[29097]: I0312 18:36:46.785051 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/041f65ed-8421-458e-bc4c-ffc7c6bd4660-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"041f65ed-8421-458e-bc4c-ffc7c6bd4660\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0"
Mar 12 18:36:46.801433 master-0 kubenswrapper[29097]: I0312 18:36:46.801392 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/041f65ed-8421-458e-bc4c-ffc7c6bd4660-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"041f65ed-8421-458e-bc4c-ffc7c6bd4660\") " pod="openshift-kube-controller-manager/installer-4-retry-1-master-0"
Mar 12 18:36:46.938996 master-0 kubenswrapper[29097]: I0312 18:36:46.938826 29097 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Mar 12 18:36:47.395900 master-0 kubenswrapper[29097]: I0312 18:36:47.395828 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-retry-1-master-0"] Mar 12 18:36:48.058739 master-0 kubenswrapper[29097]: I0312 18:36:48.058676 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" event={"ID":"041f65ed-8421-458e-bc4c-ffc7c6bd4660","Type":"ContainerStarted","Data":"e16003898332347835ae7c273649a43f8ef80db188d6690db59e62aadac793d9"} Mar 12 18:36:48.058739 master-0 kubenswrapper[29097]: I0312 18:36:48.058750 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" event={"ID":"041f65ed-8421-458e-bc4c-ffc7c6bd4660","Type":"ContainerStarted","Data":"c9d68f7ba3c249e475b534e5afc0e791e4836af4ccc8f2b8446edec2e46b0b8b"} Mar 12 18:36:48.080054 master-0 kubenswrapper[29097]: I0312 18:36:48.079984 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" podStartSLOduration=2.079966257 podStartE2EDuration="2.079966257s" podCreationTimestamp="2026-03-12 18:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:36:48.077976108 +0000 UTC m=+447.631956215" watchObservedRunningTime="2026-03-12 18:36:48.079966257 +0000 UTC m=+447.633946364" Mar 12 18:36:48.478851 master-0 kubenswrapper[29097]: I0312 18:36:48.477579 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"] Mar 12 18:36:48.497529 master-0 kubenswrapper[29097]: I0312 18:36:48.497452 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b98bc4d-xfxc9"] Mar 12 
18:36:48.533573 master-0 kubenswrapper[29097]: I0312 18:36:48.533408 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-56b9f847c7-f5n7l"] Mar 12 18:36:48.534751 master-0 kubenswrapper[29097]: I0312 18:36:48.534718 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:48.548574 master-0 kubenswrapper[29097]: I0312 18:36:48.548224 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-56b9f847c7-f5n7l"] Mar 12 18:36:48.620230 master-0 kubenswrapper[29097]: I0312 18:36:48.616723 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqng8\" (UniqueName: \"kubernetes.io/projected/5bf90e22-d433-4ade-9fa6-873b297d1f58-kube-api-access-gqng8\") pod \"console-56b9f847c7-f5n7l\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:48.620230 master-0 kubenswrapper[29097]: I0312 18:36:48.616785 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-oauth-serving-cert\") pod \"console-56b9f847c7-f5n7l\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:48.620230 master-0 kubenswrapper[29097]: I0312 18:36:48.616813 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-trusted-ca-bundle\") pod \"console-56b9f847c7-f5n7l\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:48.620230 master-0 kubenswrapper[29097]: I0312 18:36:48.616854 29097 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-console-config\") pod \"console-56b9f847c7-f5n7l\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:48.620230 master-0 kubenswrapper[29097]: I0312 18:36:48.616881 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-service-ca\") pod \"console-56b9f847c7-f5n7l\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:48.620230 master-0 kubenswrapper[29097]: I0312 18:36:48.616908 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5bf90e22-d433-4ade-9fa6-873b297d1f58-console-serving-cert\") pod \"console-56b9f847c7-f5n7l\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:48.620230 master-0 kubenswrapper[29097]: I0312 18:36:48.616936 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5bf90e22-d433-4ade-9fa6-873b297d1f58-console-oauth-config\") pod \"console-56b9f847c7-f5n7l\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:48.718333 master-0 kubenswrapper[29097]: I0312 18:36:48.718249 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-console-config\") pod \"console-56b9f847c7-f5n7l\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 
18:36:48.718333 master-0 kubenswrapper[29097]: I0312 18:36:48.718349 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-service-ca\") pod \"console-56b9f847c7-f5n7l\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:48.718668 master-0 kubenswrapper[29097]: I0312 18:36:48.718507 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5bf90e22-d433-4ade-9fa6-873b297d1f58-console-serving-cert\") pod \"console-56b9f847c7-f5n7l\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:48.718668 master-0 kubenswrapper[29097]: I0312 18:36:48.718584 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5bf90e22-d433-4ade-9fa6-873b297d1f58-console-oauth-config\") pod \"console-56b9f847c7-f5n7l\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:48.718668 master-0 kubenswrapper[29097]: I0312 18:36:48.718620 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqng8\" (UniqueName: \"kubernetes.io/projected/5bf90e22-d433-4ade-9fa6-873b297d1f58-kube-api-access-gqng8\") pod \"console-56b9f847c7-f5n7l\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:48.720652 master-0 kubenswrapper[29097]: I0312 18:36:48.718664 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-oauth-serving-cert\") pod \"console-56b9f847c7-f5n7l\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " 
pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:48.720652 master-0 kubenswrapper[29097]: I0312 18:36:48.719096 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-trusted-ca-bundle\") pod \"console-56b9f847c7-f5n7l\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:48.720652 master-0 kubenswrapper[29097]: I0312 18:36:48.719425 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-console-config\") pod \"console-56b9f847c7-f5n7l\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:48.720652 master-0 kubenswrapper[29097]: I0312 18:36:48.719470 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-service-ca\") pod \"console-56b9f847c7-f5n7l\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:48.720652 master-0 kubenswrapper[29097]: I0312 18:36:48.719683 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-oauth-serving-cert\") pod \"console-56b9f847c7-f5n7l\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:48.720652 master-0 kubenswrapper[29097]: I0312 18:36:48.720100 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-trusted-ca-bundle\") pod \"console-56b9f847c7-f5n7l\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " 
pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:48.723219 master-0 kubenswrapper[29097]: I0312 18:36:48.721692 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5bf90e22-d433-4ade-9fa6-873b297d1f58-console-serving-cert\") pod \"console-56b9f847c7-f5n7l\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:48.723219 master-0 kubenswrapper[29097]: I0312 18:36:48.722201 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5bf90e22-d433-4ade-9fa6-873b297d1f58-console-oauth-config\") pod \"console-56b9f847c7-f5n7l\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:48.738312 master-0 kubenswrapper[29097]: I0312 18:36:48.738210 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqng8\" (UniqueName: \"kubernetes.io/projected/5bf90e22-d433-4ade-9fa6-873b297d1f58-kube-api-access-gqng8\") pod \"console-56b9f847c7-f5n7l\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:48.871272 master-0 kubenswrapper[29097]: I0312 18:36:48.871221 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:49.300284 master-0 kubenswrapper[29097]: W0312 18:36:49.300233 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bf90e22_d433_4ade_9fa6_873b297d1f58.slice/crio-e3dc1d30bf9e5b352e5a203ce9b222a11dfecf1fa18bed1f18dbd9c41ff07225 WatchSource:0}: Error finding container e3dc1d30bf9e5b352e5a203ce9b222a11dfecf1fa18bed1f18dbd9c41ff07225: Status 404 returned error can't find the container with id e3dc1d30bf9e5b352e5a203ce9b222a11dfecf1fa18bed1f18dbd9c41ff07225 Mar 12 18:36:49.309614 master-0 kubenswrapper[29097]: I0312 18:36:49.309578 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-56b9f847c7-f5n7l"] Mar 12 18:36:50.079563 master-0 kubenswrapper[29097]: I0312 18:36:50.078927 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56b9f847c7-f5n7l" event={"ID":"5bf90e22-d433-4ade-9fa6-873b297d1f58","Type":"ContainerStarted","Data":"6974ec10a99c79fe4f7c5b5a49cbef1b19e8efa3b652c51be4dfe65f3f7b7275"} Mar 12 18:36:50.079563 master-0 kubenswrapper[29097]: I0312 18:36:50.079019 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56b9f847c7-f5n7l" event={"ID":"5bf90e22-d433-4ade-9fa6-873b297d1f58","Type":"ContainerStarted","Data":"e3dc1d30bf9e5b352e5a203ce9b222a11dfecf1fa18bed1f18dbd9c41ff07225"} Mar 12 18:36:50.107491 master-0 kubenswrapper[29097]: I0312 18:36:50.107350 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-56b9f847c7-f5n7l" podStartSLOduration=2.107317533 podStartE2EDuration="2.107317533s" podCreationTimestamp="2026-03-12 18:36:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:36:50.103560349 +0000 UTC m=+449.657540526" 
watchObservedRunningTime="2026-03-12 18:36:50.107317533 +0000 UTC m=+449.661297630" Mar 12 18:36:58.872316 master-0 kubenswrapper[29097]: I0312 18:36:58.872247 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:58.872943 master-0 kubenswrapper[29097]: I0312 18:36:58.872590 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:58.876927 master-0 kubenswrapper[29097]: I0312 18:36:58.876886 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:59.144626 master-0 kubenswrapper[29097]: I0312 18:36:59.144400 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:36:59.461794 master-0 kubenswrapper[29097]: I0312 18:36:59.458939 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-fb478b976-xhpmp"] Mar 12 18:37:13.512332 master-0 kubenswrapper[29097]: I0312 18:37:13.512230 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn" podUID="dd1c9272-702a-4134-b476-91ee66dc43dc" containerName="oauth-openshift" containerID="cri-o://7975f652d90e3f50e0487660fee053da7f9566334eafd0f9ddd007f615d8d957" gracePeriod=15 Mar 12 18:37:13.559053 master-0 kubenswrapper[29097]: I0312 18:37:13.558979 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6b98bc4d-xfxc9" podUID="f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" containerName="console" containerID="cri-o://8b1c5296f22b65b4ce024e0502c4e4b80cc4d40a1ff3097484acb7dcf8d96015" gracePeriod=15 Mar 12 18:37:14.110230 master-0 kubenswrapper[29097]: I0312 18:37:14.110177 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn" Mar 12 18:37:14.117161 master-0 kubenswrapper[29097]: I0312 18:37:14.117125 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b98bc4d-xfxc9_f5dd5dc7-7bc4-4154-8aac-876d2a0ae565/console/0.log" Mar 12 18:37:14.117319 master-0 kubenswrapper[29097]: I0312 18:37:14.117193 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b98bc4d-xfxc9" Mar 12 18:37:14.475299 master-0 kubenswrapper[29097]: I0312 18:37:14.475157 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-session\") pod \"dd1c9272-702a-4134-b476-91ee66dc43dc\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " Mar 12 18:37:14.475299 master-0 kubenswrapper[29097]: I0312 18:37:14.475222 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-cliconfig\") pod \"dd1c9272-702a-4134-b476-91ee66dc43dc\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " Mar 12 18:37:14.475299 master-0 kubenswrapper[29097]: I0312 18:37:14.475250 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-console-oauth-config\") pod \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " Mar 12 18:37:14.475770 master-0 kubenswrapper[29097]: I0312 18:37:14.475318 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-router-certs\") pod 
\"dd1c9272-702a-4134-b476-91ee66dc43dc\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " Mar 12 18:37:14.475770 master-0 kubenswrapper[29097]: I0312 18:37:14.475365 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-user-template-provider-selection\") pod \"dd1c9272-702a-4134-b476-91ee66dc43dc\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " Mar 12 18:37:14.475770 master-0 kubenswrapper[29097]: I0312 18:37:14.475390 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-audit-policies\") pod \"dd1c9272-702a-4134-b476-91ee66dc43dc\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " Mar 12 18:37:14.475770 master-0 kubenswrapper[29097]: I0312 18:37:14.475459 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-oauth-serving-cert\") pod \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " Mar 12 18:37:14.476035 master-0 kubenswrapper[29097]: I0312 18:37:14.475790 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd1c9272-702a-4134-b476-91ee66dc43dc-audit-dir\") pod \"dd1c9272-702a-4134-b476-91ee66dc43dc\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " Mar 12 18:37:14.476035 master-0 kubenswrapper[29097]: I0312 18:37:14.475973 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-user-template-error\") pod \"dd1c9272-702a-4134-b476-91ee66dc43dc\" (UID: 
\"dd1c9272-702a-4134-b476-91ee66dc43dc\") " Mar 12 18:37:14.476035 master-0 kubenswrapper[29097]: I0312 18:37:14.476014 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv5l6\" (UniqueName: \"kubernetes.io/projected/dd1c9272-702a-4134-b476-91ee66dc43dc-kube-api-access-kv5l6\") pod \"dd1c9272-702a-4134-b476-91ee66dc43dc\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " Mar 12 18:37:14.476162 master-0 kubenswrapper[29097]: I0312 18:37:14.476040 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g94cv\" (UniqueName: \"kubernetes.io/projected/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-kube-api-access-g94cv\") pod \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " Mar 12 18:37:14.476162 master-0 kubenswrapper[29097]: I0312 18:37:14.476073 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-serving-cert\") pod \"dd1c9272-702a-4134-b476-91ee66dc43dc\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " Mar 12 18:37:14.476162 master-0 kubenswrapper[29097]: I0312 18:37:14.476101 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-user-template-login\") pod \"dd1c9272-702a-4134-b476-91ee66dc43dc\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " Mar 12 18:37:14.476162 master-0 kubenswrapper[29097]: I0312 18:37:14.476127 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-trusted-ca-bundle\") pod \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " Mar 12 
18:37:14.476162 master-0 kubenswrapper[29097]: I0312 18:37:14.476155 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-service-ca\") pod \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " Mar 12 18:37:14.476392 master-0 kubenswrapper[29097]: I0312 18:37:14.476202 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-trusted-ca-bundle\") pod \"dd1c9272-702a-4134-b476-91ee66dc43dc\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " Mar 12 18:37:14.476392 master-0 kubenswrapper[29097]: I0312 18:37:14.476243 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-console-serving-cert\") pod \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " Mar 12 18:37:14.477922 master-0 kubenswrapper[29097]: I0312 18:37:14.477447 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" (UID: "f5dd5dc7-7bc4-4154-8aac-876d2a0ae565"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:37:14.478243 master-0 kubenswrapper[29097]: I0312 18:37:14.478202 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "dd1c9272-702a-4134-b476-91ee66dc43dc" (UID: "dd1c9272-702a-4134-b476-91ee66dc43dc"). 
InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:37:14.483665 master-0 kubenswrapper[29097]: I0312 18:37:14.483639 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "dd1c9272-702a-4134-b476-91ee66dc43dc" (UID: "dd1c9272-702a-4134-b476-91ee66dc43dc"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:37:14.483820 master-0 kubenswrapper[29097]: I0312 18:37:14.483750 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd1c9272-702a-4134-b476-91ee66dc43dc-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "dd1c9272-702a-4134-b476-91ee66dc43dc" (UID: "dd1c9272-702a-4134-b476-91ee66dc43dc"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:37:14.484356 master-0 kubenswrapper[29097]: I0312 18:37:14.484337 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-ocp-branding-template\") pod \"dd1c9272-702a-4134-b476-91ee66dc43dc\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " Mar 12 18:37:14.484490 master-0 kubenswrapper[29097]: I0312 18:37:14.484474 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-console-config\") pod \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\" (UID: \"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565\") " Mar 12 18:37:14.484616 master-0 kubenswrapper[29097]: I0312 18:37:14.484596 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-service-ca\") pod \"dd1c9272-702a-4134-b476-91ee66dc43dc\" (UID: \"dd1c9272-702a-4134-b476-91ee66dc43dc\") " Mar 12 18:37:14.485307 master-0 kubenswrapper[29097]: I0312 18:37:14.485289 29097 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:14.485402 master-0 kubenswrapper[29097]: I0312 18:37:14.485388 29097 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:14.485821 master-0 kubenswrapper[29097]: I0312 18:37:14.485803 29097 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:14.485927 master-0 kubenswrapper[29097]: I0312 18:37:14.485911 29097 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd1c9272-702a-4134-b476-91ee66dc43dc-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:14.486000 master-0 kubenswrapper[29097]: I0312 18:37:14.484399 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "dd1c9272-702a-4134-b476-91ee66dc43dc" (UID: "dd1c9272-702a-4134-b476-91ee66dc43dc"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:37:14.486076 master-0 kubenswrapper[29097]: I0312 18:37:14.484572 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "dd1c9272-702a-4134-b476-91ee66dc43dc" (UID: "dd1c9272-702a-4134-b476-91ee66dc43dc"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:37:14.486155 master-0 kubenswrapper[29097]: I0312 18:37:14.485105 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" (UID: "f5dd5dc7-7bc4-4154-8aac-876d2a0ae565"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:37:14.494009 master-0 kubenswrapper[29097]: I0312 18:37:14.487798 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-service-ca" (OuterVolumeSpecName: "service-ca") pod "f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" (UID: "f5dd5dc7-7bc4-4154-8aac-876d2a0ae565"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:37:14.494009 master-0 kubenswrapper[29097]: I0312 18:37:14.487984 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "dd1c9272-702a-4134-b476-91ee66dc43dc" (UID: "dd1c9272-702a-4134-b476-91ee66dc43dc"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:37:14.494009 master-0 kubenswrapper[29097]: I0312 18:37:14.488432 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" (UID: "f5dd5dc7-7bc4-4154-8aac-876d2a0ae565"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:37:14.494009 master-0 kubenswrapper[29097]: I0312 18:37:14.489246 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "dd1c9272-702a-4134-b476-91ee66dc43dc" (UID: "dd1c9272-702a-4134-b476-91ee66dc43dc"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:37:14.494009 master-0 kubenswrapper[29097]: I0312 18:37:14.489625 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "dd1c9272-702a-4134-b476-91ee66dc43dc" (UID: "dd1c9272-702a-4134-b476-91ee66dc43dc"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:37:14.494009 master-0 kubenswrapper[29097]: I0312 18:37:14.492724 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" (UID: "f5dd5dc7-7bc4-4154-8aac-876d2a0ae565"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:37:14.494009 master-0 kubenswrapper[29097]: I0312 18:37:14.492721 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "dd1c9272-702a-4134-b476-91ee66dc43dc" (UID: "dd1c9272-702a-4134-b476-91ee66dc43dc"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:37:14.496058 master-0 kubenswrapper[29097]: I0312 18:37:14.495272 29097 generic.go:334] "Generic (PLEG): container finished" podID="dd1c9272-702a-4134-b476-91ee66dc43dc" containerID="7975f652d90e3f50e0487660fee053da7f9566334eafd0f9ddd007f615d8d957" exitCode=0 Mar 12 18:37:14.496058 master-0 kubenswrapper[29097]: I0312 18:37:14.495372 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn" event={"ID":"dd1c9272-702a-4134-b476-91ee66dc43dc","Type":"ContainerDied","Data":"7975f652d90e3f50e0487660fee053da7f9566334eafd0f9ddd007f615d8d957"} Mar 12 18:37:14.496058 master-0 kubenswrapper[29097]: I0312 18:37:14.495401 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn" event={"ID":"dd1c9272-702a-4134-b476-91ee66dc43dc","Type":"ContainerDied","Data":"39c1a12a17beecbab75c81b6d4e44c42daad16690939a63c1ab604700c07f752"} Mar 12 18:37:14.496058 master-0 kubenswrapper[29097]: I0312 18:37:14.495416 29097 scope.go:117] "RemoveContainer" containerID="7975f652d90e3f50e0487660fee053da7f9566334eafd0f9ddd007f615d8d957" Mar 12 18:37:14.496058 master-0 kubenswrapper[29097]: I0312 18:37:14.495556 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn" Mar 12 18:37:14.496292 master-0 kubenswrapper[29097]: I0312 18:37:14.496225 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "dd1c9272-702a-4134-b476-91ee66dc43dc" (UID: "dd1c9272-702a-4134-b476-91ee66dc43dc"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:37:14.496542 master-0 kubenswrapper[29097]: I0312 18:37:14.496492 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-console-config" (OuterVolumeSpecName: "console-config") pod "f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" (UID: "f5dd5dc7-7bc4-4154-8aac-876d2a0ae565"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:37:14.502730 master-0 kubenswrapper[29097]: I0312 18:37:14.500626 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf"] Mar 12 18:37:14.502730 master-0 kubenswrapper[29097]: E0312 18:37:14.500933 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd1c9272-702a-4134-b476-91ee66dc43dc" containerName="oauth-openshift" Mar 12 18:37:14.502730 master-0 kubenswrapper[29097]: I0312 18:37:14.500950 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd1c9272-702a-4134-b476-91ee66dc43dc" containerName="oauth-openshift" Mar 12 18:37:14.502730 master-0 kubenswrapper[29097]: E0312 18:37:14.500974 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" containerName="console" Mar 12 18:37:14.502730 master-0 kubenswrapper[29097]: I0312 18:37:14.500983 29097 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" containerName="console" Mar 12 18:37:14.502730 master-0 kubenswrapper[29097]: I0312 18:37:14.501153 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" containerName="console" Mar 12 18:37:14.502730 master-0 kubenswrapper[29097]: I0312 18:37:14.501205 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd1c9272-702a-4134-b476-91ee66dc43dc" containerName="oauth-openshift" Mar 12 18:37:14.502730 master-0 kubenswrapper[29097]: I0312 18:37:14.501871 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.503354 master-0 kubenswrapper[29097]: I0312 18:37:14.503324 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b98bc4d-xfxc9_f5dd5dc7-7bc4-4154-8aac-876d2a0ae565/console/0.log" Mar 12 18:37:14.503528 master-0 kubenswrapper[29097]: I0312 18:37:14.503485 29097 generic.go:334] "Generic (PLEG): container finished" podID="f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" containerID="8b1c5296f22b65b4ce024e0502c4e4b80cc4d40a1ff3097484acb7dcf8d96015" exitCode=2 Mar 12 18:37:14.503648 master-0 kubenswrapper[29097]: I0312 18:37:14.503625 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b98bc4d-xfxc9" event={"ID":"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565","Type":"ContainerDied","Data":"8b1c5296f22b65b4ce024e0502c4e4b80cc4d40a1ff3097484acb7dcf8d96015"} Mar 12 18:37:14.503744 master-0 kubenswrapper[29097]: I0312 18:37:14.503728 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b98bc4d-xfxc9" event={"ID":"f5dd5dc7-7bc4-4154-8aac-876d2a0ae565","Type":"ContainerDied","Data":"8e244a931ddc4ee85f82052bb7168c2225b1fc274389de87fe03a130410d8991"} Mar 12 18:37:14.503883 master-0 kubenswrapper[29097]: I0312 18:37:14.503868 29097 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-console/console-6b98bc4d-xfxc9" Mar 12 18:37:14.504742 master-0 kubenswrapper[29097]: I0312 18:37:14.504680 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd1c9272-702a-4134-b476-91ee66dc43dc-kube-api-access-kv5l6" (OuterVolumeSpecName: "kube-api-access-kv5l6") pod "dd1c9272-702a-4134-b476-91ee66dc43dc" (UID: "dd1c9272-702a-4134-b476-91ee66dc43dc"). InnerVolumeSpecName "kube-api-access-kv5l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:37:14.514016 master-0 kubenswrapper[29097]: I0312 18:37:14.513969 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-kube-api-access-g94cv" (OuterVolumeSpecName: "kube-api-access-g94cv") pod "f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" (UID: "f5dd5dc7-7bc4-4154-8aac-876d2a0ae565"). InnerVolumeSpecName "kube-api-access-g94cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:37:14.514016 master-0 kubenswrapper[29097]: I0312 18:37:14.513972 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "dd1c9272-702a-4134-b476-91ee66dc43dc" (UID: "dd1c9272-702a-4134-b476-91ee66dc43dc"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:37:14.514473 master-0 kubenswrapper[29097]: I0312 18:37:14.514127 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "dd1c9272-702a-4134-b476-91ee66dc43dc" (UID: "dd1c9272-702a-4134-b476-91ee66dc43dc"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:37:14.524998 master-0 kubenswrapper[29097]: I0312 18:37:14.522555 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf"] Mar 12 18:37:14.535049 master-0 kubenswrapper[29097]: I0312 18:37:14.534651 29097 scope.go:117] "RemoveContainer" containerID="7975f652d90e3f50e0487660fee053da7f9566334eafd0f9ddd007f615d8d957" Mar 12 18:37:14.536447 master-0 kubenswrapper[29097]: E0312 18:37:14.536407 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7975f652d90e3f50e0487660fee053da7f9566334eafd0f9ddd007f615d8d957\": container with ID starting with 7975f652d90e3f50e0487660fee053da7f9566334eafd0f9ddd007f615d8d957 not found: ID does not exist" containerID="7975f652d90e3f50e0487660fee053da7f9566334eafd0f9ddd007f615d8d957" Mar 12 18:37:14.536675 master-0 kubenswrapper[29097]: I0312 18:37:14.536448 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7975f652d90e3f50e0487660fee053da7f9566334eafd0f9ddd007f615d8d957"} err="failed to get container status \"7975f652d90e3f50e0487660fee053da7f9566334eafd0f9ddd007f615d8d957\": rpc error: code = NotFound desc = could not find container \"7975f652d90e3f50e0487660fee053da7f9566334eafd0f9ddd007f615d8d957\": container with ID starting with 7975f652d90e3f50e0487660fee053da7f9566334eafd0f9ddd007f615d8d957 not found: ID does not exist" Mar 12 18:37:14.536675 master-0 kubenswrapper[29097]: I0312 18:37:14.536478 29097 scope.go:117] "RemoveContainer" containerID="8b1c5296f22b65b4ce024e0502c4e4b80cc4d40a1ff3097484acb7dcf8d96015" Mar 12 18:37:14.560066 master-0 kubenswrapper[29097]: I0312 18:37:14.560001 29097 scope.go:117] "RemoveContainer" containerID="8b1c5296f22b65b4ce024e0502c4e4b80cc4d40a1ff3097484acb7dcf8d96015" Mar 12 
18:37:14.561587 master-0 kubenswrapper[29097]: E0312 18:37:14.561504 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b1c5296f22b65b4ce024e0502c4e4b80cc4d40a1ff3097484acb7dcf8d96015\": container with ID starting with 8b1c5296f22b65b4ce024e0502c4e4b80cc4d40a1ff3097484acb7dcf8d96015 not found: ID does not exist" containerID="8b1c5296f22b65b4ce024e0502c4e4b80cc4d40a1ff3097484acb7dcf8d96015" Mar 12 18:37:14.561642 master-0 kubenswrapper[29097]: I0312 18:37:14.561589 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b1c5296f22b65b4ce024e0502c4e4b80cc4d40a1ff3097484acb7dcf8d96015"} err="failed to get container status \"8b1c5296f22b65b4ce024e0502c4e4b80cc4d40a1ff3097484acb7dcf8d96015\": rpc error: code = NotFound desc = could not find container \"8b1c5296f22b65b4ce024e0502c4e4b80cc4d40a1ff3097484acb7dcf8d96015\": container with ID starting with 8b1c5296f22b65b4ce024e0502c4e4b80cc4d40a1ff3097484acb7dcf8d96015 not found: ID does not exist" Mar 12 18:37:14.587458 master-0 kubenswrapper[29097]: I0312 18:37:14.587403 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25ded464-44eb-4070-83c7-245528c9ba11-audit-dir\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.587458 master-0 kubenswrapper[29097]: I0312 18:37:14.587452 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " 
pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.587458 master-0 kubenswrapper[29097]: I0312 18:37:14.587472 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.587748 master-0 kubenswrapper[29097]: I0312 18:37:14.587489 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.587748 master-0 kubenswrapper[29097]: I0312 18:37:14.587574 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-system-session\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.587748 master-0 kubenswrapper[29097]: I0312 18:37:14.587593 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qfzz\" (UniqueName: \"kubernetes.io/projected/25ded464-44eb-4070-83c7-245528c9ba11-kube-api-access-4qfzz\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.587748 master-0 
kubenswrapper[29097]: I0312 18:37:14.587627 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.587748 master-0 kubenswrapper[29097]: I0312 18:37:14.587652 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.587748 master-0 kubenswrapper[29097]: I0312 18:37:14.587673 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-user-template-error\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.587748 master-0 kubenswrapper[29097]: I0312 18:37:14.587698 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-user-template-login\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.587748 master-0 kubenswrapper[29097]: I0312 18:37:14.587726 29097 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/25ded464-44eb-4070-83c7-245528c9ba11-audit-policies\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.587748 master-0 kubenswrapper[29097]: I0312 18:37:14.587744 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.588000 master-0 kubenswrapper[29097]: I0312 18:37:14.587776 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.588000 master-0 kubenswrapper[29097]: I0312 18:37:14.587948 29097 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:14.588000 master-0 kubenswrapper[29097]: I0312 18:37:14.587985 29097 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:14.588000 
master-0 kubenswrapper[29097]: I0312 18:37:14.587996 29097 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:14.588111 master-0 kubenswrapper[29097]: I0312 18:37:14.588008 29097 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:14.588111 master-0 kubenswrapper[29097]: I0312 18:37:14.588018 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv5l6\" (UniqueName: \"kubernetes.io/projected/dd1c9272-702a-4134-b476-91ee66dc43dc-kube-api-access-kv5l6\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:14.588111 master-0 kubenswrapper[29097]: I0312 18:37:14.588028 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g94cv\" (UniqueName: \"kubernetes.io/projected/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-kube-api-access-g94cv\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:14.588111 master-0 kubenswrapper[29097]: I0312 18:37:14.588037 29097 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:14.588111 master-0 kubenswrapper[29097]: I0312 18:37:14.588069 29097 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:14.588111 master-0 kubenswrapper[29097]: I0312 18:37:14.588079 29097 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:14.588111 master-0 kubenswrapper[29097]: I0312 18:37:14.588089 29097 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:14.588111 master-0 kubenswrapper[29097]: I0312 18:37:14.588099 29097 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:14.588111 master-0 kubenswrapper[29097]: I0312 18:37:14.588107 29097 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:14.588364 master-0 kubenswrapper[29097]: I0312 18:37:14.588117 29097 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:14.588364 master-0 kubenswrapper[29097]: I0312 18:37:14.588149 29097 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-console-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:14.588364 master-0 kubenswrapper[29097]: I0312 18:37:14.588159 29097 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd1c9272-702a-4134-b476-91ee66dc43dc-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:14.588364 master-0 
kubenswrapper[29097]: I0312 18:37:14.588168 29097 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:14.689087 master-0 kubenswrapper[29097]: I0312 18:37:14.689040 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.689294 master-0 kubenswrapper[29097]: I0312 18:37:14.689113 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25ded464-44eb-4070-83c7-245528c9ba11-audit-dir\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.689294 master-0 kubenswrapper[29097]: I0312 18:37:14.689136 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.689294 master-0 kubenswrapper[29097]: I0312 18:37:14.689152 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: 
\"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.689294 master-0 kubenswrapper[29097]: I0312 18:37:14.689172 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.689294 master-0 kubenswrapper[29097]: I0312 18:37:14.689198 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-system-session\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.689294 master-0 kubenswrapper[29097]: I0312 18:37:14.689212 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qfzz\" (UniqueName: \"kubernetes.io/projected/25ded464-44eb-4070-83c7-245528c9ba11-kube-api-access-4qfzz\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.689294 master-0 kubenswrapper[29097]: I0312 18:37:14.689234 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.689294 master-0 kubenswrapper[29097]: I0312 18:37:14.689258 
29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.689294 master-0 kubenswrapper[29097]: I0312 18:37:14.689276 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-user-template-error\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.689573 master-0 kubenswrapper[29097]: I0312 18:37:14.689299 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-user-template-login\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.689573 master-0 kubenswrapper[29097]: I0312 18:37:14.689330 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/25ded464-44eb-4070-83c7-245528c9ba11-audit-policies\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.689573 master-0 kubenswrapper[29097]: I0312 18:37:14.689349 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.690878 master-0 kubenswrapper[29097]: I0312 18:37:14.690831 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25ded464-44eb-4070-83c7-245528c9ba11-audit-dir\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.690942 master-0 kubenswrapper[29097]: I0312 18:37:14.690846 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.691125 master-0 kubenswrapper[29097]: I0312 18:37:14.691102 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.691311 master-0 kubenswrapper[29097]: I0312 18:37:14.691285 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/25ded464-44eb-4070-83c7-245528c9ba11-audit-policies\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 
18:37:14.691374 master-0 kubenswrapper[29097]: I0312 18:37:14.691314 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.695093 master-0 kubenswrapper[29097]: I0312 18:37:14.695063 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-user-template-error\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.695238 master-0 kubenswrapper[29097]: I0312 18:37:14.695208 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.695307 master-0 kubenswrapper[29097]: I0312 18:37:14.695274 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-system-session\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.695523 master-0 kubenswrapper[29097]: I0312 18:37:14.695488 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.696338 master-0 kubenswrapper[29097]: I0312 18:37:14.696300 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.696646 master-0 kubenswrapper[29097]: I0312 18:37:14.696625 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.696646 master-0 kubenswrapper[29097]: I0312 18:37:14.696638 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/25ded464-44eb-4070-83c7-245528c9ba11-v4-0-config-user-template-login\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.713961 master-0 kubenswrapper[29097]: I0312 18:37:14.713926 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qfzz\" (UniqueName: \"kubernetes.io/projected/25ded464-44eb-4070-83c7-245528c9ba11-kube-api-access-4qfzz\") pod \"oauth-openshift-5d5c7bc8d7-tldhf\" (UID: \"25ded464-44eb-4070-83c7-245528c9ba11\") " 
pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.818033 master-0 kubenswrapper[29097]: I0312 18:37:14.816130 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"] Mar 12 18:37:14.818903 master-0 kubenswrapper[29097]: I0312 18:37:14.818884 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:14.831996 master-0 kubenswrapper[29097]: I0312 18:37:14.831769 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-7c9f57fd64-zpdxn"] Mar 12 18:37:14.844134 master-0 kubenswrapper[29097]: I0312 18:37:14.844082 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b98bc4d-xfxc9"] Mar 12 18:37:14.850189 master-0 kubenswrapper[29097]: I0312 18:37:14.850044 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6b98bc4d-xfxc9"] Mar 12 18:37:15.202945 master-0 kubenswrapper[29097]: I0312 18:37:15.202898 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf"] Mar 12 18:37:15.513650 master-0 kubenswrapper[29097]: I0312 18:37:15.513501 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" event={"ID":"25ded464-44eb-4070-83c7-245528c9ba11","Type":"ContainerStarted","Data":"c92d83ad76c66f5ac2f9c7c82d4c15900bcd7dd12946022d6e7cfeb276a1a17b"} Mar 12 18:37:15.513650 master-0 kubenswrapper[29097]: I0312 18:37:15.513560 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" event={"ID":"25ded464-44eb-4070-83c7-245528c9ba11","Type":"ContainerStarted","Data":"9485e5fb8ffcc4374851fe1c299d9aa6f08d82603a56295cbfa6e6a8e77ed69e"} Mar 12 18:37:15.514013 master-0 kubenswrapper[29097]: I0312 
18:37:15.513986 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:15.535860 master-0 kubenswrapper[29097]: I0312 18:37:15.535781 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" podStartSLOduration=27.535759819 podStartE2EDuration="27.535759819s" podCreationTimestamp="2026-03-12 18:36:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:37:15.531883513 +0000 UTC m=+475.085863630" watchObservedRunningTime="2026-03-12 18:37:15.535759819 +0000 UTC m=+475.089739926" Mar 12 18:37:15.902954 master-0 kubenswrapper[29097]: I0312 18:37:15.902892 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5d5c7bc8d7-tldhf" Mar 12 18:37:16.728710 master-0 kubenswrapper[29097]: I0312 18:37:16.728653 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd1c9272-702a-4134-b476-91ee66dc43dc" path="/var/lib/kubelet/pods/dd1c9272-702a-4134-b476-91ee66dc43dc/volumes" Mar 12 18:37:16.729675 master-0 kubenswrapper[29097]: I0312 18:37:16.729211 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5dd5dc7-7bc4-4154-8aac-876d2a0ae565" path="/var/lib/kubelet/pods/f5dd5dc7-7bc4-4154-8aac-876d2a0ae565/volumes" Mar 12 18:37:20.356975 master-0 kubenswrapper[29097]: I0312 18:37:20.356915 29097 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 18:37:20.357632 master-0 kubenswrapper[29097]: I0312 18:37:20.357302 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="49835aec35bdc5feca0d7cf24779b8da" 
containerName="cluster-policy-controller" containerID="cri-o://f62d8ace6b78a3d4700c1f018543131bd0581db10be6e4a0ffe1a906b4efcd0a" gracePeriod=30 Mar 12 18:37:20.357632 master-0 kubenswrapper[29097]: I0312 18:37:20.357381 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://65937ab55749d18637d8330aa44ceee2c94d4c78aeac6055c20ae0425fb42bf6" gracePeriod=30 Mar 12 18:37:20.357632 master-0 kubenswrapper[29097]: I0312 18:37:20.357385 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://69f074324c83bb75cf9edb644e26f8a566f617056c611c320fb03fd80290ef36" gracePeriod=30 Mar 12 18:37:20.357780 master-0 kubenswrapper[29097]: I0312 18:37:20.357483 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager" containerID="cri-o://27ec01e446898cdb09325e858095825a7ec9b233787886936fcd21a787d5965b" gracePeriod=30 Mar 12 18:37:20.365766 master-0 kubenswrapper[29097]: I0312 18:37:20.365680 29097 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 18:37:20.366301 master-0 kubenswrapper[29097]: E0312 18:37:20.366263 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="cluster-policy-controller" Mar 12 18:37:20.366383 master-0 kubenswrapper[29097]: I0312 18:37:20.366299 29097 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="cluster-policy-controller" Mar 12 18:37:20.366383 master-0 kubenswrapper[29097]: E0312 18:37:20.366328 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager-cert-syncer" Mar 12 18:37:20.366383 master-0 kubenswrapper[29097]: I0312 18:37:20.366345 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager-cert-syncer" Mar 12 18:37:20.366572 master-0 kubenswrapper[29097]: E0312 18:37:20.366388 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager-recovery-controller" Mar 12 18:37:20.366572 master-0 kubenswrapper[29097]: I0312 18:37:20.366407 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager-recovery-controller" Mar 12 18:37:20.366572 master-0 kubenswrapper[29097]: E0312 18:37:20.366435 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager" Mar 12 18:37:20.366572 master-0 kubenswrapper[29097]: I0312 18:37:20.366451 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager" Mar 12 18:37:20.366572 master-0 kubenswrapper[29097]: E0312 18:37:20.366494 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager" Mar 12 18:37:20.366572 master-0 kubenswrapper[29097]: I0312 18:37:20.366510 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager" Mar 12 18:37:20.366572 master-0 kubenswrapper[29097]: E0312 18:37:20.366570 29097 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager" Mar 12 18:37:20.366892 master-0 kubenswrapper[29097]: I0312 18:37:20.366586 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager" Mar 12 18:37:20.366892 master-0 kubenswrapper[29097]: E0312 18:37:20.366603 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager" Mar 12 18:37:20.366892 master-0 kubenswrapper[29097]: I0312 18:37:20.366618 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager" Mar 12 18:37:20.367065 master-0 kubenswrapper[29097]: I0312 18:37:20.366887 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager" Mar 12 18:37:20.367065 master-0 kubenswrapper[29097]: I0312 18:37:20.366921 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager" Mar 12 18:37:20.367065 master-0 kubenswrapper[29097]: I0312 18:37:20.366940 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="cluster-policy-controller" Mar 12 18:37:20.367065 master-0 kubenswrapper[29097]: I0312 18:37:20.366968 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager" Mar 12 18:37:20.367065 master-0 kubenswrapper[29097]: I0312 18:37:20.367003 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager-cert-syncer" Mar 12 18:37:20.367065 master-0 kubenswrapper[29097]: I0312 18:37:20.367030 29097 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager-recovery-controller" Mar 12 18:37:20.368900 master-0 kubenswrapper[29097]: I0312 18:37:20.368501 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="49835aec35bdc5feca0d7cf24779b8da" containerName="kube-controller-manager" Mar 12 18:37:20.487590 master-0 kubenswrapper[29097]: I0312 18:37:20.487498 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f1e95f2c9985261e0b98e3f11e9f2080-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f1e95f2c9985261e0b98e3f11e9f2080\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:37:20.488255 master-0 kubenswrapper[29097]: I0312 18:37:20.488078 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f1e95f2c9985261e0b98e3f11e9f2080-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f1e95f2c9985261e0b98e3f11e9f2080\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:37:20.527247 master-0 kubenswrapper[29097]: I0312 18:37:20.527176 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_49835aec35bdc5feca0d7cf24779b8da/kube-controller-manager/2.log" Mar 12 18:37:20.529136 master-0 kubenswrapper[29097]: I0312 18:37:20.529105 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_49835aec35bdc5feca0d7cf24779b8da/kube-controller-manager-cert-syncer/0.log" Mar 12 18:37:20.530632 master-0 kubenswrapper[29097]: I0312 18:37:20.529484 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:37:20.533368 master-0 kubenswrapper[29097]: I0312 18:37:20.533331 29097 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="49835aec35bdc5feca0d7cf24779b8da" podUID="f1e95f2c9985261e0b98e3f11e9f2080" Mar 12 18:37:20.558383 master-0 kubenswrapper[29097]: I0312 18:37:20.558297 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_49835aec35bdc5feca0d7cf24779b8da/kube-controller-manager/2.log" Mar 12 18:37:20.559226 master-0 kubenswrapper[29097]: I0312 18:37:20.559162 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_49835aec35bdc5feca0d7cf24779b8da/kube-controller-manager-cert-syncer/0.log" Mar 12 18:37:20.559658 master-0 kubenswrapper[29097]: I0312 18:37:20.559609 29097 generic.go:334] "Generic (PLEG): container finished" podID="49835aec35bdc5feca0d7cf24779b8da" containerID="27ec01e446898cdb09325e858095825a7ec9b233787886936fcd21a787d5965b" exitCode=0 Mar 12 18:37:20.559658 master-0 kubenswrapper[29097]: I0312 18:37:20.559650 29097 generic.go:334] "Generic (PLEG): container finished" podID="49835aec35bdc5feca0d7cf24779b8da" containerID="65937ab55749d18637d8330aa44ceee2c94d4c78aeac6055c20ae0425fb42bf6" exitCode=0 Mar 12 18:37:20.559761 master-0 kubenswrapper[29097]: I0312 18:37:20.559661 29097 generic.go:334] "Generic (PLEG): container finished" podID="49835aec35bdc5feca0d7cf24779b8da" containerID="69f074324c83bb75cf9edb644e26f8a566f617056c611c320fb03fd80290ef36" exitCode=2 Mar 12 18:37:20.559761 master-0 kubenswrapper[29097]: I0312 18:37:20.559673 29097 generic.go:334] "Generic (PLEG): container finished" podID="49835aec35bdc5feca0d7cf24779b8da" 
containerID="f62d8ace6b78a3d4700c1f018543131bd0581db10be6e4a0ffe1a906b4efcd0a" exitCode=0 Mar 12 18:37:20.559761 master-0 kubenswrapper[29097]: I0312 18:37:20.559712 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="677d598751a9389168de4b8d58e7ebd447bfc781ea7149cdcbbd5de656faaac5" Mar 12 18:37:20.559761 master-0 kubenswrapper[29097]: I0312 18:37:20.559729 29097 scope.go:117] "RemoveContainer" containerID="d8bc1ab80b512a9b34b5a39f82b7bfc61939a83d8fe4158a7181d62b837fd9c1" Mar 12 18:37:20.559928 master-0 kubenswrapper[29097]: I0312 18:37:20.559870 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:37:20.564677 master-0 kubenswrapper[29097]: I0312 18:37:20.564610 29097 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="49835aec35bdc5feca0d7cf24779b8da" podUID="f1e95f2c9985261e0b98e3f11e9f2080" Mar 12 18:37:20.589756 master-0 kubenswrapper[29097]: I0312 18:37:20.589713 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f1e95f2c9985261e0b98e3f11e9f2080-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f1e95f2c9985261e0b98e3f11e9f2080\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:37:20.589837 master-0 kubenswrapper[29097]: I0312 18:37:20.589813 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f1e95f2c9985261e0b98e3f11e9f2080-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f1e95f2c9985261e0b98e3f11e9f2080\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:37:20.589894 master-0 kubenswrapper[29097]: I0312 18:37:20.589866 29097 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f1e95f2c9985261e0b98e3f11e9f2080-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f1e95f2c9985261e0b98e3f11e9f2080\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:37:20.589956 master-0 kubenswrapper[29097]: I0312 18:37:20.589937 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f1e95f2c9985261e0b98e3f11e9f2080-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"f1e95f2c9985261e0b98e3f11e9f2080\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:37:20.691450 master-0 kubenswrapper[29097]: I0312 18:37:20.691275 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49835aec35bdc5feca0d7cf24779b8da-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "49835aec35bdc5feca0d7cf24779b8da" (UID: "49835aec35bdc5feca0d7cf24779b8da"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:37:20.691450 master-0 kubenswrapper[29097]: I0312 18:37:20.691350 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/49835aec35bdc5feca0d7cf24779b8da-resource-dir\") pod \"49835aec35bdc5feca0d7cf24779b8da\" (UID: \"49835aec35bdc5feca0d7cf24779b8da\") " Mar 12 18:37:20.691450 master-0 kubenswrapper[29097]: I0312 18:37:20.691409 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/49835aec35bdc5feca0d7cf24779b8da-cert-dir\") pod \"49835aec35bdc5feca0d7cf24779b8da\" (UID: \"49835aec35bdc5feca0d7cf24779b8da\") " Mar 12 18:37:20.692054 master-0 kubenswrapper[29097]: I0312 18:37:20.691701 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49835aec35bdc5feca0d7cf24779b8da-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "49835aec35bdc5feca0d7cf24779b8da" (UID: "49835aec35bdc5feca0d7cf24779b8da"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:37:20.693759 master-0 kubenswrapper[29097]: I0312 18:37:20.692706 29097 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/49835aec35bdc5feca0d7cf24779b8da-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:20.693759 master-0 kubenswrapper[29097]: I0312 18:37:20.692727 29097 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/49835aec35bdc5feca0d7cf24779b8da-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:20.730770 master-0 kubenswrapper[29097]: I0312 18:37:20.730691 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49835aec35bdc5feca0d7cf24779b8da" path="/var/lib/kubelet/pods/49835aec35bdc5feca0d7cf24779b8da/volumes" Mar 12 18:37:20.869685 master-0 kubenswrapper[29097]: I0312 18:37:20.869614 29097 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="49835aec35bdc5feca0d7cf24779b8da" podUID="f1e95f2c9985261e0b98e3f11e9f2080" Mar 12 18:37:21.117013 master-0 kubenswrapper[29097]: I0312 18:37:21.116963 29097 scope.go:117] "RemoveContainer" containerID="f62d8ace6b78a3d4700c1f018543131bd0581db10be6e4a0ffe1a906b4efcd0a" Mar 12 18:37:21.150986 master-0 kubenswrapper[29097]: I0312 18:37:21.150887 29097 scope.go:117] "RemoveContainer" containerID="65937ab55749d18637d8330aa44ceee2c94d4c78aeac6055c20ae0425fb42bf6" Mar 12 18:37:21.177251 master-0 kubenswrapper[29097]: I0312 18:37:21.177183 29097 scope.go:117] "RemoveContainer" containerID="69f074324c83bb75cf9edb644e26f8a566f617056c611c320fb03fd80290ef36" Mar 12 18:37:21.224307 master-0 kubenswrapper[29097]: E0312 18:37:21.224264 29097 manager.go:1116] Failed to create existing container: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547 Mar 12 18:37:21.577572 master-0 kubenswrapper[29097]: I0312 18:37:21.577128 29097 generic.go:334] "Generic (PLEG): container finished" podID="041f65ed-8421-458e-bc4c-ffc7c6bd4660" containerID="e16003898332347835ae7c273649a43f8ef80db188d6690db59e62aadac793d9" exitCode=0 Mar 12 18:37:21.577572 master-0 kubenswrapper[29097]: I0312 18:37:21.577192 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" event={"ID":"041f65ed-8421-458e-bc4c-ffc7c6bd4660","Type":"ContainerDied","Data":"e16003898332347835ae7c273649a43f8ef80db188d6690db59e62aadac793d9"} Mar 12 18:37:22.894305 master-0 kubenswrapper[29097]: I0312 18:37:22.894267 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Mar 12 18:37:23.032775 master-0 kubenswrapper[29097]: I0312 18:37:23.032730 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/041f65ed-8421-458e-bc4c-ffc7c6bd4660-var-lock\") pod \"041f65ed-8421-458e-bc4c-ffc7c6bd4660\" (UID: \"041f65ed-8421-458e-bc4c-ffc7c6bd4660\") " Mar 12 18:37:23.032775 master-0 kubenswrapper[29097]: I0312 18:37:23.032777 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/041f65ed-8421-458e-bc4c-ffc7c6bd4660-kube-api-access\") pod \"041f65ed-8421-458e-bc4c-ffc7c6bd4660\" (UID: \"041f65ed-8421-458e-bc4c-ffc7c6bd4660\") " Mar 12 18:37:23.032994 master-0 kubenswrapper[29097]: I0312 18:37:23.032827 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/041f65ed-8421-458e-bc4c-ffc7c6bd4660-kubelet-dir\") pod \"041f65ed-8421-458e-bc4c-ffc7c6bd4660\" (UID: \"041f65ed-8421-458e-bc4c-ffc7c6bd4660\") " Mar 12 18:37:23.033035 master-0 kubenswrapper[29097]: I0312 18:37:23.033001 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/041f65ed-8421-458e-bc4c-ffc7c6bd4660-var-lock" (OuterVolumeSpecName: "var-lock") pod "041f65ed-8421-458e-bc4c-ffc7c6bd4660" (UID: "041f65ed-8421-458e-bc4c-ffc7c6bd4660"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:37:23.033198 master-0 kubenswrapper[29097]: I0312 18:37:23.033170 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/041f65ed-8421-458e-bc4c-ffc7c6bd4660-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "041f65ed-8421-458e-bc4c-ffc7c6bd4660" (UID: "041f65ed-8421-458e-bc4c-ffc7c6bd4660"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:37:23.033247 master-0 kubenswrapper[29097]: I0312 18:37:23.033195 29097 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/041f65ed-8421-458e-bc4c-ffc7c6bd4660-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:23.035740 master-0 kubenswrapper[29097]: I0312 18:37:23.035710 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/041f65ed-8421-458e-bc4c-ffc7c6bd4660-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "041f65ed-8421-458e-bc4c-ffc7c6bd4660" (UID: "041f65ed-8421-458e-bc4c-ffc7c6bd4660"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:37:23.134471 master-0 kubenswrapper[29097]: I0312 18:37:23.134358 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/041f65ed-8421-458e-bc4c-ffc7c6bd4660-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:23.134471 master-0 kubenswrapper[29097]: I0312 18:37:23.134403 29097 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/041f65ed-8421-458e-bc4c-ffc7c6bd4660-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:23.597618 master-0 kubenswrapper[29097]: I0312 18:37:23.597506 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" event={"ID":"041f65ed-8421-458e-bc4c-ffc7c6bd4660","Type":"ContainerDied","Data":"c9d68f7ba3c249e475b534e5afc0e791e4836af4ccc8f2b8446edec2e46b0b8b"} Mar 12 18:37:23.597618 master-0 kubenswrapper[29097]: I0312 18:37:23.597611 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9d68f7ba3c249e475b534e5afc0e791e4836af4ccc8f2b8446edec2e46b0b8b" Mar 12 18:37:23.597618 
master-0 kubenswrapper[29097]: I0312 18:37:23.597560 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-retry-1-master-0" Mar 12 18:37:24.494498 master-0 kubenswrapper[29097]: I0312 18:37:24.494398 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-fb478b976-xhpmp" podUID="a9d8ed6f-1332-471a-891a-9f7f8dbc78b6" containerName="console" containerID="cri-o://718140b4abb58963bee21e1495289d75b397d95cf81fe483bc008b0a0b8126f8" gracePeriod=15 Mar 12 18:37:25.104663 master-0 kubenswrapper[29097]: I0312 18:37:25.104603 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-fb478b976-xhpmp_a9d8ed6f-1332-471a-891a-9f7f8dbc78b6/console/0.log" Mar 12 18:37:25.104944 master-0 kubenswrapper[29097]: I0312 18:37:25.104711 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:37:25.172613 master-0 kubenswrapper[29097]: I0312 18:37:25.172537 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-console-oauth-config\") pod \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " Mar 12 18:37:25.172613 master-0 kubenswrapper[29097]: I0312 18:37:25.172605 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-trusted-ca-bundle\") pod \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " Mar 12 18:37:25.172883 master-0 kubenswrapper[29097]: I0312 18:37:25.172642 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-console-serving-cert\") pod \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " Mar 12 18:37:25.172883 master-0 kubenswrapper[29097]: I0312 18:37:25.172672 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-oauth-serving-cert\") pod \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " Mar 12 18:37:25.172883 master-0 kubenswrapper[29097]: I0312 18:37:25.172702 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-console-config\") pod \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " Mar 12 18:37:25.172883 master-0 kubenswrapper[29097]: I0312 18:37:25.172732 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-service-ca\") pod \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " Mar 12 18:37:25.172883 master-0 kubenswrapper[29097]: I0312 18:37:25.172757 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9tx4\" (UniqueName: \"kubernetes.io/projected/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-kube-api-access-l9tx4\") pod \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\" (UID: \"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6\") " Mar 12 18:37:25.174237 master-0 kubenswrapper[29097]: I0312 18:37:25.174199 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a9d8ed6f-1332-471a-891a-9f7f8dbc78b6" (UID: 
"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:37:25.174320 master-0 kubenswrapper[29097]: I0312 18:37:25.174264 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-service-ca" (OuterVolumeSpecName: "service-ca") pod "a9d8ed6f-1332-471a-891a-9f7f8dbc78b6" (UID: "a9d8ed6f-1332-471a-891a-9f7f8dbc78b6"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:37:25.174572 master-0 kubenswrapper[29097]: I0312 18:37:25.174478 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a9d8ed6f-1332-471a-891a-9f7f8dbc78b6" (UID: "a9d8ed6f-1332-471a-891a-9f7f8dbc78b6"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:37:25.177611 master-0 kubenswrapper[29097]: I0312 18:37:25.177384 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-console-config" (OuterVolumeSpecName: "console-config") pod "a9d8ed6f-1332-471a-891a-9f7f8dbc78b6" (UID: "a9d8ed6f-1332-471a-891a-9f7f8dbc78b6"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:37:25.178634 master-0 kubenswrapper[29097]: I0312 18:37:25.178397 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a9d8ed6f-1332-471a-891a-9f7f8dbc78b6" (UID: "a9d8ed6f-1332-471a-891a-9f7f8dbc78b6"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:37:25.178634 master-0 kubenswrapper[29097]: I0312 18:37:25.178431 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a9d8ed6f-1332-471a-891a-9f7f8dbc78b6" (UID: "a9d8ed6f-1332-471a-891a-9f7f8dbc78b6"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:37:25.179057 master-0 kubenswrapper[29097]: I0312 18:37:25.179014 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-kube-api-access-l9tx4" (OuterVolumeSpecName: "kube-api-access-l9tx4") pod "a9d8ed6f-1332-471a-891a-9f7f8dbc78b6" (UID: "a9d8ed6f-1332-471a-891a-9f7f8dbc78b6"). InnerVolumeSpecName "kube-api-access-l9tx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:37:25.273748 master-0 kubenswrapper[29097]: I0312 18:37:25.273595 29097 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:25.273748 master-0 kubenswrapper[29097]: I0312 18:37:25.273637 29097 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:25.273748 master-0 kubenswrapper[29097]: I0312 18:37:25.273647 29097 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:25.273748 master-0 kubenswrapper[29097]: I0312 18:37:25.273656 29097 reconciler_common.go:293] "Volume 
detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:25.273748 master-0 kubenswrapper[29097]: I0312 18:37:25.273667 29097 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-console-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:25.273748 master-0 kubenswrapper[29097]: I0312 18:37:25.273676 29097 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:25.273748 master-0 kubenswrapper[29097]: I0312 18:37:25.273686 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9tx4\" (UniqueName: \"kubernetes.io/projected/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6-kube-api-access-l9tx4\") on node \"master-0\" DevicePath \"\"" Mar 12 18:37:25.620153 master-0 kubenswrapper[29097]: I0312 18:37:25.620076 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-fb478b976-xhpmp_a9d8ed6f-1332-471a-891a-9f7f8dbc78b6/console/0.log" Mar 12 18:37:25.621003 master-0 kubenswrapper[29097]: I0312 18:37:25.620166 29097 generic.go:334] "Generic (PLEG): container finished" podID="a9d8ed6f-1332-471a-891a-9f7f8dbc78b6" containerID="718140b4abb58963bee21e1495289d75b397d95cf81fe483bc008b0a0b8126f8" exitCode=2 Mar 12 18:37:25.621003 master-0 kubenswrapper[29097]: I0312 18:37:25.620208 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fb478b976-xhpmp" event={"ID":"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6","Type":"ContainerDied","Data":"718140b4abb58963bee21e1495289d75b397d95cf81fe483bc008b0a0b8126f8"} Mar 12 18:37:25.621003 master-0 kubenswrapper[29097]: I0312 18:37:25.620256 29097 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-console/console-fb478b976-xhpmp" event={"ID":"a9d8ed6f-1332-471a-891a-9f7f8dbc78b6","Type":"ContainerDied","Data":"c46dc53d677723b7c24e1d6c1784e62b58852f61c48feea210a9ecdcf76ab779"} Mar 12 18:37:25.621003 master-0 kubenswrapper[29097]: I0312 18:37:25.620258 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-fb478b976-xhpmp" Mar 12 18:37:25.621003 master-0 kubenswrapper[29097]: I0312 18:37:25.620290 29097 scope.go:117] "RemoveContainer" containerID="718140b4abb58963bee21e1495289d75b397d95cf81fe483bc008b0a0b8126f8" Mar 12 18:37:25.647692 master-0 kubenswrapper[29097]: I0312 18:37:25.647631 29097 scope.go:117] "RemoveContainer" containerID="718140b4abb58963bee21e1495289d75b397d95cf81fe483bc008b0a0b8126f8" Mar 12 18:37:25.648305 master-0 kubenswrapper[29097]: E0312 18:37:25.648247 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"718140b4abb58963bee21e1495289d75b397d95cf81fe483bc008b0a0b8126f8\": container with ID starting with 718140b4abb58963bee21e1495289d75b397d95cf81fe483bc008b0a0b8126f8 not found: ID does not exist" containerID="718140b4abb58963bee21e1495289d75b397d95cf81fe483bc008b0a0b8126f8" Mar 12 18:37:25.648409 master-0 kubenswrapper[29097]: I0312 18:37:25.648305 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"718140b4abb58963bee21e1495289d75b397d95cf81fe483bc008b0a0b8126f8"} err="failed to get container status \"718140b4abb58963bee21e1495289d75b397d95cf81fe483bc008b0a0b8126f8\": rpc error: code = NotFound desc = could not find container \"718140b4abb58963bee21e1495289d75b397d95cf81fe483bc008b0a0b8126f8\": container with ID starting with 718140b4abb58963bee21e1495289d75b397d95cf81fe483bc008b0a0b8126f8 not found: ID does not exist" Mar 12 18:37:25.675452 master-0 kubenswrapper[29097]: I0312 18:37:25.675339 29097 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-console/console-fb478b976-xhpmp"] Mar 12 18:37:25.683949 master-0 kubenswrapper[29097]: I0312 18:37:25.683740 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-fb478b976-xhpmp"] Mar 12 18:37:26.735144 master-0 kubenswrapper[29097]: I0312 18:37:26.735069 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9d8ed6f-1332-471a-891a-9f7f8dbc78b6" path="/var/lib/kubelet/pods/a9d8ed6f-1332-471a-891a-9f7f8dbc78b6/volumes" Mar 12 18:37:32.720902 master-0 kubenswrapper[29097]: I0312 18:37:32.720808 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:37:32.761104 master-0 kubenswrapper[29097]: I0312 18:37:32.761011 29097 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7ec32651-18dc-477a-9b17-5b2a2d43289e" Mar 12 18:37:32.761104 master-0 kubenswrapper[29097]: I0312 18:37:32.761080 29097 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7ec32651-18dc-477a-9b17-5b2a2d43289e" Mar 12 18:37:32.779092 master-0 kubenswrapper[29097]: I0312 18:37:32.779008 29097 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:37:32.792932 master-0 kubenswrapper[29097]: I0312 18:37:32.792827 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 18:37:32.794644 master-0 kubenswrapper[29097]: I0312 18:37:32.793828 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:37:32.802189 master-0 kubenswrapper[29097]: I0312 18:37:32.802105 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 18:37:32.809208 master-0 kubenswrapper[29097]: I0312 18:37:32.809133 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 18:37:32.831132 master-0 kubenswrapper[29097]: W0312 18:37:32.831070 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1e95f2c9985261e0b98e3f11e9f2080.slice/crio-23a684137c953c3c30c302f52a91a7102d0f2ae56ce54b850d4a5c7c905927b2 WatchSource:0}: Error finding container 23a684137c953c3c30c302f52a91a7102d0f2ae56ce54b850d4a5c7c905927b2: Status 404 returned error can't find the container with id 23a684137c953c3c30c302f52a91a7102d0f2ae56ce54b850d4a5c7c905927b2 Mar 12 18:37:33.705936 master-0 kubenswrapper[29097]: I0312 18:37:33.705878 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f1e95f2c9985261e0b98e3f11e9f2080","Type":"ContainerStarted","Data":"da9184532c94ad40758e8e6adbaa0785f835e263deb18961475a1433c6619805"} Mar 12 18:37:33.705936 master-0 kubenswrapper[29097]: I0312 18:37:33.705937 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f1e95f2c9985261e0b98e3f11e9f2080","Type":"ContainerStarted","Data":"5babe9dd5360ba6ffa54e142f0e281471773f98008ede5df16847445feb74123"} Mar 12 18:37:33.706181 master-0 kubenswrapper[29097]: I0312 18:37:33.705954 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"f1e95f2c9985261e0b98e3f11e9f2080","Type":"ContainerStarted","Data":"06dc5d589b75b3012723af081836de7f1ab88f42c7aab3d0557b1c3318ad3ac4"} Mar 12 18:37:33.706181 master-0 kubenswrapper[29097]: I0312 18:37:33.705968 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f1e95f2c9985261e0b98e3f11e9f2080","Type":"ContainerStarted","Data":"23a684137c953c3c30c302f52a91a7102d0f2ae56ce54b850d4a5c7c905927b2"} Mar 12 18:37:34.744551 master-0 kubenswrapper[29097]: I0312 18:37:34.741684 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"f1e95f2c9985261e0b98e3f11e9f2080","Type":"ContainerStarted","Data":"ce0b6765947375ccffc44cf88c065997e375243c4d1085706f09a0c6a0de6e43"} Mar 12 18:37:34.758465 master-0 kubenswrapper[29097]: I0312 18:37:34.757890 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.757859371 podStartE2EDuration="2.757859371s" podCreationTimestamp="2026-03-12 18:37:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:37:34.752076297 +0000 UTC m=+494.306056454" watchObservedRunningTime="2026-03-12 18:37:34.757859371 +0000 UTC m=+494.311839498" Mar 12 18:37:42.794717 master-0 kubenswrapper[29097]: I0312 18:37:42.794654 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:37:42.794717 master-0 kubenswrapper[29097]: I0312 18:37:42.794725 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:37:42.795410 master-0 kubenswrapper[29097]: I0312 18:37:42.794872 29097 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:37:42.795410 master-0 kubenswrapper[29097]: I0312 18:37:42.794970 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:37:42.807980 master-0 kubenswrapper[29097]: I0312 18:37:42.807933 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:37:42.808573 master-0 kubenswrapper[29097]: I0312 18:37:42.808502 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:37:43.804210 master-0 kubenswrapper[29097]: I0312 18:37:43.804176 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:37:43.807374 master-0 kubenswrapper[29097]: I0312 18:37:43.807340 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 18:38:10.883889 master-0 kubenswrapper[29097]: I0312 18:38:10.883807 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-5c96ff9cd6-xch96"] Mar 12 18:38:10.885000 master-0 kubenswrapper[29097]: E0312 18:38:10.884382 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041f65ed-8421-458e-bc4c-ffc7c6bd4660" containerName="installer" Mar 12 18:38:10.885000 master-0 kubenswrapper[29097]: I0312 18:38:10.884418 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="041f65ed-8421-458e-bc4c-ffc7c6bd4660" containerName="installer" Mar 12 18:38:10.885000 master-0 kubenswrapper[29097]: E0312 18:38:10.884474 29097 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a9d8ed6f-1332-471a-891a-9f7f8dbc78b6" containerName="console" Mar 12 18:38:10.885000 master-0 kubenswrapper[29097]: I0312 18:38:10.884492 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9d8ed6f-1332-471a-891a-9f7f8dbc78b6" containerName="console" Mar 12 18:38:10.885000 master-0 kubenswrapper[29097]: I0312 18:38:10.884897 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9d8ed6f-1332-471a-891a-9f7f8dbc78b6" containerName="console" Mar 12 18:38:10.885000 master-0 kubenswrapper[29097]: I0312 18:38:10.884931 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="041f65ed-8421-458e-bc4c-ffc7c6bd4660" containerName="installer" Mar 12 18:38:10.887752 master-0 kubenswrapper[29097]: I0312 18:38:10.885907 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96" Mar 12 18:38:10.892287 master-0 kubenswrapper[29097]: I0312 18:38:10.892204 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"openshift-service-ca.crt" Mar 12 18:38:10.892621 master-0 kubenswrapper[29097]: I0312 18:38:10.892577 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config" Mar 12 18:38:10.892859 master-0 kubenswrapper[29097]: I0312 18:38:10.892824 29097 reflector.go:368] Caches populated for *v1.Secret from object-"sushy-emulator"/"os-client-config" Mar 12 18:38:10.893206 master-0 kubenswrapper[29097]: I0312 18:38:10.893168 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"kube-root-ca.crt" Mar 12 18:38:10.899913 master-0 kubenswrapper[29097]: I0312 18:38:10.899855 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-5c96ff9cd6-xch96"] Mar 12 18:38:10.949187 master-0 kubenswrapper[29097]: I0312 18:38:10.949130 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/1be0bb8e-7ef6-4556-a8c9-6c25568be40a-sushy-emulator-config\") pod \"sushy-emulator-5c96ff9cd6-xch96\" (UID: \"1be0bb8e-7ef6-4556-a8c9-6c25568be40a\") " pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96" Mar 12 18:38:10.949385 master-0 kubenswrapper[29097]: I0312 18:38:10.949237 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/1be0bb8e-7ef6-4556-a8c9-6c25568be40a-os-client-config\") pod \"sushy-emulator-5c96ff9cd6-xch96\" (UID: \"1be0bb8e-7ef6-4556-a8c9-6c25568be40a\") " pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96" Mar 12 18:38:10.949385 master-0 kubenswrapper[29097]: I0312 18:38:10.949281 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9257j\" (UniqueName: \"kubernetes.io/projected/1be0bb8e-7ef6-4556-a8c9-6c25568be40a-kube-api-access-9257j\") pod \"sushy-emulator-5c96ff9cd6-xch96\" (UID: \"1be0bb8e-7ef6-4556-a8c9-6c25568be40a\") " pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96" Mar 12 18:38:11.051114 master-0 kubenswrapper[29097]: I0312 18:38:11.051048 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/1be0bb8e-7ef6-4556-a8c9-6c25568be40a-os-client-config\") pod \"sushy-emulator-5c96ff9cd6-xch96\" (UID: \"1be0bb8e-7ef6-4556-a8c9-6c25568be40a\") " pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96" Mar 12 18:38:11.051329 master-0 kubenswrapper[29097]: I0312 18:38:11.051157 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9257j\" (UniqueName: \"kubernetes.io/projected/1be0bb8e-7ef6-4556-a8c9-6c25568be40a-kube-api-access-9257j\") pod \"sushy-emulator-5c96ff9cd6-xch96\" (UID: \"1be0bb8e-7ef6-4556-a8c9-6c25568be40a\") " pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96" Mar 
12 18:38:11.051329 master-0 kubenswrapper[29097]: I0312 18:38:11.051273 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/1be0bb8e-7ef6-4556-a8c9-6c25568be40a-sushy-emulator-config\") pod \"sushy-emulator-5c96ff9cd6-xch96\" (UID: \"1be0bb8e-7ef6-4556-a8c9-6c25568be40a\") " pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96" Mar 12 18:38:11.053815 master-0 kubenswrapper[29097]: I0312 18:38:11.053768 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/1be0bb8e-7ef6-4556-a8c9-6c25568be40a-sushy-emulator-config\") pod \"sushy-emulator-5c96ff9cd6-xch96\" (UID: \"1be0bb8e-7ef6-4556-a8c9-6c25568be40a\") " pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96" Mar 12 18:38:11.055052 master-0 kubenswrapper[29097]: I0312 18:38:11.055018 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/1be0bb8e-7ef6-4556-a8c9-6c25568be40a-os-client-config\") pod \"sushy-emulator-5c96ff9cd6-xch96\" (UID: \"1be0bb8e-7ef6-4556-a8c9-6c25568be40a\") " pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96" Mar 12 18:38:11.066704 master-0 kubenswrapper[29097]: I0312 18:38:11.066668 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9257j\" (UniqueName: \"kubernetes.io/projected/1be0bb8e-7ef6-4556-a8c9-6c25568be40a-kube-api-access-9257j\") pod \"sushy-emulator-5c96ff9cd6-xch96\" (UID: \"1be0bb8e-7ef6-4556-a8c9-6c25568be40a\") " pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96" Mar 12 18:38:11.275338 master-0 kubenswrapper[29097]: I0312 18:38:11.275209 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96" Mar 12 18:38:11.738293 master-0 kubenswrapper[29097]: I0312 18:38:11.738226 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-5c96ff9cd6-xch96"] Mar 12 18:38:11.740504 master-0 kubenswrapper[29097]: W0312 18:38:11.740416 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1be0bb8e_7ef6_4556_a8c9_6c25568be40a.slice/crio-85f4d901e7104d152555695405940ff4c9d8eca5aedc8e3ef426666936c16672 WatchSource:0}: Error finding container 85f4d901e7104d152555695405940ff4c9d8eca5aedc8e3ef426666936c16672: Status 404 returned error can't find the container with id 85f4d901e7104d152555695405940ff4c9d8eca5aedc8e3ef426666936c16672 Mar 12 18:38:11.743968 master-0 kubenswrapper[29097]: I0312 18:38:11.743923 29097 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 18:38:12.036982 master-0 kubenswrapper[29097]: I0312 18:38:12.036862 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96" event={"ID":"1be0bb8e-7ef6-4556-a8c9-6c25568be40a","Type":"ContainerStarted","Data":"85f4d901e7104d152555695405940ff4c9d8eca5aedc8e3ef426666936c16672"} Mar 12 18:38:21.194841 master-0 kubenswrapper[29097]: E0312 18:38:21.194718 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547 Mar 12 18:38:22.118987 master-0 kubenswrapper[29097]: I0312 18:38:22.118914 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96" event={"ID":"1be0bb8e-7ef6-4556-a8c9-6c25568be40a","Type":"ContainerStarted","Data":"850edff1529a75c2049c177ea282d6c863c40eb3d9fca5fc399a112e3ea1afe6"} Mar 12 18:38:22.138613 master-0 kubenswrapper[29097]: I0312 18:38:22.138540 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96" podStartSLOduration=2.070107873 podStartE2EDuration="12.13850184s" podCreationTimestamp="2026-03-12 18:38:10 +0000 UTC" firstStartedPulling="2026-03-12 18:38:11.743852664 +0000 UTC m=+531.297832801" lastFinishedPulling="2026-03-12 18:38:21.812246631 +0000 UTC m=+541.366226768" observedRunningTime="2026-03-12 18:38:22.135727941 +0000 UTC m=+541.689708058" watchObservedRunningTime="2026-03-12 18:38:22.13850184 +0000 UTC m=+541.692481937" Mar 12 18:38:31.276582 master-0 kubenswrapper[29097]: I0312 18:38:31.276460 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96" Mar 12 18:38:31.277682 master-0 kubenswrapper[29097]: I0312 18:38:31.276752 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96" Mar 12 18:38:31.293119 master-0 kubenswrapper[29097]: I0312 18:38:31.293050 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96" Mar 12 18:38:32.217643 master-0 kubenswrapper[29097]: I0312 18:38:32.217562 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96" Mar 12 18:38:51.417943 master-0 kubenswrapper[29097]: I0312 18:38:51.417849 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-poller-7bfbdc6697-ss4nh"] Mar 12 18:38:51.419751 master-0 kubenswrapper[29097]: I0312 18:38:51.419699 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-7bfbdc6697-ss4nh" Mar 12 18:38:51.567610 master-0 kubenswrapper[29097]: I0312 18:38:51.567510 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p96l\" (UniqueName: \"kubernetes.io/projected/bd4718e5-aad7-4879-9f63-9a05bd0992fb-kube-api-access-8p96l\") pod \"nova-console-poller-7bfbdc6697-ss4nh\" (UID: \"bd4718e5-aad7-4879-9f63-9a05bd0992fb\") " pod="sushy-emulator/nova-console-poller-7bfbdc6697-ss4nh" Mar 12 18:38:51.567835 master-0 kubenswrapper[29097]: I0312 18:38:51.567737 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/bd4718e5-aad7-4879-9f63-9a05bd0992fb-os-client-config\") pod \"nova-console-poller-7bfbdc6697-ss4nh\" (UID: \"bd4718e5-aad7-4879-9f63-9a05bd0992fb\") " pod="sushy-emulator/nova-console-poller-7bfbdc6697-ss4nh" Mar 12 18:38:51.669137 master-0 kubenswrapper[29097]: I0312 18:38:51.668966 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/bd4718e5-aad7-4879-9f63-9a05bd0992fb-os-client-config\") pod \"nova-console-poller-7bfbdc6697-ss4nh\" (UID: \"bd4718e5-aad7-4879-9f63-9a05bd0992fb\") " pod="sushy-emulator/nova-console-poller-7bfbdc6697-ss4nh" Mar 12 18:38:51.669137 master-0 kubenswrapper[29097]: I0312 18:38:51.669086 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p96l\" (UniqueName: \"kubernetes.io/projected/bd4718e5-aad7-4879-9f63-9a05bd0992fb-kube-api-access-8p96l\") pod \"nova-console-poller-7bfbdc6697-ss4nh\" (UID: \"bd4718e5-aad7-4879-9f63-9a05bd0992fb\") " pod="sushy-emulator/nova-console-poller-7bfbdc6697-ss4nh" Mar 12 18:38:51.674047 master-0 kubenswrapper[29097]: I0312 18:38:51.673983 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"os-client-config\" (UniqueName: \"kubernetes.io/secret/bd4718e5-aad7-4879-9f63-9a05bd0992fb-os-client-config\") pod \"nova-console-poller-7bfbdc6697-ss4nh\" (UID: \"bd4718e5-aad7-4879-9f63-9a05bd0992fb\") " pod="sushy-emulator/nova-console-poller-7bfbdc6697-ss4nh" Mar 12 18:38:51.697605 master-0 kubenswrapper[29097]: I0312 18:38:51.697466 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-7bfbdc6697-ss4nh"] Mar 12 18:38:51.703613 master-0 kubenswrapper[29097]: I0312 18:38:51.703550 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p96l\" (UniqueName: \"kubernetes.io/projected/bd4718e5-aad7-4879-9f63-9a05bd0992fb-kube-api-access-8p96l\") pod \"nova-console-poller-7bfbdc6697-ss4nh\" (UID: \"bd4718e5-aad7-4879-9f63-9a05bd0992fb\") " pod="sushy-emulator/nova-console-poller-7bfbdc6697-ss4nh" Mar 12 18:38:51.733482 master-0 kubenswrapper[29097]: I0312 18:38:51.733385 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-7bfbdc6697-ss4nh" Mar 12 18:38:52.505055 master-0 kubenswrapper[29097]: I0312 18:38:52.504990 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-7bfbdc6697-ss4nh"] Mar 12 18:38:52.507690 master-0 kubenswrapper[29097]: W0312 18:38:52.507636 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd4718e5_aad7_4879_9f63_9a05bd0992fb.slice/crio-28427700a0f7bc30eb0fc959996fdd8dd8a9ecfa79f52d9480acf6ce8b114c44 WatchSource:0}: Error finding container 28427700a0f7bc30eb0fc959996fdd8dd8a9ecfa79f52d9480acf6ce8b114c44: Status 404 returned error can't find the container with id 28427700a0f7bc30eb0fc959996fdd8dd8a9ecfa79f52d9480acf6ce8b114c44 Mar 12 18:38:53.406608 master-0 kubenswrapper[29097]: I0312 18:38:53.406559 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-7bfbdc6697-ss4nh" event={"ID":"bd4718e5-aad7-4879-9f63-9a05bd0992fb","Type":"ContainerStarted","Data":"28427700a0f7bc30eb0fc959996fdd8dd8a9ecfa79f52d9480acf6ce8b114c44"} Mar 12 18:38:58.456554 master-0 kubenswrapper[29097]: I0312 18:38:58.456369 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-7bfbdc6697-ss4nh" event={"ID":"bd4718e5-aad7-4879-9f63-9a05bd0992fb","Type":"ContainerStarted","Data":"cac822bfbfcadc72950f1550329534255239a1f5fe0796b6c7f7a9bc59f365e4"} Mar 12 18:38:58.456554 master-0 kubenswrapper[29097]: I0312 18:38:58.456456 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-7bfbdc6697-ss4nh" event={"ID":"bd4718e5-aad7-4879-9f63-9a05bd0992fb","Type":"ContainerStarted","Data":"4660253cd2020681a8a48e68f72b49986cf6546c1efa8f6705d74f61d2f81b80"} Mar 12 18:38:58.549664 master-0 kubenswrapper[29097]: I0312 18:38:58.549503 29097 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="sushy-emulator/nova-console-poller-7bfbdc6697-ss4nh" podStartSLOduration=1.9414376469999999 podStartE2EDuration="7.549477996s" podCreationTimestamp="2026-03-12 18:38:51 +0000 UTC" firstStartedPulling="2026-03-12 18:38:52.511470899 +0000 UTC m=+572.065450996" lastFinishedPulling="2026-03-12 18:38:58.119511218 +0000 UTC m=+577.673491345" observedRunningTime="2026-03-12 18:38:58.544354758 +0000 UTC m=+578.098334915" watchObservedRunningTime="2026-03-12 18:38:58.549477996 +0000 UTC m=+578.103458103" Mar 12 18:39:21.190684 master-0 kubenswrapper[29097]: E0312 18:39:21.190618 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547 Mar 12 18:39:23.991485 master-0 kubenswrapper[29097]: I0312 18:39:23.991375 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-recorder-84787f6d97-649qb"] Mar 12 18:39:23.993275 master-0 kubenswrapper[29097]: I0312 18:39:23.993103 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-recorder-84787f6d97-649qb"
Mar 12 18:39:24.008999 master-0 kubenswrapper[29097]: I0312 18:39:24.008935 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-84787f6d97-649qb"]
Mar 12 18:39:24.096608 master-0 kubenswrapper[29097]: I0312 18:39:24.096378 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbq9h\" (UniqueName: \"kubernetes.io/projected/8be8e5ff-ec75-4eec-a9f3-e42975793d28-kube-api-access-dbq9h\") pod \"nova-console-recorder-84787f6d97-649qb\" (UID: \"8be8e5ff-ec75-4eec-a9f3-e42975793d28\") " pod="sushy-emulator/nova-console-recorder-84787f6d97-649qb"
Mar 12 18:39:24.096608 master-0 kubenswrapper[29097]: I0312 18:39:24.096454 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/8be8e5ff-ec75-4eec-a9f3-e42975793d28-nova-console-recordings-pv\") pod \"nova-console-recorder-84787f6d97-649qb\" (UID: \"8be8e5ff-ec75-4eec-a9f3-e42975793d28\") " pod="sushy-emulator/nova-console-recorder-84787f6d97-649qb"
Mar 12 18:39:24.096608 master-0 kubenswrapper[29097]: I0312 18:39:24.096498 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/8be8e5ff-ec75-4eec-a9f3-e42975793d28-os-client-config\") pod \"nova-console-recorder-84787f6d97-649qb\" (UID: \"8be8e5ff-ec75-4eec-a9f3-e42975793d28\") " pod="sushy-emulator/nova-console-recorder-84787f6d97-649qb"
Mar 12 18:39:24.198058 master-0 kubenswrapper[29097]: I0312 18:39:24.197995 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbq9h\" (UniqueName: \"kubernetes.io/projected/8be8e5ff-ec75-4eec-a9f3-e42975793d28-kube-api-access-dbq9h\") pod \"nova-console-recorder-84787f6d97-649qb\" (UID: \"8be8e5ff-ec75-4eec-a9f3-e42975793d28\") " pod="sushy-emulator/nova-console-recorder-84787f6d97-649qb"
Mar 12 18:39:24.198058 master-0 kubenswrapper[29097]: I0312 18:39:24.198061 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/8be8e5ff-ec75-4eec-a9f3-e42975793d28-nova-console-recordings-pv\") pod \"nova-console-recorder-84787f6d97-649qb\" (UID: \"8be8e5ff-ec75-4eec-a9f3-e42975793d28\") " pod="sushy-emulator/nova-console-recorder-84787f6d97-649qb"
Mar 12 18:39:24.198329 master-0 kubenswrapper[29097]: I0312 18:39:24.198107 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/8be8e5ff-ec75-4eec-a9f3-e42975793d28-os-client-config\") pod \"nova-console-recorder-84787f6d97-649qb\" (UID: \"8be8e5ff-ec75-4eec-a9f3-e42975793d28\") " pod="sushy-emulator/nova-console-recorder-84787f6d97-649qb"
Mar 12 18:39:24.203473 master-0 kubenswrapper[29097]: I0312 18:39:24.203423 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/8be8e5ff-ec75-4eec-a9f3-e42975793d28-os-client-config\") pod \"nova-console-recorder-84787f6d97-649qb\" (UID: \"8be8e5ff-ec75-4eec-a9f3-e42975793d28\") " pod="sushy-emulator/nova-console-recorder-84787f6d97-649qb"
Mar 12 18:39:24.219378 master-0 kubenswrapper[29097]: I0312 18:39:24.219324 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbq9h\" (UniqueName: \"kubernetes.io/projected/8be8e5ff-ec75-4eec-a9f3-e42975793d28-kube-api-access-dbq9h\") pod \"nova-console-recorder-84787f6d97-649qb\" (UID: \"8be8e5ff-ec75-4eec-a9f3-e42975793d28\") " pod="sushy-emulator/nova-console-recorder-84787f6d97-649qb"
Mar 12 18:39:24.880990 master-0 kubenswrapper[29097]: I0312 18:39:24.880918 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/8be8e5ff-ec75-4eec-a9f3-e42975793d28-nova-console-recordings-pv\") pod \"nova-console-recorder-84787f6d97-649qb\" (UID: \"8be8e5ff-ec75-4eec-a9f3-e42975793d28\") " pod="sushy-emulator/nova-console-recorder-84787f6d97-649qb"
Mar 12 18:39:24.912100 master-0 kubenswrapper[29097]: I0312 18:39:24.912027 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-84787f6d97-649qb"
Mar 12 18:39:25.383437 master-0 kubenswrapper[29097]: I0312 18:39:25.383368 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-84787f6d97-649qb"]
Mar 12 18:39:25.385733 master-0 kubenswrapper[29097]: W0312 18:39:25.385665 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8be8e5ff_ec75_4eec_a9f3_e42975793d28.slice/crio-eb40cd8590ca13985b295301c8bfe54734cc5fde6c8cd19cb4db3080520ca39c WatchSource:0}: Error finding container eb40cd8590ca13985b295301c8bfe54734cc5fde6c8cd19cb4db3080520ca39c: Status 404 returned error can't find the container with id eb40cd8590ca13985b295301c8bfe54734cc5fde6c8cd19cb4db3080520ca39c
Mar 12 18:39:25.751693 master-0 kubenswrapper[29097]: I0312 18:39:25.751503 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-84787f6d97-649qb" event={"ID":"8be8e5ff-ec75-4eec-a9f3-e42975793d28","Type":"ContainerStarted","Data":"eb40cd8590ca13985b295301c8bfe54734cc5fde6c8cd19cb4db3080520ca39c"}
Mar 12 18:39:33.810291 master-0 kubenswrapper[29097]: I0312 18:39:33.810151 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-84787f6d97-649qb" event={"ID":"8be8e5ff-ec75-4eec-a9f3-e42975793d28","Type":"ContainerStarted","Data":"c6f86c32bca7cda18698a397ab220267e402aa17efc38f72afc32b9f779a9136"}
Mar 12 18:39:34.818579 master-0 kubenswrapper[29097]: I0312 18:39:34.818498 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-84787f6d97-649qb" event={"ID":"8be8e5ff-ec75-4eec-a9f3-e42975793d28","Type":"ContainerStarted","Data":"6df5a7df161992eefb256c40e487afe9bb489c8e1c6179978d1a8d16e3c758ae"}
Mar 12 18:39:34.856833 master-0 kubenswrapper[29097]: I0312 18:39:34.856691 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-recorder-84787f6d97-649qb" podStartSLOduration=3.126763328 podStartE2EDuration="11.856654147s" podCreationTimestamp="2026-03-12 18:39:23 +0000 UTC" firstStartedPulling="2026-03-12 18:39:25.389496113 +0000 UTC m=+604.943476220" lastFinishedPulling="2026-03-12 18:39:34.119386912 +0000 UTC m=+613.673367039" observedRunningTime="2026-03-12 18:39:34.837427657 +0000 UTC m=+614.391407824" watchObservedRunningTime="2026-03-12 18:39:34.856654147 +0000 UTC m=+614.410634314"
Mar 12 18:40:02.750613 master-0 kubenswrapper[29097]: I0312 18:40:02.750549 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5"]
Mar 12 18:40:02.752253 master-0 kubenswrapper[29097]: I0312 18:40:02.752220 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5"
Mar 12 18:40:02.767862 master-0 kubenswrapper[29097]: I0312 18:40:02.767818 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5"]
Mar 12 18:40:02.866659 master-0 kubenswrapper[29097]: I0312 18:40:02.866585 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvbf6\" (UniqueName: \"kubernetes.io/projected/89b32560-2166-48aa-a8f5-3f0cf9507a2f-kube-api-access-hvbf6\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5\" (UID: \"89b32560-2166-48aa-a8f5-3f0cf9507a2f\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5"
Mar 12 18:40:02.866659 master-0 kubenswrapper[29097]: I0312 18:40:02.866663 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/89b32560-2166-48aa-a8f5-3f0cf9507a2f-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5\" (UID: \"89b32560-2166-48aa-a8f5-3f0cf9507a2f\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5"
Mar 12 18:40:02.866951 master-0 kubenswrapper[29097]: I0312 18:40:02.866742 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/89b32560-2166-48aa-a8f5-3f0cf9507a2f-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5\" (UID: \"89b32560-2166-48aa-a8f5-3f0cf9507a2f\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5"
Mar 12 18:40:02.968769 master-0 kubenswrapper[29097]: I0312 18:40:02.968710 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvbf6\" (UniqueName: \"kubernetes.io/projected/89b32560-2166-48aa-a8f5-3f0cf9507a2f-kube-api-access-hvbf6\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5\" (UID: \"89b32560-2166-48aa-a8f5-3f0cf9507a2f\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5"
Mar 12 18:40:02.968974 master-0 kubenswrapper[29097]: I0312 18:40:02.968776 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/89b32560-2166-48aa-a8f5-3f0cf9507a2f-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5\" (UID: \"89b32560-2166-48aa-a8f5-3f0cf9507a2f\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5"
Mar 12 18:40:02.968974 master-0 kubenswrapper[29097]: I0312 18:40:02.968803 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/89b32560-2166-48aa-a8f5-3f0cf9507a2f-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5\" (UID: \"89b32560-2166-48aa-a8f5-3f0cf9507a2f\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5"
Mar 12 18:40:02.969472 master-0 kubenswrapper[29097]: I0312 18:40:02.969426 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/89b32560-2166-48aa-a8f5-3f0cf9507a2f-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5\" (UID: \"89b32560-2166-48aa-a8f5-3f0cf9507a2f\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5"
Mar 12 18:40:02.969472 master-0 kubenswrapper[29097]: I0312 18:40:02.969461 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/89b32560-2166-48aa-a8f5-3f0cf9507a2f-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5\" (UID: \"89b32560-2166-48aa-a8f5-3f0cf9507a2f\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5"
Mar 12 18:40:02.989328 master-0 kubenswrapper[29097]: I0312 18:40:02.989238 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvbf6\" (UniqueName: \"kubernetes.io/projected/89b32560-2166-48aa-a8f5-3f0cf9507a2f-kube-api-access-hvbf6\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5\" (UID: \"89b32560-2166-48aa-a8f5-3f0cf9507a2f\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5"
Mar 12 18:40:03.069763 master-0 kubenswrapper[29097]: I0312 18:40:03.069689 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5"
Mar 12 18:40:03.502059 master-0 kubenswrapper[29097]: I0312 18:40:03.502006 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5"]
Mar 12 18:40:03.502603 master-0 kubenswrapper[29097]: W0312 18:40:03.502562 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89b32560_2166_48aa_a8f5_3f0cf9507a2f.slice/crio-775e9655fdbc8a6c325d78c16cb7c845ae4d7906604c823bb39174f1f48c88d4 WatchSource:0}: Error finding container 775e9655fdbc8a6c325d78c16cb7c845ae4d7906604c823bb39174f1f48c88d4: Status 404 returned error can't find the container with id 775e9655fdbc8a6c325d78c16cb7c845ae4d7906604c823bb39174f1f48c88d4
Mar 12 18:40:04.083973 master-0 kubenswrapper[29097]: I0312 18:40:04.083920 29097 generic.go:334] "Generic (PLEG): container finished" podID="89b32560-2166-48aa-a8f5-3f0cf9507a2f" containerID="365385d81f6b965acbb8c8cde099d6da6cbf8247e12ebaf241ed8b1acce38b5d" exitCode=0
Mar 12 18:40:04.083973 master-0 kubenswrapper[29097]: I0312 18:40:04.083964 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5" event={"ID":"89b32560-2166-48aa-a8f5-3f0cf9507a2f","Type":"ContainerDied","Data":"365385d81f6b965acbb8c8cde099d6da6cbf8247e12ebaf241ed8b1acce38b5d"}
Mar 12 18:40:04.084660 master-0 kubenswrapper[29097]: I0312 18:40:04.083990 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5" event={"ID":"89b32560-2166-48aa-a8f5-3f0cf9507a2f","Type":"ContainerStarted","Data":"775e9655fdbc8a6c325d78c16cb7c845ae4d7906604c823bb39174f1f48c88d4"}
Mar 12 18:40:06.101606 master-0 kubenswrapper[29097]: I0312 18:40:06.101541 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5" event={"ID":"89b32560-2166-48aa-a8f5-3f0cf9507a2f","Type":"ContainerStarted","Data":"775a20e4313d4da8f52780f802dfd7d0083de0c5539f797102c622b7001669fe"}
Mar 12 18:40:07.114033 master-0 kubenswrapper[29097]: I0312 18:40:07.113954 29097 generic.go:334] "Generic (PLEG): container finished" podID="89b32560-2166-48aa-a8f5-3f0cf9507a2f" containerID="775a20e4313d4da8f52780f802dfd7d0083de0c5539f797102c622b7001669fe" exitCode=0
Mar 12 18:40:07.114033 master-0 kubenswrapper[29097]: I0312 18:40:07.114010 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5" event={"ID":"89b32560-2166-48aa-a8f5-3f0cf9507a2f","Type":"ContainerDied","Data":"775a20e4313d4da8f52780f802dfd7d0083de0c5539f797102c622b7001669fe"}
Mar 12 18:40:08.124303 master-0 kubenswrapper[29097]: I0312 18:40:08.124197 29097 generic.go:334] "Generic (PLEG): container finished" podID="89b32560-2166-48aa-a8f5-3f0cf9507a2f" containerID="0dfd788fe98823e0d1bcf8dcb56c4248a5c280e34c0cda932982bc7c041eab33" exitCode=0
Mar 12 18:40:08.124303 master-0 kubenswrapper[29097]: I0312 18:40:08.124278 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5" event={"ID":"89b32560-2166-48aa-a8f5-3f0cf9507a2f","Type":"ContainerDied","Data":"0dfd788fe98823e0d1bcf8dcb56c4248a5c280e34c0cda932982bc7c041eab33"}
Mar 12 18:40:09.515232 master-0 kubenswrapper[29097]: I0312 18:40:09.515167 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5"
Mar 12 18:40:09.598768 master-0 kubenswrapper[29097]: I0312 18:40:09.598681 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/89b32560-2166-48aa-a8f5-3f0cf9507a2f-bundle\") pod \"89b32560-2166-48aa-a8f5-3f0cf9507a2f\" (UID: \"89b32560-2166-48aa-a8f5-3f0cf9507a2f\") "
Mar 12 18:40:09.599032 master-0 kubenswrapper[29097]: I0312 18:40:09.598852 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvbf6\" (UniqueName: \"kubernetes.io/projected/89b32560-2166-48aa-a8f5-3f0cf9507a2f-kube-api-access-hvbf6\") pod \"89b32560-2166-48aa-a8f5-3f0cf9507a2f\" (UID: \"89b32560-2166-48aa-a8f5-3f0cf9507a2f\") "
Mar 12 18:40:09.599032 master-0 kubenswrapper[29097]: I0312 18:40:09.598944 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/89b32560-2166-48aa-a8f5-3f0cf9507a2f-util\") pod \"89b32560-2166-48aa-a8f5-3f0cf9507a2f\" (UID: \"89b32560-2166-48aa-a8f5-3f0cf9507a2f\") "
Mar 12 18:40:09.599831 master-0 kubenswrapper[29097]: I0312 18:40:09.599777 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89b32560-2166-48aa-a8f5-3f0cf9507a2f-bundle" (OuterVolumeSpecName: "bundle") pod "89b32560-2166-48aa-a8f5-3f0cf9507a2f" (UID: "89b32560-2166-48aa-a8f5-3f0cf9507a2f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 18:40:09.607403 master-0 kubenswrapper[29097]: I0312 18:40:09.607355 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89b32560-2166-48aa-a8f5-3f0cf9507a2f-kube-api-access-hvbf6" (OuterVolumeSpecName: "kube-api-access-hvbf6") pod "89b32560-2166-48aa-a8f5-3f0cf9507a2f" (UID: "89b32560-2166-48aa-a8f5-3f0cf9507a2f"). InnerVolumeSpecName "kube-api-access-hvbf6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:40:09.617999 master-0 kubenswrapper[29097]: I0312 18:40:09.617930 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89b32560-2166-48aa-a8f5-3f0cf9507a2f-util" (OuterVolumeSpecName: "util") pod "89b32560-2166-48aa-a8f5-3f0cf9507a2f" (UID: "89b32560-2166-48aa-a8f5-3f0cf9507a2f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 18:40:09.700965 master-0 kubenswrapper[29097]: I0312 18:40:09.700875 29097 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/89b32560-2166-48aa-a8f5-3f0cf9507a2f-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 18:40:09.700965 master-0 kubenswrapper[29097]: I0312 18:40:09.700919 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvbf6\" (UniqueName: \"kubernetes.io/projected/89b32560-2166-48aa-a8f5-3f0cf9507a2f-kube-api-access-hvbf6\") on node \"master-0\" DevicePath \"\""
Mar 12 18:40:09.700965 master-0 kubenswrapper[29097]: I0312 18:40:09.700934 29097 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/89b32560-2166-48aa-a8f5-3f0cf9507a2f-util\") on node \"master-0\" DevicePath \"\""
Mar 12 18:40:10.147302 master-0 kubenswrapper[29097]: I0312 18:40:10.147211 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5" event={"ID":"89b32560-2166-48aa-a8f5-3f0cf9507a2f","Type":"ContainerDied","Data":"775e9655fdbc8a6c325d78c16cb7c845ae4d7906604c823bb39174f1f48c88d4"}
Mar 12 18:40:10.147555 master-0 kubenswrapper[29097]: I0312 18:40:10.147305 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xjfw5"
Mar 12 18:40:10.147555 master-0 kubenswrapper[29097]: I0312 18:40:10.147302 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="775e9655fdbc8a6c325d78c16cb7c845ae4d7906604c823bb39174f1f48c88d4"
Mar 12 18:40:17.367099 master-0 kubenswrapper[29097]: I0312 18:40:17.367019 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-575866c55b-kl7nz"]
Mar 12 18:40:17.367911 master-0 kubenswrapper[29097]: E0312 18:40:17.367554 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b32560-2166-48aa-a8f5-3f0cf9507a2f" containerName="pull"
Mar 12 18:40:17.367911 master-0 kubenswrapper[29097]: I0312 18:40:17.367575 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b32560-2166-48aa-a8f5-3f0cf9507a2f" containerName="pull"
Mar 12 18:40:17.367911 master-0 kubenswrapper[29097]: E0312 18:40:17.367603 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b32560-2166-48aa-a8f5-3f0cf9507a2f" containerName="extract"
Mar 12 18:40:17.367911 master-0 kubenswrapper[29097]: I0312 18:40:17.367612 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b32560-2166-48aa-a8f5-3f0cf9507a2f" containerName="extract"
Mar 12 18:40:17.367911 master-0 kubenswrapper[29097]: E0312 18:40:17.367634 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b32560-2166-48aa-a8f5-3f0cf9507a2f" containerName="util"
Mar 12 18:40:17.367911 master-0 kubenswrapper[29097]: I0312 18:40:17.367643 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b32560-2166-48aa-a8f5-3f0cf9507a2f" containerName="util"
Mar 12 18:40:17.367911 master-0 kubenswrapper[29097]: I0312 18:40:17.367878 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="89b32560-2166-48aa-a8f5-3f0cf9507a2f" containerName="extract"
Mar 12 18:40:17.368809 master-0 kubenswrapper[29097]: I0312 18:40:17.368788 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-575866c55b-kl7nz"
Mar 12 18:40:17.371442 master-0 kubenswrapper[29097]: I0312 18:40:17.371409 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert"
Mar 12 18:40:17.371939 master-0 kubenswrapper[29097]: I0312 18:40:17.371885 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert"
Mar 12 18:40:17.372106 master-0 kubenswrapper[29097]: I0312 18:40:17.372080 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert"
Mar 12 18:40:17.372269 master-0 kubenswrapper[29097]: I0312 18:40:17.372251 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt"
Mar 12 18:40:17.372478 master-0 kubenswrapper[29097]: I0312 18:40:17.372318 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt"
Mar 12 18:40:17.385096 master-0 kubenswrapper[29097]: I0312 18:40:17.385012 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-575866c55b-kl7nz"]
Mar 12 18:40:17.458960 master-0 kubenswrapper[29097]: I0312 18:40:17.458085 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0-metrics-cert\") pod \"lvms-operator-575866c55b-kl7nz\" (UID: \"193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0\") " pod="openshift-storage/lvms-operator-575866c55b-kl7nz"
Mar 12 18:40:17.458960 master-0 kubenswrapper[29097]: I0312 18:40:17.458174 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0-webhook-cert\") pod \"lvms-operator-575866c55b-kl7nz\" (UID: \"193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0\") " pod="openshift-storage/lvms-operator-575866c55b-kl7nz"
Mar 12 18:40:17.458960 master-0 kubenswrapper[29097]: I0312 18:40:17.458364 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vxt8\" (UniqueName: \"kubernetes.io/projected/193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0-kube-api-access-6vxt8\") pod \"lvms-operator-575866c55b-kl7nz\" (UID: \"193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0\") " pod="openshift-storage/lvms-operator-575866c55b-kl7nz"
Mar 12 18:40:17.458960 master-0 kubenswrapper[29097]: I0312 18:40:17.458660 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0-socket-dir\") pod \"lvms-operator-575866c55b-kl7nz\" (UID: \"193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0\") " pod="openshift-storage/lvms-operator-575866c55b-kl7nz"
Mar 12 18:40:17.458960 master-0 kubenswrapper[29097]: I0312 18:40:17.458829 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0-apiservice-cert\") pod \"lvms-operator-575866c55b-kl7nz\" (UID: \"193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0\") " pod="openshift-storage/lvms-operator-575866c55b-kl7nz"
Mar 12 18:40:17.560932 master-0 kubenswrapper[29097]: I0312 18:40:17.560855 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0-metrics-cert\") pod \"lvms-operator-575866c55b-kl7nz\" (UID: \"193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0\") " pod="openshift-storage/lvms-operator-575866c55b-kl7nz"
Mar 12 18:40:17.561286 master-0 kubenswrapper[29097]: I0312 18:40:17.560964 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0-webhook-cert\") pod \"lvms-operator-575866c55b-kl7nz\" (UID: \"193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0\") " pod="openshift-storage/lvms-operator-575866c55b-kl7nz"
Mar 12 18:40:17.561286 master-0 kubenswrapper[29097]: I0312 18:40:17.561051 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vxt8\" (UniqueName: \"kubernetes.io/projected/193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0-kube-api-access-6vxt8\") pod \"lvms-operator-575866c55b-kl7nz\" (UID: \"193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0\") " pod="openshift-storage/lvms-operator-575866c55b-kl7nz"
Mar 12 18:40:17.561286 master-0 kubenswrapper[29097]: I0312 18:40:17.561132 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0-socket-dir\") pod \"lvms-operator-575866c55b-kl7nz\" (UID: \"193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0\") " pod="openshift-storage/lvms-operator-575866c55b-kl7nz"
Mar 12 18:40:17.561957 master-0 kubenswrapper[29097]: I0312 18:40:17.561906 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0-apiservice-cert\") pod \"lvms-operator-575866c55b-kl7nz\" (UID: \"193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0\") " pod="openshift-storage/lvms-operator-575866c55b-kl7nz"
Mar 12 18:40:17.562109 master-0 kubenswrapper[29097]: I0312 18:40:17.562060 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0-socket-dir\") pod \"lvms-operator-575866c55b-kl7nz\" (UID: \"193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0\") " pod="openshift-storage/lvms-operator-575866c55b-kl7nz"
Mar 12 18:40:17.565479 master-0 kubenswrapper[29097]: I0312 18:40:17.565449 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0-metrics-cert\") pod \"lvms-operator-575866c55b-kl7nz\" (UID: \"193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0\") " pod="openshift-storage/lvms-operator-575866c55b-kl7nz"
Mar 12 18:40:17.569100 master-0 kubenswrapper[29097]: I0312 18:40:17.569064 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0-apiservice-cert\") pod \"lvms-operator-575866c55b-kl7nz\" (UID: \"193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0\") " pod="openshift-storage/lvms-operator-575866c55b-kl7nz"
Mar 12 18:40:17.569255 master-0 kubenswrapper[29097]: I0312 18:40:17.569220 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0-webhook-cert\") pod \"lvms-operator-575866c55b-kl7nz\" (UID: \"193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0\") " pod="openshift-storage/lvms-operator-575866c55b-kl7nz"
Mar 12 18:40:17.577758 master-0 kubenswrapper[29097]: I0312 18:40:17.577725 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vxt8\" (UniqueName: \"kubernetes.io/projected/193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0-kube-api-access-6vxt8\") pod \"lvms-operator-575866c55b-kl7nz\" (UID: \"193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0\") " pod="openshift-storage/lvms-operator-575866c55b-kl7nz"
Mar 12 18:40:17.693905 master-0 kubenswrapper[29097]: I0312 18:40:17.693731 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-575866c55b-kl7nz"
Mar 12 18:40:18.121942 master-0 kubenswrapper[29097]: I0312 18:40:18.121889 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-575866c55b-kl7nz"]
Mar 12 18:40:18.885233 master-0 kubenswrapper[29097]: I0312 18:40:18.885146 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-575866c55b-kl7nz" event={"ID":"193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0","Type":"ContainerStarted","Data":"5056aaac7b6b99b06450891442c0cadf08594d2b355a598e6bf9c29c438bbb7d"}
Mar 12 18:40:21.200332 master-0 kubenswrapper[29097]: E0312 18:40:21.200273 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547
Mar 12 18:40:23.939698 master-0 kubenswrapper[29097]: I0312 18:40:23.939351 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-575866c55b-kl7nz" event={"ID":"193fa4bf-df77-42ea-8f0f-e3c7dcaf39c0","Type":"ContainerStarted","Data":"3decd657ebe3fb2bf1121e01f69f429ca9985f3086451330f962a6be1bbcd55c"}
Mar 12 18:40:23.940305 master-0 kubenswrapper[29097]: I0312 18:40:23.939912 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-575866c55b-kl7nz"
Mar 12 18:40:23.986579 master-0 kubenswrapper[29097]: I0312 18:40:23.984309 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-575866c55b-kl7nz" podStartSLOduration=1.37166694 podStartE2EDuration="6.984286824s" podCreationTimestamp="2026-03-12 18:40:17 +0000 UTC" firstStartedPulling="2026-03-12 18:40:18.128542064 +0000 UTC m=+657.682522161" lastFinishedPulling="2026-03-12 18:40:23.741161948 +0000 UTC m=+663.295142045" observedRunningTime="2026-03-12 18:40:23.982045788 +0000 UTC m=+663.536025905" watchObservedRunningTime="2026-03-12 18:40:23.984286824 +0000 UTC m=+663.538266941"
Mar 12 18:40:24.949911 master-0 kubenswrapper[29097]: I0312 18:40:24.949846 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-575866c55b-kl7nz"
Mar 12 18:40:28.512726 master-0 kubenswrapper[29097]: I0312 18:40:28.512674 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl"]
Mar 12 18:40:28.514678 master-0 kubenswrapper[29097]: I0312 18:40:28.514127 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl"
Mar 12 18:40:28.534130 master-0 kubenswrapper[29097]: I0312 18:40:28.534064 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl"]
Mar 12 18:40:28.640216 master-0 kubenswrapper[29097]: I0312 18:40:28.640160 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vvfv\" (UniqueName: \"kubernetes.io/projected/cd31043d-b8a7-4d44-8404-c7605b0f163e-kube-api-access-6vvfv\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl\" (UID: \"cd31043d-b8a7-4d44-8404-c7605b0f163e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl"
Mar 12 18:40:28.640216 master-0 kubenswrapper[29097]: I0312 18:40:28.640217 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd31043d-b8a7-4d44-8404-c7605b0f163e-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl\" (UID: \"cd31043d-b8a7-4d44-8404-c7605b0f163e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl"
Mar 12 18:40:28.640435 master-0 kubenswrapper[29097]: I0312 18:40:28.640295 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd31043d-b8a7-4d44-8404-c7605b0f163e-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl\" (UID: \"cd31043d-b8a7-4d44-8404-c7605b0f163e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl"
Mar 12 18:40:28.741732 master-0 kubenswrapper[29097]: I0312 18:40:28.741649 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd31043d-b8a7-4d44-8404-c7605b0f163e-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl\" (UID: \"cd31043d-b8a7-4d44-8404-c7605b0f163e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl"
Mar 12 18:40:28.741732 master-0 kubenswrapper[29097]: I0312 18:40:28.741729 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd31043d-b8a7-4d44-8404-c7605b0f163e-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl\" (UID: \"cd31043d-b8a7-4d44-8404-c7605b0f163e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl"
Mar 12 18:40:28.742169 master-0 kubenswrapper[29097]: I0312 18:40:28.741826 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vvfv\" (UniqueName: \"kubernetes.io/projected/cd31043d-b8a7-4d44-8404-c7605b0f163e-kube-api-access-6vvfv\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl\" (UID: \"cd31043d-b8a7-4d44-8404-c7605b0f163e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl"
Mar 12 18:40:28.742620 master-0 kubenswrapper[29097]: I0312 18:40:28.742577 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd31043d-b8a7-4d44-8404-c7605b0f163e-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl\" (UID: \"cd31043d-b8a7-4d44-8404-c7605b0f163e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl"
Mar 12 18:40:28.742923 master-0 kubenswrapper[29097]: I0312 18:40:28.742877 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd31043d-b8a7-4d44-8404-c7605b0f163e-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl\" (UID: \"cd31043d-b8a7-4d44-8404-c7605b0f163e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl"
Mar 12 18:40:28.768695 master-0 kubenswrapper[29097]: I0312 18:40:28.768154 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vvfv\" (UniqueName: \"kubernetes.io/projected/cd31043d-b8a7-4d44-8404-c7605b0f163e-kube-api-access-6vvfv\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl\" (UID: \"cd31043d-b8a7-4d44-8404-c7605b0f163e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl"
Mar 12 18:40:28.828423 master-0 kubenswrapper[29097]: I0312 18:40:28.828342 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl"
Mar 12 18:40:29.258977 master-0 kubenswrapper[29097]: I0312 18:40:29.258814 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl"]
Mar 12 18:40:29.266997 master-0 kubenswrapper[29097]: W0312 18:40:29.266615 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd31043d_b8a7_4d44_8404_c7605b0f163e.slice/crio-78a042b7c3b1ef8383538749926fe0c592f99fa40998791c793fa37917ce9ff0 WatchSource:0}: Error finding container 78a042b7c3b1ef8383538749926fe0c592f99fa40998791c793fa37917ce9ff0: Status 404 returned error can't find the container with id 78a042b7c3b1ef8383538749926fe0c592f99fa40998791c793fa37917ce9ff0
Mar 12 18:40:29.507822 master-0 kubenswrapper[29097]: I0312 18:40:29.507765 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq"]
Mar 12 18:40:29.509721 master-0 kubenswrapper[29097]: I0312 18:40:29.509615 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq"
Mar 12 18:40:29.520215 master-0 kubenswrapper[29097]: I0312 18:40:29.520161 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq"]
Mar 12 18:40:29.558565 master-0 kubenswrapper[29097]: I0312 18:40:29.558497 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/231f4d11-dbec-4a3f-b32d-6c04e30506d8-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq\" (UID: \"231f4d11-dbec-4a3f-b32d-6c04e30506d8\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq"
Mar 12 18:40:29.558730 master-0 kubenswrapper[29097]: I0312 18:40:29.558572 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtzgn\" (UniqueName: \"kubernetes.io/projected/231f4d11-dbec-4a3f-b32d-6c04e30506d8-kube-api-access-jtzgn\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq\" (UID: \"231f4d11-dbec-4a3f-b32d-6c04e30506d8\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq"
Mar 12 18:40:29.558789 master-0 kubenswrapper[29097]: I0312 18:40:29.558748 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/231f4d11-dbec-4a3f-b32d-6c04e30506d8-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq\" (UID: \"231f4d11-dbec-4a3f-b32d-6c04e30506d8\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq"
Mar 12 18:40:29.663069 master-0 kubenswrapper[29097]: I0312 18:40:29.660315 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/231f4d11-dbec-4a3f-b32d-6c04e30506d8-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq\" (UID: \"231f4d11-dbec-4a3f-b32d-6c04e30506d8\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq" Mar 12 18:40:29.663069 master-0 kubenswrapper[29097]: I0312 18:40:29.660371 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtzgn\" (UniqueName: \"kubernetes.io/projected/231f4d11-dbec-4a3f-b32d-6c04e30506d8-kube-api-access-jtzgn\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq\" (UID: \"231f4d11-dbec-4a3f-b32d-6c04e30506d8\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq" Mar 12 18:40:29.663069 master-0 kubenswrapper[29097]: I0312 18:40:29.660426 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/231f4d11-dbec-4a3f-b32d-6c04e30506d8-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq\" (UID: \"231f4d11-dbec-4a3f-b32d-6c04e30506d8\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq" Mar 12 18:40:29.663069 master-0 kubenswrapper[29097]: I0312 18:40:29.660987 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/231f4d11-dbec-4a3f-b32d-6c04e30506d8-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq\" (UID: \"231f4d11-dbec-4a3f-b32d-6c04e30506d8\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq" Mar 12 18:40:29.663069 master-0 kubenswrapper[29097]: I0312 18:40:29.661240 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/231f4d11-dbec-4a3f-b32d-6c04e30506d8-bundle\") pod 
\"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq\" (UID: \"231f4d11-dbec-4a3f-b32d-6c04e30506d8\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq" Mar 12 18:40:29.689005 master-0 kubenswrapper[29097]: I0312 18:40:29.688933 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtzgn\" (UniqueName: \"kubernetes.io/projected/231f4d11-dbec-4a3f-b32d-6c04e30506d8-kube-api-access-jtzgn\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq\" (UID: \"231f4d11-dbec-4a3f-b32d-6c04e30506d8\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq" Mar 12 18:40:29.841407 master-0 kubenswrapper[29097]: I0312 18:40:29.841339 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq" Mar 12 18:40:29.988918 master-0 kubenswrapper[29097]: I0312 18:40:29.988861 29097 generic.go:334] "Generic (PLEG): container finished" podID="cd31043d-b8a7-4d44-8404-c7605b0f163e" containerID="013a4473e59852e155e78287b555d20f56709554e79865d55098b3b6b1b5efc5" exitCode=0 Mar 12 18:40:29.989100 master-0 kubenswrapper[29097]: I0312 18:40:29.988924 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl" event={"ID":"cd31043d-b8a7-4d44-8404-c7605b0f163e","Type":"ContainerDied","Data":"013a4473e59852e155e78287b555d20f56709554e79865d55098b3b6b1b5efc5"} Mar 12 18:40:29.989100 master-0 kubenswrapper[29097]: I0312 18:40:29.988962 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl" event={"ID":"cd31043d-b8a7-4d44-8404-c7605b0f163e","Type":"ContainerStarted","Data":"78a042b7c3b1ef8383538749926fe0c592f99fa40998791c793fa37917ce9ff0"} Mar 12 18:40:30.311393 master-0 
kubenswrapper[29097]: I0312 18:40:30.311325 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq"] Mar 12 18:40:30.356200 master-0 kubenswrapper[29097]: I0312 18:40:30.356146 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln"] Mar 12 18:40:30.358100 master-0 kubenswrapper[29097]: I0312 18:40:30.358055 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln" Mar 12 18:40:30.362601 master-0 kubenswrapper[29097]: I0312 18:40:30.362505 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln"] Mar 12 18:40:30.475253 master-0 kubenswrapper[29097]: I0312 18:40:30.475191 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g62h\" (UniqueName: \"kubernetes.io/projected/16dd9815-f78f-4503-a2a2-ea9a9e47a885-kube-api-access-6g62h\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln\" (UID: \"16dd9815-f78f-4503-a2a2-ea9a9e47a885\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln" Mar 12 18:40:30.475405 master-0 kubenswrapper[29097]: I0312 18:40:30.475281 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16dd9815-f78f-4503-a2a2-ea9a9e47a885-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln\" (UID: \"16dd9815-f78f-4503-a2a2-ea9a9e47a885\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln" Mar 12 18:40:30.475405 master-0 kubenswrapper[29097]: I0312 18:40:30.475388 29097 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16dd9815-f78f-4503-a2a2-ea9a9e47a885-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln\" (UID: \"16dd9815-f78f-4503-a2a2-ea9a9e47a885\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln" Mar 12 18:40:30.577338 master-0 kubenswrapper[29097]: I0312 18:40:30.577186 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g62h\" (UniqueName: \"kubernetes.io/projected/16dd9815-f78f-4503-a2a2-ea9a9e47a885-kube-api-access-6g62h\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln\" (UID: \"16dd9815-f78f-4503-a2a2-ea9a9e47a885\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln" Mar 12 18:40:30.577338 master-0 kubenswrapper[29097]: I0312 18:40:30.577267 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16dd9815-f78f-4503-a2a2-ea9a9e47a885-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln\" (UID: \"16dd9815-f78f-4503-a2a2-ea9a9e47a885\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln" Mar 12 18:40:30.577338 master-0 kubenswrapper[29097]: I0312 18:40:30.577325 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16dd9815-f78f-4503-a2a2-ea9a9e47a885-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln\" (UID: \"16dd9815-f78f-4503-a2a2-ea9a9e47a885\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln" Mar 12 18:40:30.577989 master-0 kubenswrapper[29097]: I0312 18:40:30.577842 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/16dd9815-f78f-4503-a2a2-ea9a9e47a885-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln\" (UID: \"16dd9815-f78f-4503-a2a2-ea9a9e47a885\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln" Mar 12 18:40:30.578352 master-0 kubenswrapper[29097]: I0312 18:40:30.578329 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16dd9815-f78f-4503-a2a2-ea9a9e47a885-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln\" (UID: \"16dd9815-f78f-4503-a2a2-ea9a9e47a885\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln" Mar 12 18:40:30.597254 master-0 kubenswrapper[29097]: I0312 18:40:30.597208 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g62h\" (UniqueName: \"kubernetes.io/projected/16dd9815-f78f-4503-a2a2-ea9a9e47a885-kube-api-access-6g62h\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln\" (UID: \"16dd9815-f78f-4503-a2a2-ea9a9e47a885\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln" Mar 12 18:40:30.768850 master-0 kubenswrapper[29097]: I0312 18:40:30.768789 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln" Mar 12 18:40:30.998533 master-0 kubenswrapper[29097]: I0312 18:40:30.998400 29097 generic.go:334] "Generic (PLEG): container finished" podID="231f4d11-dbec-4a3f-b32d-6c04e30506d8" containerID="771d01289e550d3c860ea1bb7d4860ad819ba0057c1c33f63e006dc88c2fb583" exitCode=0 Mar 12 18:40:30.999344 master-0 kubenswrapper[29097]: I0312 18:40:30.998474 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq" event={"ID":"231f4d11-dbec-4a3f-b32d-6c04e30506d8","Type":"ContainerDied","Data":"771d01289e550d3c860ea1bb7d4860ad819ba0057c1c33f63e006dc88c2fb583"} Mar 12 18:40:30.999410 master-0 kubenswrapper[29097]: I0312 18:40:30.999353 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq" event={"ID":"231f4d11-dbec-4a3f-b32d-6c04e30506d8","Type":"ContainerStarted","Data":"2e768897a5144b111a049d7f9a6bcdd2fff0f61d0ad99d65bc58300f7637378c"} Mar 12 18:40:31.935595 master-0 kubenswrapper[29097]: I0312 18:40:31.934636 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln"] Mar 12 18:40:32.008788 master-0 kubenswrapper[29097]: I0312 18:40:32.008320 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln" event={"ID":"16dd9815-f78f-4503-a2a2-ea9a9e47a885","Type":"ContainerStarted","Data":"15bac2aa5745c83071134660a8cd2d80f7990ff4163381db64c250d64a739fe4"} Mar 12 18:40:33.023999 master-0 kubenswrapper[29097]: I0312 18:40:33.023939 29097 generic.go:334] "Generic (PLEG): container finished" podID="16dd9815-f78f-4503-a2a2-ea9a9e47a885" containerID="56aaddab23a0d61d14e663f0d8eb75fc17e0f203772ad42cd00b9629fe9e7835" 
exitCode=0 Mar 12 18:40:33.023999 master-0 kubenswrapper[29097]: I0312 18:40:33.023989 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln" event={"ID":"16dd9815-f78f-4503-a2a2-ea9a9e47a885","Type":"ContainerDied","Data":"56aaddab23a0d61d14e663f0d8eb75fc17e0f203772ad42cd00b9629fe9e7835"} Mar 12 18:40:37.057151 master-0 kubenswrapper[29097]: I0312 18:40:37.057065 29097 generic.go:334] "Generic (PLEG): container finished" podID="231f4d11-dbec-4a3f-b32d-6c04e30506d8" containerID="fe06fae4c8ceeb715b54ca3918b09e9f676dfd7c8876bade08eb677ab9d5feca" exitCode=0 Mar 12 18:40:37.058170 master-0 kubenswrapper[29097]: I0312 18:40:37.057170 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq" event={"ID":"231f4d11-dbec-4a3f-b32d-6c04e30506d8","Type":"ContainerDied","Data":"fe06fae4c8ceeb715b54ca3918b09e9f676dfd7c8876bade08eb677ab9d5feca"} Mar 12 18:40:37.061182 master-0 kubenswrapper[29097]: I0312 18:40:37.061147 29097 generic.go:334] "Generic (PLEG): container finished" podID="cd31043d-b8a7-4d44-8404-c7605b0f163e" containerID="6c7e42e24dfe6a97f7a6fe7cac8feac9f00365b518fe2a825d1160cb02d78923" exitCode=0 Mar 12 18:40:37.061271 master-0 kubenswrapper[29097]: I0312 18:40:37.061195 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl" event={"ID":"cd31043d-b8a7-4d44-8404-c7605b0f163e","Type":"ContainerDied","Data":"6c7e42e24dfe6a97f7a6fe7cac8feac9f00365b518fe2a825d1160cb02d78923"} Mar 12 18:40:37.064052 master-0 kubenswrapper[29097]: I0312 18:40:37.063988 29097 generic.go:334] "Generic (PLEG): container finished" podID="16dd9815-f78f-4503-a2a2-ea9a9e47a885" containerID="c0ac4631719e57e5da1acefe5deff9ccf2ebf023e4d8c29d2f5f9e34bda0d97b" exitCode=0 Mar 12 18:40:37.064052 master-0 kubenswrapper[29097]: I0312 
18:40:37.064044 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln" event={"ID":"16dd9815-f78f-4503-a2a2-ea9a9e47a885","Type":"ContainerDied","Data":"c0ac4631719e57e5da1acefe5deff9ccf2ebf023e4d8c29d2f5f9e34bda0d97b"} Mar 12 18:40:38.075920 master-0 kubenswrapper[29097]: I0312 18:40:38.075872 29097 generic.go:334] "Generic (PLEG): container finished" podID="231f4d11-dbec-4a3f-b32d-6c04e30506d8" containerID="ccc3882e0980e9def6a2537b57ba52a1c8f5ec8b8ed915d027034a3ce5bb527b" exitCode=0 Mar 12 18:40:38.076442 master-0 kubenswrapper[29097]: I0312 18:40:38.075942 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq" event={"ID":"231f4d11-dbec-4a3f-b32d-6c04e30506d8","Type":"ContainerDied","Data":"ccc3882e0980e9def6a2537b57ba52a1c8f5ec8b8ed915d027034a3ce5bb527b"} Mar 12 18:40:38.078390 master-0 kubenswrapper[29097]: I0312 18:40:38.078363 29097 generic.go:334] "Generic (PLEG): container finished" podID="cd31043d-b8a7-4d44-8404-c7605b0f163e" containerID="aff0bf7cae0ca9160fe03af4ce10f43505fce1fa916ed216ac68ad8282d61977" exitCode=0 Mar 12 18:40:38.078470 master-0 kubenswrapper[29097]: I0312 18:40:38.078415 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl" event={"ID":"cd31043d-b8a7-4d44-8404-c7605b0f163e","Type":"ContainerDied","Data":"aff0bf7cae0ca9160fe03af4ce10f43505fce1fa916ed216ac68ad8282d61977"} Mar 12 18:40:38.080811 master-0 kubenswrapper[29097]: I0312 18:40:38.080779 29097 generic.go:334] "Generic (PLEG): container finished" podID="16dd9815-f78f-4503-a2a2-ea9a9e47a885" containerID="fb3982ba1fd204e2456d1723dff3d130292d249dfcf5c56174721c28ddcb8b1e" exitCode=0 Mar 12 18:40:38.080919 master-0 kubenswrapper[29097]: I0312 18:40:38.080856 29097 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln" event={"ID":"16dd9815-f78f-4503-a2a2-ea9a9e47a885","Type":"ContainerDied","Data":"fb3982ba1fd204e2456d1723dff3d130292d249dfcf5c56174721c28ddcb8b1e"} Mar 12 18:40:38.922342 master-0 kubenswrapper[29097]: I0312 18:40:38.922267 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2"] Mar 12 18:40:38.924218 master-0 kubenswrapper[29097]: I0312 18:40:38.924182 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2" Mar 12 18:40:38.933263 master-0 kubenswrapper[29097]: I0312 18:40:38.933220 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2"] Mar 12 18:40:39.016557 master-0 kubenswrapper[29097]: I0312 18:40:39.015575 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2\" (UID: \"a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2" Mar 12 18:40:39.016557 master-0 kubenswrapper[29097]: I0312 18:40:39.015658 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq8lm\" (UniqueName: \"kubernetes.io/projected/a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2-kube-api-access-fq8lm\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2\" (UID: \"a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2" Mar 12 18:40:39.016557 master-0 kubenswrapper[29097]: I0312 
18:40:39.015759 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2\" (UID: \"a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2" Mar 12 18:40:39.116856 master-0 kubenswrapper[29097]: I0312 18:40:39.116800 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2\" (UID: \"a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2" Mar 12 18:40:39.117433 master-0 kubenswrapper[29097]: I0312 18:40:39.116865 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq8lm\" (UniqueName: \"kubernetes.io/projected/a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2-kube-api-access-fq8lm\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2\" (UID: \"a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2" Mar 12 18:40:39.117433 master-0 kubenswrapper[29097]: I0312 18:40:39.116934 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2\" (UID: \"a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2" Mar 12 18:40:39.117433 master-0 kubenswrapper[29097]: I0312 18:40:39.117364 29097 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2\" (UID: \"a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2" Mar 12 18:40:39.117640 master-0 kubenswrapper[29097]: I0312 18:40:39.117592 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2\" (UID: \"a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2" Mar 12 18:40:39.151664 master-0 kubenswrapper[29097]: I0312 18:40:39.151599 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq8lm\" (UniqueName: \"kubernetes.io/projected/a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2-kube-api-access-fq8lm\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2\" (UID: \"a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2" Mar 12 18:40:39.242116 master-0 kubenswrapper[29097]: I0312 18:40:39.242043 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2" Mar 12 18:40:39.650618 master-0 kubenswrapper[29097]: I0312 18:40:39.650559 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq" Mar 12 18:40:39.658595 master-0 kubenswrapper[29097]: I0312 18:40:39.658177 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl" Mar 12 18:40:39.660809 master-0 kubenswrapper[29097]: I0312 18:40:39.660772 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln" Mar 12 18:40:39.737985 master-0 kubenswrapper[29097]: I0312 18:40:39.737868 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtzgn\" (UniqueName: \"kubernetes.io/projected/231f4d11-dbec-4a3f-b32d-6c04e30506d8-kube-api-access-jtzgn\") pod \"231f4d11-dbec-4a3f-b32d-6c04e30506d8\" (UID: \"231f4d11-dbec-4a3f-b32d-6c04e30506d8\") " Mar 12 18:40:39.738371 master-0 kubenswrapper[29097]: I0312 18:40:39.738354 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd31043d-b8a7-4d44-8404-c7605b0f163e-util\") pod \"cd31043d-b8a7-4d44-8404-c7605b0f163e\" (UID: \"cd31043d-b8a7-4d44-8404-c7605b0f163e\") " Mar 12 18:40:39.738471 master-0 kubenswrapper[29097]: I0312 18:40:39.738458 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd31043d-b8a7-4d44-8404-c7605b0f163e-bundle\") pod \"cd31043d-b8a7-4d44-8404-c7605b0f163e\" (UID: \"cd31043d-b8a7-4d44-8404-c7605b0f163e\") " Mar 12 18:40:39.738594 master-0 kubenswrapper[29097]: I0312 18:40:39.738581 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vvfv\" (UniqueName: \"kubernetes.io/projected/cd31043d-b8a7-4d44-8404-c7605b0f163e-kube-api-access-6vvfv\") pod \"cd31043d-b8a7-4d44-8404-c7605b0f163e\" (UID: \"cd31043d-b8a7-4d44-8404-c7605b0f163e\") " Mar 12 18:40:39.738700 master-0 kubenswrapper[29097]: I0312 18:40:39.738688 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-6g62h\" (UniqueName: \"kubernetes.io/projected/16dd9815-f78f-4503-a2a2-ea9a9e47a885-kube-api-access-6g62h\") pod \"16dd9815-f78f-4503-a2a2-ea9a9e47a885\" (UID: \"16dd9815-f78f-4503-a2a2-ea9a9e47a885\") " Mar 12 18:40:39.738815 master-0 kubenswrapper[29097]: I0312 18:40:39.738799 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/231f4d11-dbec-4a3f-b32d-6c04e30506d8-bundle\") pod \"231f4d11-dbec-4a3f-b32d-6c04e30506d8\" (UID: \"231f4d11-dbec-4a3f-b32d-6c04e30506d8\") " Mar 12 18:40:39.738939 master-0 kubenswrapper[29097]: I0312 18:40:39.738928 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16dd9815-f78f-4503-a2a2-ea9a9e47a885-util\") pod \"16dd9815-f78f-4503-a2a2-ea9a9e47a885\" (UID: \"16dd9815-f78f-4503-a2a2-ea9a9e47a885\") " Mar 12 18:40:39.739040 master-0 kubenswrapper[29097]: I0312 18:40:39.739029 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/231f4d11-dbec-4a3f-b32d-6c04e30506d8-util\") pod \"231f4d11-dbec-4a3f-b32d-6c04e30506d8\" (UID: \"231f4d11-dbec-4a3f-b32d-6c04e30506d8\") " Mar 12 18:40:39.739142 master-0 kubenswrapper[29097]: I0312 18:40:39.739130 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16dd9815-f78f-4503-a2a2-ea9a9e47a885-bundle\") pod \"16dd9815-f78f-4503-a2a2-ea9a9e47a885\" (UID: \"16dd9815-f78f-4503-a2a2-ea9a9e47a885\") " Mar 12 18:40:39.739377 master-0 kubenswrapper[29097]: I0312 18:40:39.739331 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd31043d-b8a7-4d44-8404-c7605b0f163e-bundle" (OuterVolumeSpecName: "bundle") pod "cd31043d-b8a7-4d44-8404-c7605b0f163e" (UID: "cd31043d-b8a7-4d44-8404-c7605b0f163e"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:40:39.741021 master-0 kubenswrapper[29097]: I0312 18:40:39.740194 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16dd9815-f78f-4503-a2a2-ea9a9e47a885-bundle" (OuterVolumeSpecName: "bundle") pod "16dd9815-f78f-4503-a2a2-ea9a9e47a885" (UID: "16dd9815-f78f-4503-a2a2-ea9a9e47a885"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:40:39.741021 master-0 kubenswrapper[29097]: I0312 18:40:39.740698 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/231f4d11-dbec-4a3f-b32d-6c04e30506d8-bundle" (OuterVolumeSpecName: "bundle") pod "231f4d11-dbec-4a3f-b32d-6c04e30506d8" (UID: "231f4d11-dbec-4a3f-b32d-6c04e30506d8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:40:39.745246 master-0 kubenswrapper[29097]: I0312 18:40:39.742999 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd31043d-b8a7-4d44-8404-c7605b0f163e-kube-api-access-6vvfv" (OuterVolumeSpecName: "kube-api-access-6vvfv") pod "cd31043d-b8a7-4d44-8404-c7605b0f163e" (UID: "cd31043d-b8a7-4d44-8404-c7605b0f163e"). InnerVolumeSpecName "kube-api-access-6vvfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:40:39.745495 master-0 kubenswrapper[29097]: I0312 18:40:39.745448 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16dd9815-f78f-4503-a2a2-ea9a9e47a885-util" (OuterVolumeSpecName: "util") pod "16dd9815-f78f-4503-a2a2-ea9a9e47a885" (UID: "16dd9815-f78f-4503-a2a2-ea9a9e47a885"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:40:39.745825 master-0 kubenswrapper[29097]: I0312 18:40:39.745781 29097 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd31043d-b8a7-4d44-8404-c7605b0f163e-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:40:39.747899 master-0 kubenswrapper[29097]: I0312 18:40:39.747825 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16dd9815-f78f-4503-a2a2-ea9a9e47a885-kube-api-access-6g62h" (OuterVolumeSpecName: "kube-api-access-6g62h") pod "16dd9815-f78f-4503-a2a2-ea9a9e47a885" (UID: "16dd9815-f78f-4503-a2a2-ea9a9e47a885"). InnerVolumeSpecName "kube-api-access-6g62h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:40:39.748493 master-0 kubenswrapper[29097]: I0312 18:40:39.748000 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/231f4d11-dbec-4a3f-b32d-6c04e30506d8-kube-api-access-jtzgn" (OuterVolumeSpecName: "kube-api-access-jtzgn") pod "231f4d11-dbec-4a3f-b32d-6c04e30506d8" (UID: "231f4d11-dbec-4a3f-b32d-6c04e30506d8"). InnerVolumeSpecName "kube-api-access-jtzgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:40:39.751772 master-0 kubenswrapper[29097]: I0312 18:40:39.751679 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd31043d-b8a7-4d44-8404-c7605b0f163e-util" (OuterVolumeSpecName: "util") pod "cd31043d-b8a7-4d44-8404-c7605b0f163e" (UID: "cd31043d-b8a7-4d44-8404-c7605b0f163e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:40:39.752114 master-0 kubenswrapper[29097]: I0312 18:40:39.752072 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/231f4d11-dbec-4a3f-b32d-6c04e30506d8-util" (OuterVolumeSpecName: "util") pod "231f4d11-dbec-4a3f-b32d-6c04e30506d8" (UID: "231f4d11-dbec-4a3f-b32d-6c04e30506d8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:40:39.788464 master-0 kubenswrapper[29097]: W0312 18:40:39.788401 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7bfa6d1_5d2a_44b3_9c4f_94ed2fe3e4a2.slice/crio-8cf143ff19b93c26fe1ed50d683b9eaba426bd230306a3f32cd40bfb6f06ab91 WatchSource:0}: Error finding container 8cf143ff19b93c26fe1ed50d683b9eaba426bd230306a3f32cd40bfb6f06ab91: Status 404 returned error can't find the container with id 8cf143ff19b93c26fe1ed50d683b9eaba426bd230306a3f32cd40bfb6f06ab91 Mar 12 18:40:39.793247 master-0 kubenswrapper[29097]: I0312 18:40:39.793180 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2"] Mar 12 18:40:39.849923 master-0 kubenswrapper[29097]: I0312 18:40:39.849851 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtzgn\" (UniqueName: \"kubernetes.io/projected/231f4d11-dbec-4a3f-b32d-6c04e30506d8-kube-api-access-jtzgn\") on node \"master-0\" DevicePath \"\"" Mar 12 18:40:39.850068 master-0 kubenswrapper[29097]: I0312 18:40:39.849938 29097 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd31043d-b8a7-4d44-8404-c7605b0f163e-util\") on node \"master-0\" DevicePath \"\"" Mar 12 18:40:39.850068 master-0 kubenswrapper[29097]: I0312 18:40:39.849963 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vvfv\" 
(UniqueName: \"kubernetes.io/projected/cd31043d-b8a7-4d44-8404-c7605b0f163e-kube-api-access-6vvfv\") on node \"master-0\" DevicePath \"\"" Mar 12 18:40:39.850068 master-0 kubenswrapper[29097]: I0312 18:40:39.849985 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g62h\" (UniqueName: \"kubernetes.io/projected/16dd9815-f78f-4503-a2a2-ea9a9e47a885-kube-api-access-6g62h\") on node \"master-0\" DevicePath \"\"" Mar 12 18:40:39.850068 master-0 kubenswrapper[29097]: I0312 18:40:39.850006 29097 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/231f4d11-dbec-4a3f-b32d-6c04e30506d8-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:40:39.850068 master-0 kubenswrapper[29097]: I0312 18:40:39.850027 29097 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16dd9815-f78f-4503-a2a2-ea9a9e47a885-util\") on node \"master-0\" DevicePath \"\"" Mar 12 18:40:39.850068 master-0 kubenswrapper[29097]: I0312 18:40:39.850046 29097 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/231f4d11-dbec-4a3f-b32d-6c04e30506d8-util\") on node \"master-0\" DevicePath \"\"" Mar 12 18:40:39.850068 master-0 kubenswrapper[29097]: I0312 18:40:39.850064 29097 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16dd9815-f78f-4503-a2a2-ea9a9e47a885-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:40:40.109442 master-0 kubenswrapper[29097]: I0312 18:40:40.109337 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq" Mar 12 18:40:40.109442 master-0 kubenswrapper[29097]: I0312 18:40:40.109347 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jccgq" event={"ID":"231f4d11-dbec-4a3f-b32d-6c04e30506d8","Type":"ContainerDied","Data":"2e768897a5144b111a049d7f9a6bcdd2fff0f61d0ad99d65bc58300f7637378c"} Mar 12 18:40:40.109909 master-0 kubenswrapper[29097]: I0312 18:40:40.109454 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e768897a5144b111a049d7f9a6bcdd2fff0f61d0ad99d65bc58300f7637378c" Mar 12 18:40:40.118646 master-0 kubenswrapper[29097]: I0312 18:40:40.115656 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl" event={"ID":"cd31043d-b8a7-4d44-8404-c7605b0f163e","Type":"ContainerDied","Data":"78a042b7c3b1ef8383538749926fe0c592f99fa40998791c793fa37917ce9ff0"} Mar 12 18:40:40.118646 master-0 kubenswrapper[29097]: I0312 18:40:40.115703 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78a042b7c3b1ef8383538749926fe0c592f99fa40998791c793fa37917ce9ff0" Mar 12 18:40:40.118646 master-0 kubenswrapper[29097]: I0312 18:40:40.115709 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5g97bl" Mar 12 18:40:40.118646 master-0 kubenswrapper[29097]: I0312 18:40:40.118118 29097 generic.go:334] "Generic (PLEG): container finished" podID="a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2" containerID="9c14ba8ac2c1a2421482958b4aa3ff456da3bb7776fc24b92cc4f3070daa9458" exitCode=0 Mar 12 18:40:40.118646 master-0 kubenswrapper[29097]: I0312 18:40:40.118176 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2" event={"ID":"a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2","Type":"ContainerDied","Data":"9c14ba8ac2c1a2421482958b4aa3ff456da3bb7776fc24b92cc4f3070daa9458"} Mar 12 18:40:40.118646 master-0 kubenswrapper[29097]: I0312 18:40:40.118202 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2" event={"ID":"a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2","Type":"ContainerStarted","Data":"8cf143ff19b93c26fe1ed50d683b9eaba426bd230306a3f32cd40bfb6f06ab91"} Mar 12 18:40:40.121666 master-0 kubenswrapper[29097]: I0312 18:40:40.121029 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln" event={"ID":"16dd9815-f78f-4503-a2a2-ea9a9e47a885","Type":"ContainerDied","Data":"15bac2aa5745c83071134660a8cd2d80f7990ff4163381db64c250d64a739fe4"} Mar 12 18:40:40.121666 master-0 kubenswrapper[29097]: I0312 18:40:40.121077 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15bac2aa5745c83071134660a8cd2d80f7990ff4163381db64c250d64a739fe4" Mar 12 18:40:40.121666 master-0 kubenswrapper[29097]: I0312 18:40:40.121168 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874sqlln" Mar 12 18:40:42.016906 master-0 kubenswrapper[29097]: I0312 18:40:42.016761 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-8n4w5"] Mar 12 18:40:42.017534 master-0 kubenswrapper[29097]: E0312 18:40:42.017175 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231f4d11-dbec-4a3f-b32d-6c04e30506d8" containerName="util" Mar 12 18:40:42.017534 master-0 kubenswrapper[29097]: I0312 18:40:42.017193 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="231f4d11-dbec-4a3f-b32d-6c04e30506d8" containerName="util" Mar 12 18:40:42.017534 master-0 kubenswrapper[29097]: E0312 18:40:42.017217 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd31043d-b8a7-4d44-8404-c7605b0f163e" containerName="pull" Mar 12 18:40:42.017534 master-0 kubenswrapper[29097]: I0312 18:40:42.017228 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd31043d-b8a7-4d44-8404-c7605b0f163e" containerName="pull" Mar 12 18:40:42.017534 master-0 kubenswrapper[29097]: E0312 18:40:42.017242 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16dd9815-f78f-4503-a2a2-ea9a9e47a885" containerName="pull" Mar 12 18:40:42.017534 master-0 kubenswrapper[29097]: I0312 18:40:42.017250 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="16dd9815-f78f-4503-a2a2-ea9a9e47a885" containerName="pull" Mar 12 18:40:42.017534 master-0 kubenswrapper[29097]: E0312 18:40:42.017260 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd31043d-b8a7-4d44-8404-c7605b0f163e" containerName="extract" Mar 12 18:40:42.017534 master-0 kubenswrapper[29097]: I0312 18:40:42.017270 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd31043d-b8a7-4d44-8404-c7605b0f163e" containerName="extract" Mar 12 18:40:42.017534 master-0 kubenswrapper[29097]: E0312 18:40:42.017283 
29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16dd9815-f78f-4503-a2a2-ea9a9e47a885" containerName="util" Mar 12 18:40:42.017534 master-0 kubenswrapper[29097]: I0312 18:40:42.017291 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="16dd9815-f78f-4503-a2a2-ea9a9e47a885" containerName="util" Mar 12 18:40:42.017534 master-0 kubenswrapper[29097]: E0312 18:40:42.017309 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231f4d11-dbec-4a3f-b32d-6c04e30506d8" containerName="pull" Mar 12 18:40:42.017534 master-0 kubenswrapper[29097]: I0312 18:40:42.017317 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="231f4d11-dbec-4a3f-b32d-6c04e30506d8" containerName="pull" Mar 12 18:40:42.017534 master-0 kubenswrapper[29097]: E0312 18:40:42.017338 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd31043d-b8a7-4d44-8404-c7605b0f163e" containerName="util" Mar 12 18:40:42.017534 master-0 kubenswrapper[29097]: I0312 18:40:42.017347 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd31043d-b8a7-4d44-8404-c7605b0f163e" containerName="util" Mar 12 18:40:42.017534 master-0 kubenswrapper[29097]: E0312 18:40:42.017368 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231f4d11-dbec-4a3f-b32d-6c04e30506d8" containerName="extract" Mar 12 18:40:42.017534 master-0 kubenswrapper[29097]: I0312 18:40:42.017376 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="231f4d11-dbec-4a3f-b32d-6c04e30506d8" containerName="extract" Mar 12 18:40:42.017534 master-0 kubenswrapper[29097]: E0312 18:40:42.017385 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16dd9815-f78f-4503-a2a2-ea9a9e47a885" containerName="extract" Mar 12 18:40:42.017534 master-0 kubenswrapper[29097]: I0312 18:40:42.017394 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="16dd9815-f78f-4503-a2a2-ea9a9e47a885" containerName="extract" Mar 12 18:40:42.018228 master-0 
kubenswrapper[29097]: I0312 18:40:42.017591 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="231f4d11-dbec-4a3f-b32d-6c04e30506d8" containerName="extract" Mar 12 18:40:42.018228 master-0 kubenswrapper[29097]: I0312 18:40:42.017617 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd31043d-b8a7-4d44-8404-c7605b0f163e" containerName="extract" Mar 12 18:40:42.018228 master-0 kubenswrapper[29097]: I0312 18:40:42.017655 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="16dd9815-f78f-4503-a2a2-ea9a9e47a885" containerName="extract" Mar 12 18:40:42.018228 master-0 kubenswrapper[29097]: I0312 18:40:42.018186 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-8n4w5" Mar 12 18:40:42.020617 master-0 kubenswrapper[29097]: I0312 18:40:42.020522 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 12 18:40:42.021164 master-0 kubenswrapper[29097]: I0312 18:40:42.021131 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 12 18:40:42.040252 master-0 kubenswrapper[29097]: I0312 18:40:42.040198 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-8n4w5"] Mar 12 18:40:42.088720 master-0 kubenswrapper[29097]: I0312 18:40:42.088621 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5mw6\" (UniqueName: \"kubernetes.io/projected/5ecc5d29-ffb0-45b2-b157-e3ec2c6567fc-kube-api-access-c5mw6\") pod \"nmstate-operator-796d4cfff4-8n4w5\" (UID: \"5ecc5d29-ffb0-45b2-b157-e3ec2c6567fc\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-8n4w5" Mar 12 18:40:42.151770 master-0 kubenswrapper[29097]: I0312 18:40:42.151712 29097 generic.go:334] "Generic (PLEG): container finished" 
podID="a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2" containerID="605d7884d3da6f3777d565dc849ab6406e59ff8bc5d49fdd344fcc0691e5ead3" exitCode=0 Mar 12 18:40:42.151770 master-0 kubenswrapper[29097]: I0312 18:40:42.151769 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2" event={"ID":"a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2","Type":"ContainerDied","Data":"605d7884d3da6f3777d565dc849ab6406e59ff8bc5d49fdd344fcc0691e5ead3"} Mar 12 18:40:42.190625 master-0 kubenswrapper[29097]: I0312 18:40:42.190261 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5mw6\" (UniqueName: \"kubernetes.io/projected/5ecc5d29-ffb0-45b2-b157-e3ec2c6567fc-kube-api-access-c5mw6\") pod \"nmstate-operator-796d4cfff4-8n4w5\" (UID: \"5ecc5d29-ffb0-45b2-b157-e3ec2c6567fc\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-8n4w5" Mar 12 18:40:42.210433 master-0 kubenswrapper[29097]: I0312 18:40:42.210364 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5mw6\" (UniqueName: \"kubernetes.io/projected/5ecc5d29-ffb0-45b2-b157-e3ec2c6567fc-kube-api-access-c5mw6\") pod \"nmstate-operator-796d4cfff4-8n4w5\" (UID: \"5ecc5d29-ffb0-45b2-b157-e3ec2c6567fc\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-8n4w5" Mar 12 18:40:42.338543 master-0 kubenswrapper[29097]: I0312 18:40:42.335881 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-8n4w5" Mar 12 18:40:42.680782 master-0 kubenswrapper[29097]: I0312 18:40:42.680623 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-8n4w5"] Mar 12 18:40:43.162774 master-0 kubenswrapper[29097]: I0312 18:40:43.162718 29097 generic.go:334] "Generic (PLEG): container finished" podID="a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2" containerID="b034c01ee74fc659b78e02d04423cca0bddb04bf19ebc24cbe8bbe0bfe7dad19" exitCode=0 Mar 12 18:40:43.163328 master-0 kubenswrapper[29097]: I0312 18:40:43.162798 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2" event={"ID":"a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2","Type":"ContainerDied","Data":"b034c01ee74fc659b78e02d04423cca0bddb04bf19ebc24cbe8bbe0bfe7dad19"} Mar 12 18:40:43.164863 master-0 kubenswrapper[29097]: I0312 18:40:43.164833 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-8n4w5" event={"ID":"5ecc5d29-ffb0-45b2-b157-e3ec2c6567fc","Type":"ContainerStarted","Data":"6df817ea77857c30ce99266ccd0904036110299cfd7d454d46fcf27c42dbd71f"} Mar 12 18:40:44.547885 master-0 kubenswrapper[29097]: I0312 18:40:44.547817 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2" Mar 12 18:40:44.636662 master-0 kubenswrapper[29097]: I0312 18:40:44.636614 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq8lm\" (UniqueName: \"kubernetes.io/projected/a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2-kube-api-access-fq8lm\") pod \"a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2\" (UID: \"a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2\") " Mar 12 18:40:44.636865 master-0 kubenswrapper[29097]: I0312 18:40:44.636799 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2-bundle\") pod \"a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2\" (UID: \"a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2\") " Mar 12 18:40:44.636865 master-0 kubenswrapper[29097]: I0312 18:40:44.636841 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2-util\") pod \"a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2\" (UID: \"a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2\") " Mar 12 18:40:44.639296 master-0 kubenswrapper[29097]: I0312 18:40:44.639240 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2-kube-api-access-fq8lm" (OuterVolumeSpecName: "kube-api-access-fq8lm") pod "a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2" (UID: "a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2"). InnerVolumeSpecName "kube-api-access-fq8lm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:40:44.639762 master-0 kubenswrapper[29097]: I0312 18:40:44.639738 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2-bundle" (OuterVolumeSpecName: "bundle") pod "a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2" (UID: "a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:40:44.657956 master-0 kubenswrapper[29097]: I0312 18:40:44.657901 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2-util" (OuterVolumeSpecName: "util") pod "a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2" (UID: "a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:40:44.739535 master-0 kubenswrapper[29097]: I0312 18:40:44.738860 29097 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:40:44.739535 master-0 kubenswrapper[29097]: I0312 18:40:44.738897 29097 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2-util\") on node \"master-0\" DevicePath \"\"" Mar 12 18:40:44.739535 master-0 kubenswrapper[29097]: I0312 18:40:44.738908 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq8lm\" (UniqueName: \"kubernetes.io/projected/a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2-kube-api-access-fq8lm\") on node \"master-0\" DevicePath \"\"" Mar 12 18:40:45.192306 master-0 kubenswrapper[29097]: I0312 18:40:45.192218 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2" 
event={"ID":"a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2","Type":"ContainerDied","Data":"8cf143ff19b93c26fe1ed50d683b9eaba426bd230306a3f32cd40bfb6f06ab91"} Mar 12 18:40:45.192306 master-0 kubenswrapper[29097]: I0312 18:40:45.192284 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cf143ff19b93c26fe1ed50d683b9eaba426bd230306a3f32cd40bfb6f06ab91" Mar 12 18:40:45.192659 master-0 kubenswrapper[29097]: I0312 18:40:45.192380 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xp6s2" Mar 12 18:40:46.206650 master-0 kubenswrapper[29097]: I0312 18:40:46.206505 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-8n4w5" event={"ID":"5ecc5d29-ffb0-45b2-b157-e3ec2c6567fc","Type":"ContainerStarted","Data":"6a447c23e9e6f24c96c1871c5fb1f53ff4b099cef978f1afb4735dfbe2681c50"} Mar 12 18:40:46.239195 master-0 kubenswrapper[29097]: I0312 18:40:46.239056 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-8n4w5" podStartSLOduration=2.470950843 podStartE2EDuration="5.239026267s" podCreationTimestamp="2026-03-12 18:40:41 +0000 UTC" firstStartedPulling="2026-03-12 18:40:42.710672194 +0000 UTC m=+682.264652291" lastFinishedPulling="2026-03-12 18:40:45.478747628 +0000 UTC m=+685.032727715" observedRunningTime="2026-03-12 18:40:46.233167291 +0000 UTC m=+685.787147478" watchObservedRunningTime="2026-03-12 18:40:46.239026267 +0000 UTC m=+685.793006434" Mar 12 18:40:51.295937 master-0 kubenswrapper[29097]: I0312 18:40:51.295864 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-jlvkz"] Mar 12 18:40:51.296652 master-0 kubenswrapper[29097]: E0312 18:40:51.296160 29097 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2" containerName="extract" Mar 12 18:40:51.296652 master-0 kubenswrapper[29097]: I0312 18:40:51.296174 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2" containerName="extract" Mar 12 18:40:51.296652 master-0 kubenswrapper[29097]: E0312 18:40:51.296193 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2" containerName="util" Mar 12 18:40:51.296652 master-0 kubenswrapper[29097]: I0312 18:40:51.296199 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2" containerName="util" Mar 12 18:40:51.296652 master-0 kubenswrapper[29097]: E0312 18:40:51.296228 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2" containerName="pull" Mar 12 18:40:51.296652 master-0 kubenswrapper[29097]: I0312 18:40:51.296234 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2" containerName="pull" Mar 12 18:40:51.296652 master-0 kubenswrapper[29097]: I0312 18:40:51.296387 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7bfa6d1-5d2a-44b3-9c4f-94ed2fe3e4a2" containerName="extract" Mar 12 18:40:51.296926 master-0 kubenswrapper[29097]: I0312 18:40:51.296881 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-jlvkz" Mar 12 18:40:51.299434 master-0 kubenswrapper[29097]: I0312 18:40:51.299379 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Mar 12 18:40:51.299555 master-0 kubenswrapper[29097]: I0312 18:40:51.299518 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Mar 12 18:40:51.319178 master-0 kubenswrapper[29097]: I0312 18:40:51.319115 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-jlvkz"] Mar 12 18:40:51.355894 master-0 kubenswrapper[29097]: I0312 18:40:51.354468 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/02eb0fe5-c524-4b72-a874-b912374814d2-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-jlvkz\" (UID: \"02eb0fe5-c524-4b72-a874-b912374814d2\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-jlvkz" Mar 12 18:40:51.355894 master-0 kubenswrapper[29097]: I0312 18:40:51.354542 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lzlm\" (UniqueName: \"kubernetes.io/projected/02eb0fe5-c524-4b72-a874-b912374814d2-kube-api-access-8lzlm\") pod \"cert-manager-operator-controller-manager-66c8bdd694-jlvkz\" (UID: \"02eb0fe5-c524-4b72-a874-b912374814d2\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-jlvkz" Mar 12 18:40:51.455763 master-0 kubenswrapper[29097]: I0312 18:40:51.455696 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/02eb0fe5-c524-4b72-a874-b912374814d2-tmp\") pod 
\"cert-manager-operator-controller-manager-66c8bdd694-jlvkz\" (UID: \"02eb0fe5-c524-4b72-a874-b912374814d2\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-jlvkz" Mar 12 18:40:51.455763 master-0 kubenswrapper[29097]: I0312 18:40:51.455763 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lzlm\" (UniqueName: \"kubernetes.io/projected/02eb0fe5-c524-4b72-a874-b912374814d2-kube-api-access-8lzlm\") pod \"cert-manager-operator-controller-manager-66c8bdd694-jlvkz\" (UID: \"02eb0fe5-c524-4b72-a874-b912374814d2\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-jlvkz" Mar 12 18:40:51.456500 master-0 kubenswrapper[29097]: I0312 18:40:51.456471 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/02eb0fe5-c524-4b72-a874-b912374814d2-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-jlvkz\" (UID: \"02eb0fe5-c524-4b72-a874-b912374814d2\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-jlvkz" Mar 12 18:40:51.485114 master-0 kubenswrapper[29097]: I0312 18:40:51.485061 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lzlm\" (UniqueName: \"kubernetes.io/projected/02eb0fe5-c524-4b72-a874-b912374814d2-kube-api-access-8lzlm\") pod \"cert-manager-operator-controller-manager-66c8bdd694-jlvkz\" (UID: \"02eb0fe5-c524-4b72-a874-b912374814d2\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-jlvkz" Mar 12 18:40:51.613765 master-0 kubenswrapper[29097]: I0312 18:40:51.613603 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-jlvkz" Mar 12 18:40:52.278132 master-0 kubenswrapper[29097]: I0312 18:40:52.278071 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-jlvkz"] Mar 12 18:40:53.253949 master-0 kubenswrapper[29097]: I0312 18:40:53.253873 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-jlvkz" event={"ID":"02eb0fe5-c524-4b72-a874-b912374814d2","Type":"ContainerStarted","Data":"c07cd96dc346a8dea1cf09acb3407280208fa153c006d1fe9361b2b3030248f9"} Mar 12 18:40:56.282917 master-0 kubenswrapper[29097]: I0312 18:40:56.282675 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-jlvkz" event={"ID":"02eb0fe5-c524-4b72-a874-b912374814d2","Type":"ContainerStarted","Data":"7d14ceefdc33f0620bbdd999dc362b0646f8dc0866a856343e6ce41bd17f8e72"} Mar 12 18:40:56.309353 master-0 kubenswrapper[29097]: I0312 18:40:56.309235 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-jlvkz" podStartSLOduration=1.763086742 podStartE2EDuration="5.309219707s" podCreationTimestamp="2026-03-12 18:40:51 +0000 UTC" firstStartedPulling="2026-03-12 18:40:52.280772728 +0000 UTC m=+691.834752835" lastFinishedPulling="2026-03-12 18:40:55.826905673 +0000 UTC m=+695.380885800" observedRunningTime="2026-03-12 18:40:56.30412107 +0000 UTC m=+695.858101207" watchObservedRunningTime="2026-03-12 18:40:56.309219707 +0000 UTC m=+695.863199814" Mar 12 18:41:00.954846 master-0 kubenswrapper[29097]: I0312 18:41:00.954789 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hxsm5"] Mar 12 18:41:00.955683 master-0 kubenswrapper[29097]: I0312 
18:41:00.955650 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-hxsm5" Mar 12 18:41:00.957852 master-0 kubenswrapper[29097]: I0312 18:41:00.957817 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 12 18:41:00.960960 master-0 kubenswrapper[29097]: I0312 18:41:00.960920 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 12 18:41:00.968189 master-0 kubenswrapper[29097]: I0312 18:41:00.968135 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hxsm5"] Mar 12 18:41:01.029087 master-0 kubenswrapper[29097]: I0312 18:41:01.028329 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bngft\" (UniqueName: \"kubernetes.io/projected/a4e486b8-44cc-4085-a834-701ce17bf26e-kube-api-access-bngft\") pod \"cert-manager-webhook-6888856db4-hxsm5\" (UID: \"a4e486b8-44cc-4085-a834-701ce17bf26e\") " pod="cert-manager/cert-manager-webhook-6888856db4-hxsm5" Mar 12 18:41:01.029087 master-0 kubenswrapper[29097]: I0312 18:41:01.028459 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4e486b8-44cc-4085-a834-701ce17bf26e-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hxsm5\" (UID: \"a4e486b8-44cc-4085-a834-701ce17bf26e\") " pod="cert-manager/cert-manager-webhook-6888856db4-hxsm5" Mar 12 18:41:01.130152 master-0 kubenswrapper[29097]: I0312 18:41:01.130107 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bngft\" (UniqueName: \"kubernetes.io/projected/a4e486b8-44cc-4085-a834-701ce17bf26e-kube-api-access-bngft\") pod \"cert-manager-webhook-6888856db4-hxsm5\" (UID: \"a4e486b8-44cc-4085-a834-701ce17bf26e\") " 
pod="cert-manager/cert-manager-webhook-6888856db4-hxsm5"
Mar 12 18:41:01.130383 master-0 kubenswrapper[29097]: I0312 18:41:01.130203 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4e486b8-44cc-4085-a834-701ce17bf26e-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hxsm5\" (UID: \"a4e486b8-44cc-4085-a834-701ce17bf26e\") " pod="cert-manager/cert-manager-webhook-6888856db4-hxsm5"
Mar 12 18:41:01.153018 master-0 kubenswrapper[29097]: I0312 18:41:01.152937 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4e486b8-44cc-4085-a834-701ce17bf26e-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hxsm5\" (UID: \"a4e486b8-44cc-4085-a834-701ce17bf26e\") " pod="cert-manager/cert-manager-webhook-6888856db4-hxsm5"
Mar 12 18:41:01.165744 master-0 kubenswrapper[29097]: I0312 18:41:01.165700 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bngft\" (UniqueName: \"kubernetes.io/projected/a4e486b8-44cc-4085-a834-701ce17bf26e-kube-api-access-bngft\") pod \"cert-manager-webhook-6888856db4-hxsm5\" (UID: \"a4e486b8-44cc-4085-a834-701ce17bf26e\") " pod="cert-manager/cert-manager-webhook-6888856db4-hxsm5"
Mar 12 18:41:01.272680 master-0 kubenswrapper[29097]: I0312 18:41:01.272334 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-hxsm5"
Mar 12 18:41:01.514581 master-0 kubenswrapper[29097]: I0312 18:41:01.499808 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-7l8b5"]
Mar 12 18:41:01.514581 master-0 kubenswrapper[29097]: I0312 18:41:01.500778 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-7l8b5"
Mar 12 18:41:01.532627 master-0 kubenswrapper[29097]: I0312 18:41:01.529188 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-7l8b5"]
Mar 12 18:41:01.653561 master-0 kubenswrapper[29097]: I0312 18:41:01.653313 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncdrj\" (UniqueName: \"kubernetes.io/projected/7f671a3b-ea65-4659-bc7a-ae9d4dfb8e24-kube-api-access-ncdrj\") pod \"cert-manager-cainjector-5545bd876-7l8b5\" (UID: \"7f671a3b-ea65-4659-bc7a-ae9d4dfb8e24\") " pod="cert-manager/cert-manager-cainjector-5545bd876-7l8b5"
Mar 12 18:41:01.653561 master-0 kubenswrapper[29097]: I0312 18:41:01.653416 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f671a3b-ea65-4659-bc7a-ae9d4dfb8e24-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-7l8b5\" (UID: \"7f671a3b-ea65-4659-bc7a-ae9d4dfb8e24\") " pod="cert-manager/cert-manager-cainjector-5545bd876-7l8b5"
Mar 12 18:41:01.754553 master-0 kubenswrapper[29097]: I0312 18:41:01.754480 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f671a3b-ea65-4659-bc7a-ae9d4dfb8e24-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-7l8b5\" (UID: \"7f671a3b-ea65-4659-bc7a-ae9d4dfb8e24\") " pod="cert-manager/cert-manager-cainjector-5545bd876-7l8b5"
Mar 12 18:41:01.754790 master-0 kubenswrapper[29097]: I0312 18:41:01.754590 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncdrj\" (UniqueName: \"kubernetes.io/projected/7f671a3b-ea65-4659-bc7a-ae9d4dfb8e24-kube-api-access-ncdrj\") pod \"cert-manager-cainjector-5545bd876-7l8b5\" (UID: \"7f671a3b-ea65-4659-bc7a-ae9d4dfb8e24\") " pod="cert-manager/cert-manager-cainjector-5545bd876-7l8b5"
Mar 12 18:41:01.780077 master-0 kubenswrapper[29097]: I0312 18:41:01.779980 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f671a3b-ea65-4659-bc7a-ae9d4dfb8e24-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-7l8b5\" (UID: \"7f671a3b-ea65-4659-bc7a-ae9d4dfb8e24\") " pod="cert-manager/cert-manager-cainjector-5545bd876-7l8b5"
Mar 12 18:41:01.785556 master-0 kubenswrapper[29097]: I0312 18:41:01.784560 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncdrj\" (UniqueName: \"kubernetes.io/projected/7f671a3b-ea65-4659-bc7a-ae9d4dfb8e24-kube-api-access-ncdrj\") pod \"cert-manager-cainjector-5545bd876-7l8b5\" (UID: \"7f671a3b-ea65-4659-bc7a-ae9d4dfb8e24\") " pod="cert-manager/cert-manager-cainjector-5545bd876-7l8b5"
Mar 12 18:41:01.841628 master-0 kubenswrapper[29097]: I0312 18:41:01.841037 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-7l8b5"
Mar 12 18:41:01.870311 master-0 kubenswrapper[29097]: I0312 18:41:01.870236 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hxsm5"]
Mar 12 18:41:02.324464 master-0 kubenswrapper[29097]: I0312 18:41:02.324402 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-hxsm5" event={"ID":"a4e486b8-44cc-4085-a834-701ce17bf26e","Type":"ContainerStarted","Data":"de0f7c607ab4b497eb72b916fb31115368b02905c1ea310d3e2b3ff35fa092ce"}
Mar 12 18:41:02.351492 master-0 kubenswrapper[29097]: I0312 18:41:02.351436 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-7l8b5"]
Mar 12 18:41:02.360733 master-0 kubenswrapper[29097]: W0312 18:41:02.360675 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f671a3b_ea65_4659_bc7a_ae9d4dfb8e24.slice/crio-a57f2bee837ca3d21491aaabe1f8eb8cbdebb0097bbfa09db2a28ec35b141a4e WatchSource:0}: Error finding container a57f2bee837ca3d21491aaabe1f8eb8cbdebb0097bbfa09db2a28ec35b141a4e: Status 404 returned error can't find the container with id a57f2bee837ca3d21491aaabe1f8eb8cbdebb0097bbfa09db2a28ec35b141a4e
Mar 12 18:41:03.336756 master-0 kubenswrapper[29097]: I0312 18:41:03.336631 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-7l8b5" event={"ID":"7f671a3b-ea65-4659-bc7a-ae9d4dfb8e24","Type":"ContainerStarted","Data":"a57f2bee837ca3d21491aaabe1f8eb8cbdebb0097bbfa09db2a28ec35b141a4e"}
Mar 12 18:41:05.011695 master-0 kubenswrapper[29097]: I0312 18:41:05.011643 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6759bbdbf5-h458c"]
Mar 12 18:41:05.015527 master-0 kubenswrapper[29097]: I0312 18:41:05.012607 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6759bbdbf5-h458c"
Mar 12 18:41:05.019280 master-0 kubenswrapper[29097]: I0312 18:41:05.016012 29097 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Mar 12 18:41:05.019280 master-0 kubenswrapper[29097]: I0312 18:41:05.016248 29097 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Mar 12 18:41:05.019280 master-0 kubenswrapper[29097]: I0312 18:41:05.016418 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Mar 12 18:41:05.019280 master-0 kubenswrapper[29097]: I0312 18:41:05.016540 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Mar 12 18:41:05.039564 master-0 kubenswrapper[29097]: I0312 18:41:05.035282 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6759bbdbf5-h458c"]
Mar 12 18:41:05.130538 master-0 kubenswrapper[29097]: I0312 18:41:05.130095 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/868ee97c-e6d1-48d8-9fd0-cf9b246480cb-webhook-cert\") pod \"metallb-operator-controller-manager-6759bbdbf5-h458c\" (UID: \"868ee97c-e6d1-48d8-9fd0-cf9b246480cb\") " pod="metallb-system/metallb-operator-controller-manager-6759bbdbf5-h458c"
Mar 12 18:41:05.130538 master-0 kubenswrapper[29097]: I0312 18:41:05.130145 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smzzg\" (UniqueName: \"kubernetes.io/projected/868ee97c-e6d1-48d8-9fd0-cf9b246480cb-kube-api-access-smzzg\") pod \"metallb-operator-controller-manager-6759bbdbf5-h458c\" (UID: \"868ee97c-e6d1-48d8-9fd0-cf9b246480cb\") " pod="metallb-system/metallb-operator-controller-manager-6759bbdbf5-h458c"
Mar 12 18:41:05.130538 master-0 kubenswrapper[29097]: I0312 18:41:05.130202 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/868ee97c-e6d1-48d8-9fd0-cf9b246480cb-apiservice-cert\") pod \"metallb-operator-controller-manager-6759bbdbf5-h458c\" (UID: \"868ee97c-e6d1-48d8-9fd0-cf9b246480cb\") " pod="metallb-system/metallb-operator-controller-manager-6759bbdbf5-h458c"
Mar 12 18:41:05.231538 master-0 kubenswrapper[29097]: I0312 18:41:05.231375 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/868ee97c-e6d1-48d8-9fd0-cf9b246480cb-apiservice-cert\") pod \"metallb-operator-controller-manager-6759bbdbf5-h458c\" (UID: \"868ee97c-e6d1-48d8-9fd0-cf9b246480cb\") " pod="metallb-system/metallb-operator-controller-manager-6759bbdbf5-h458c"
Mar 12 18:41:05.231780 master-0 kubenswrapper[29097]: I0312 18:41:05.231634 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/868ee97c-e6d1-48d8-9fd0-cf9b246480cb-webhook-cert\") pod \"metallb-operator-controller-manager-6759bbdbf5-h458c\" (UID: \"868ee97c-e6d1-48d8-9fd0-cf9b246480cb\") " pod="metallb-system/metallb-operator-controller-manager-6759bbdbf5-h458c"
Mar 12 18:41:05.231780 master-0 kubenswrapper[29097]: I0312 18:41:05.231657 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smzzg\" (UniqueName: \"kubernetes.io/projected/868ee97c-e6d1-48d8-9fd0-cf9b246480cb-kube-api-access-smzzg\") pod \"metallb-operator-controller-manager-6759bbdbf5-h458c\" (UID: \"868ee97c-e6d1-48d8-9fd0-cf9b246480cb\") " pod="metallb-system/metallb-operator-controller-manager-6759bbdbf5-h458c"
Mar 12 18:41:05.235529 master-0 kubenswrapper[29097]: I0312 18:41:05.234884 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/868ee97c-e6d1-48d8-9fd0-cf9b246480cb-webhook-cert\") pod \"metallb-operator-controller-manager-6759bbdbf5-h458c\" (UID: \"868ee97c-e6d1-48d8-9fd0-cf9b246480cb\") " pod="metallb-system/metallb-operator-controller-manager-6759bbdbf5-h458c"
Mar 12 18:41:05.243532 master-0 kubenswrapper[29097]: I0312 18:41:05.236467 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/868ee97c-e6d1-48d8-9fd0-cf9b246480cb-apiservice-cert\") pod \"metallb-operator-controller-manager-6759bbdbf5-h458c\" (UID: \"868ee97c-e6d1-48d8-9fd0-cf9b246480cb\") " pod="metallb-system/metallb-operator-controller-manager-6759bbdbf5-h458c"
Mar 12 18:41:05.254538 master-0 kubenswrapper[29097]: I0312 18:41:05.250051 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smzzg\" (UniqueName: \"kubernetes.io/projected/868ee97c-e6d1-48d8-9fd0-cf9b246480cb-kube-api-access-smzzg\") pod \"metallb-operator-controller-manager-6759bbdbf5-h458c\" (UID: \"868ee97c-e6d1-48d8-9fd0-cf9b246480cb\") " pod="metallb-system/metallb-operator-controller-manager-6759bbdbf5-h458c"
Mar 12 18:41:05.381410 master-0 kubenswrapper[29097]: I0312 18:41:05.381353 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6759bbdbf5-h458c"
Mar 12 18:41:05.673904 master-0 kubenswrapper[29097]: I0312 18:41:05.672767 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-cff58f8c6-zmgcc"]
Mar 12 18:41:05.675171 master-0 kubenswrapper[29097]: I0312 18:41:05.675133 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-cff58f8c6-zmgcc"
Mar 12 18:41:05.679979 master-0 kubenswrapper[29097]: I0312 18:41:05.679929 29097 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Mar 12 18:41:05.680300 master-0 kubenswrapper[29097]: I0312 18:41:05.680270 29097 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Mar 12 18:41:05.683029 master-0 kubenswrapper[29097]: I0312 18:41:05.681365 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-cff58f8c6-zmgcc"]
Mar 12 18:41:05.746378 master-0 kubenswrapper[29097]: I0312 18:41:05.744589 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nzk9\" (UniqueName: \"kubernetes.io/projected/8f450cb2-6f8f-455f-9dce-db01d41482ad-kube-api-access-7nzk9\") pod \"metallb-operator-webhook-server-cff58f8c6-zmgcc\" (UID: \"8f450cb2-6f8f-455f-9dce-db01d41482ad\") " pod="metallb-system/metallb-operator-webhook-server-cff58f8c6-zmgcc"
Mar 12 18:41:05.746378 master-0 kubenswrapper[29097]: I0312 18:41:05.744687 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f450cb2-6f8f-455f-9dce-db01d41482ad-webhook-cert\") pod \"metallb-operator-webhook-server-cff58f8c6-zmgcc\" (UID: \"8f450cb2-6f8f-455f-9dce-db01d41482ad\") " pod="metallb-system/metallb-operator-webhook-server-cff58f8c6-zmgcc"
Mar 12 18:41:05.746378 master-0 kubenswrapper[29097]: I0312 18:41:05.744707 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f450cb2-6f8f-455f-9dce-db01d41482ad-apiservice-cert\") pod \"metallb-operator-webhook-server-cff58f8c6-zmgcc\" (UID: \"8f450cb2-6f8f-455f-9dce-db01d41482ad\") " pod="metallb-system/metallb-operator-webhook-server-cff58f8c6-zmgcc"
Mar 12 18:41:05.846556 master-0 kubenswrapper[29097]: I0312 18:41:05.846444 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nzk9\" (UniqueName: \"kubernetes.io/projected/8f450cb2-6f8f-455f-9dce-db01d41482ad-kube-api-access-7nzk9\") pod \"metallb-operator-webhook-server-cff58f8c6-zmgcc\" (UID: \"8f450cb2-6f8f-455f-9dce-db01d41482ad\") " pod="metallb-system/metallb-operator-webhook-server-cff58f8c6-zmgcc"
Mar 12 18:41:05.846748 master-0 kubenswrapper[29097]: I0312 18:41:05.846605 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f450cb2-6f8f-455f-9dce-db01d41482ad-webhook-cert\") pod \"metallb-operator-webhook-server-cff58f8c6-zmgcc\" (UID: \"8f450cb2-6f8f-455f-9dce-db01d41482ad\") " pod="metallb-system/metallb-operator-webhook-server-cff58f8c6-zmgcc"
Mar 12 18:41:05.846748 master-0 kubenswrapper[29097]: I0312 18:41:05.846669 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f450cb2-6f8f-455f-9dce-db01d41482ad-apiservice-cert\") pod \"metallb-operator-webhook-server-cff58f8c6-zmgcc\" (UID: \"8f450cb2-6f8f-455f-9dce-db01d41482ad\") " pod="metallb-system/metallb-operator-webhook-server-cff58f8c6-zmgcc"
Mar 12 18:41:05.856440 master-0 kubenswrapper[29097]: I0312 18:41:05.856381 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8f450cb2-6f8f-455f-9dce-db01d41482ad-webhook-cert\") pod \"metallb-operator-webhook-server-cff58f8c6-zmgcc\" (UID: \"8f450cb2-6f8f-455f-9dce-db01d41482ad\") " pod="metallb-system/metallb-operator-webhook-server-cff58f8c6-zmgcc"
Mar 12 18:41:05.867464 master-0 kubenswrapper[29097]: I0312 18:41:05.867418 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nzk9\" (UniqueName: \"kubernetes.io/projected/8f450cb2-6f8f-455f-9dce-db01d41482ad-kube-api-access-7nzk9\") pod \"metallb-operator-webhook-server-cff58f8c6-zmgcc\" (UID: \"8f450cb2-6f8f-455f-9dce-db01d41482ad\") " pod="metallb-system/metallb-operator-webhook-server-cff58f8c6-zmgcc"
Mar 12 18:41:05.878365 master-0 kubenswrapper[29097]: I0312 18:41:05.878108 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8f450cb2-6f8f-455f-9dce-db01d41482ad-apiservice-cert\") pod \"metallb-operator-webhook-server-cff58f8c6-zmgcc\" (UID: \"8f450cb2-6f8f-455f-9dce-db01d41482ad\") " pod="metallb-system/metallb-operator-webhook-server-cff58f8c6-zmgcc"
Mar 12 18:41:05.969642 master-0 kubenswrapper[29097]: I0312 18:41:05.963931 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6759bbdbf5-h458c"]
Mar 12 18:41:05.969642 master-0 kubenswrapper[29097]: W0312 18:41:05.968580 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod868ee97c_e6d1_48d8_9fd0_cf9b246480cb.slice/crio-e2711de74ad80b8ca94eea3a9a9ed22b28021228c7a2a5673b81e99f9c180f1a WatchSource:0}: Error finding container e2711de74ad80b8ca94eea3a9a9ed22b28021228c7a2a5673b81e99f9c180f1a: Status 404 returned error can't find the container with id e2711de74ad80b8ca94eea3a9a9ed22b28021228c7a2a5673b81e99f9c180f1a
Mar 12 18:41:06.007040 master-0 kubenswrapper[29097]: I0312 18:41:06.006958 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-cff58f8c6-zmgcc"
Mar 12 18:41:06.373792 master-0 kubenswrapper[29097]: I0312 18:41:06.373733 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6759bbdbf5-h458c" event={"ID":"868ee97c-e6d1-48d8-9fd0-cf9b246480cb","Type":"ContainerStarted","Data":"e2711de74ad80b8ca94eea3a9a9ed22b28021228c7a2a5673b81e99f9c180f1a"}
Mar 12 18:41:06.425388 master-0 kubenswrapper[29097]: I0312 18:41:06.424630 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-cff58f8c6-zmgcc"]
Mar 12 18:41:06.430632 master-0 kubenswrapper[29097]: W0312 18:41:06.430084 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f450cb2_6f8f_455f_9dce_db01d41482ad.slice/crio-664e8be249867511713c45155e1cac81356ce95bbb3d6ef30f6bb8b433fe3e78 WatchSource:0}: Error finding container 664e8be249867511713c45155e1cac81356ce95bbb3d6ef30f6bb8b433fe3e78: Status 404 returned error can't find the container with id 664e8be249867511713c45155e1cac81356ce95bbb3d6ef30f6bb8b433fe3e78
Mar 12 18:41:07.383133 master-0 kubenswrapper[29097]: I0312 18:41:07.383042 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-cff58f8c6-zmgcc" event={"ID":"8f450cb2-6f8f-455f-9dce-db01d41482ad","Type":"ContainerStarted","Data":"664e8be249867511713c45155e1cac81356ce95bbb3d6ef30f6bb8b433fe3e78"}
Mar 12 18:41:10.147469 master-0 kubenswrapper[29097]: I0312 18:41:10.147409 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-v4s9q"]
Mar 12 18:41:10.148830 master-0 kubenswrapper[29097]: I0312 18:41:10.148806 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v4s9q"
Mar 12 18:41:10.154465 master-0 kubenswrapper[29097]: I0312 18:41:10.154196 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Mar 12 18:41:10.154465 master-0 kubenswrapper[29097]: I0312 18:41:10.154375 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Mar 12 18:41:10.163960 master-0 kubenswrapper[29097]: I0312 18:41:10.163837 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-v4s9q"]
Mar 12 18:41:10.261626 master-0 kubenswrapper[29097]: I0312 18:41:10.261567 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbbpf\" (UniqueName: \"kubernetes.io/projected/d97f72ef-7b66-40cf-a952-0bc79fb29dc8-kube-api-access-vbbpf\") pod \"obo-prometheus-operator-68bc856cb9-v4s9q\" (UID: \"d97f72ef-7b66-40cf-a952-0bc79fb29dc8\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v4s9q"
Mar 12 18:41:10.271310 master-0 kubenswrapper[29097]: I0312 18:41:10.270768 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-khbkn"]
Mar 12 18:41:10.271918 master-0 kubenswrapper[29097]: I0312 18:41:10.271890 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-khbkn"
Mar 12 18:41:10.282987 master-0 kubenswrapper[29097]: I0312 18:41:10.282933 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Mar 12 18:41:10.299825 master-0 kubenswrapper[29097]: I0312 18:41:10.299580 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-4wnq6"]
Mar 12 18:41:10.306146 master-0 kubenswrapper[29097]: I0312 18:41:10.300865 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-4wnq6"
Mar 12 18:41:10.306146 master-0 kubenswrapper[29097]: I0312 18:41:10.301269 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-khbkn"]
Mar 12 18:41:10.322203 master-0 kubenswrapper[29097]: I0312 18:41:10.322150 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-4wnq6"]
Mar 12 18:41:10.363891 master-0 kubenswrapper[29097]: I0312 18:41:10.363842 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e40dc5bd-5284-4d08-bae8-e9039e4eeed9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d4cf4f7f6-4wnq6\" (UID: \"e40dc5bd-5284-4d08-bae8-e9039e4eeed9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-4wnq6"
Mar 12 18:41:10.364022 master-0 kubenswrapper[29097]: I0312 18:41:10.363917 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fd3ae287-2b11-48b2-91c4-27be1c1fab35-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d4cf4f7f6-khbkn\" (UID: \"fd3ae287-2b11-48b2-91c4-27be1c1fab35\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-khbkn"
Mar 12 18:41:10.364022 master-0 kubenswrapper[29097]: I0312 18:41:10.363988 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e40dc5bd-5284-4d08-bae8-e9039e4eeed9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d4cf4f7f6-4wnq6\" (UID: \"e40dc5bd-5284-4d08-bae8-e9039e4eeed9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-4wnq6"
Mar 12 18:41:10.364091 master-0 kubenswrapper[29097]: I0312 18:41:10.364033 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fd3ae287-2b11-48b2-91c4-27be1c1fab35-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d4cf4f7f6-khbkn\" (UID: \"fd3ae287-2b11-48b2-91c4-27be1c1fab35\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-khbkn"
Mar 12 18:41:10.364091 master-0 kubenswrapper[29097]: I0312 18:41:10.364063 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbbpf\" (UniqueName: \"kubernetes.io/projected/d97f72ef-7b66-40cf-a952-0bc79fb29dc8-kube-api-access-vbbpf\") pod \"obo-prometheus-operator-68bc856cb9-v4s9q\" (UID: \"d97f72ef-7b66-40cf-a952-0bc79fb29dc8\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v4s9q"
Mar 12 18:41:10.389237 master-0 kubenswrapper[29097]: I0312 18:41:10.389182 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbbpf\" (UniqueName: \"kubernetes.io/projected/d97f72ef-7b66-40cf-a952-0bc79fb29dc8-kube-api-access-vbbpf\") pod \"obo-prometheus-operator-68bc856cb9-v4s9q\" (UID: \"d97f72ef-7b66-40cf-a952-0bc79fb29dc8\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v4s9q"
Mar 12 18:41:10.471664 master-0 kubenswrapper[29097]: I0312 18:41:10.471285 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e40dc5bd-5284-4d08-bae8-e9039e4eeed9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d4cf4f7f6-4wnq6\" (UID: \"e40dc5bd-5284-4d08-bae8-e9039e4eeed9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-4wnq6"
Mar 12 18:41:10.471664 master-0 kubenswrapper[29097]: I0312 18:41:10.471355 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fd3ae287-2b11-48b2-91c4-27be1c1fab35-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d4cf4f7f6-khbkn\" (UID: \"fd3ae287-2b11-48b2-91c4-27be1c1fab35\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-khbkn"
Mar 12 18:41:10.471664 master-0 kubenswrapper[29097]: I0312 18:41:10.471397 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e40dc5bd-5284-4d08-bae8-e9039e4eeed9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d4cf4f7f6-4wnq6\" (UID: \"e40dc5bd-5284-4d08-bae8-e9039e4eeed9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-4wnq6"
Mar 12 18:41:10.471664 master-0 kubenswrapper[29097]: I0312 18:41:10.471436 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fd3ae287-2b11-48b2-91c4-27be1c1fab35-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d4cf4f7f6-khbkn\" (UID: \"fd3ae287-2b11-48b2-91c4-27be1c1fab35\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-khbkn"
Mar 12 18:41:10.473956 master-0 kubenswrapper[29097]: I0312 18:41:10.473929 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-b24pg"]
Mar 12 18:41:10.475193 master-0 kubenswrapper[29097]: I0312 18:41:10.474766 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-b24pg"
Mar 12 18:41:10.475193 master-0 kubenswrapper[29097]: I0312 18:41:10.475142 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e40dc5bd-5284-4d08-bae8-e9039e4eeed9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d4cf4f7f6-4wnq6\" (UID: \"e40dc5bd-5284-4d08-bae8-e9039e4eeed9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-4wnq6"
Mar 12 18:41:10.476880 master-0 kubenswrapper[29097]: I0312 18:41:10.476827 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fd3ae287-2b11-48b2-91c4-27be1c1fab35-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d4cf4f7f6-khbkn\" (UID: \"fd3ae287-2b11-48b2-91c4-27be1c1fab35\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-khbkn"
Mar 12 18:41:10.477096 master-0 kubenswrapper[29097]: I0312 18:41:10.477067 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fd3ae287-2b11-48b2-91c4-27be1c1fab35-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d4cf4f7f6-khbkn\" (UID: \"fd3ae287-2b11-48b2-91c4-27be1c1fab35\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-khbkn"
Mar 12 18:41:10.481118 master-0 kubenswrapper[29097]: I0312 18:41:10.477108 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e40dc5bd-5284-4d08-bae8-e9039e4eeed9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d4cf4f7f6-4wnq6\" (UID: \"e40dc5bd-5284-4d08-bae8-e9039e4eeed9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-4wnq6"
Mar 12 18:41:10.499711 master-0 kubenswrapper[29097]: I0312 18:41:10.497330 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Mar 12 18:41:10.499711 master-0 kubenswrapper[29097]: I0312 18:41:10.498320 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v4s9q"
Mar 12 18:41:10.518160 master-0 kubenswrapper[29097]: I0312 18:41:10.512222 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-b24pg"]
Mar 12 18:41:10.576589 master-0 kubenswrapper[29097]: I0312 18:41:10.572373 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkf99\" (UniqueName: \"kubernetes.io/projected/50a8a842-5771-40fe-855c-0751dfe4115b-kube-api-access-mkf99\") pod \"observability-operator-59bdc8b94-b24pg\" (UID: \"50a8a842-5771-40fe-855c-0751dfe4115b\") " pod="openshift-operators/observability-operator-59bdc8b94-b24pg"
Mar 12 18:41:10.576589 master-0 kubenswrapper[29097]: I0312 18:41:10.572450 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/50a8a842-5771-40fe-855c-0751dfe4115b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-b24pg\" (UID: \"50a8a842-5771-40fe-855c-0751dfe4115b\") " pod="openshift-operators/observability-operator-59bdc8b94-b24pg"
Mar 12 18:41:10.664345 master-0 kubenswrapper[29097]: I0312 18:41:10.662718 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-khbkn"
Mar 12 18:41:10.683802 master-0 kubenswrapper[29097]: I0312 18:41:10.676189 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkf99\" (UniqueName: \"kubernetes.io/projected/50a8a842-5771-40fe-855c-0751dfe4115b-kube-api-access-mkf99\") pod \"observability-operator-59bdc8b94-b24pg\" (UID: \"50a8a842-5771-40fe-855c-0751dfe4115b\") " pod="openshift-operators/observability-operator-59bdc8b94-b24pg"
Mar 12 18:41:10.683802 master-0 kubenswrapper[29097]: I0312 18:41:10.676255 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/50a8a842-5771-40fe-855c-0751dfe4115b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-b24pg\" (UID: \"50a8a842-5771-40fe-855c-0751dfe4115b\") " pod="openshift-operators/observability-operator-59bdc8b94-b24pg"
Mar 12 18:41:10.683802 master-0 kubenswrapper[29097]: I0312 18:41:10.677096 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-4wnq6"
Mar 12 18:41:10.695546 master-0 kubenswrapper[29097]: I0312 18:41:10.684582 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/50a8a842-5771-40fe-855c-0751dfe4115b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-b24pg\" (UID: \"50a8a842-5771-40fe-855c-0751dfe4115b\") " pod="openshift-operators/observability-operator-59bdc8b94-b24pg"
Mar 12 18:41:10.695546 master-0 kubenswrapper[29097]: I0312 18:41:10.692847 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-xgqbf"]
Mar 12 18:41:10.703550 master-0 kubenswrapper[29097]: I0312 18:41:10.699962 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-xgqbf"
Mar 12 18:41:10.722708 master-0 kubenswrapper[29097]: I0312 18:41:10.717830 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkf99\" (UniqueName: \"kubernetes.io/projected/50a8a842-5771-40fe-855c-0751dfe4115b-kube-api-access-mkf99\") pod \"observability-operator-59bdc8b94-b24pg\" (UID: \"50a8a842-5771-40fe-855c-0751dfe4115b\") " pod="openshift-operators/observability-operator-59bdc8b94-b24pg"
Mar 12 18:41:10.785365 master-0 kubenswrapper[29097]: I0312 18:41:10.755390 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-xgqbf"]
Mar 12 18:41:10.785365 master-0 kubenswrapper[29097]: I0312 18:41:10.778267 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l5vs\" (UniqueName: \"kubernetes.io/projected/c13d3e3e-a429-46ae-9b68-a9d7e77ca311-kube-api-access-6l5vs\") pod \"perses-operator-5bf474d74f-xgqbf\" (UID: \"c13d3e3e-a429-46ae-9b68-a9d7e77ca311\") " pod="openshift-operators/perses-operator-5bf474d74f-xgqbf"
Mar 12 18:41:10.785365 master-0 kubenswrapper[29097]: I0312 18:41:10.778356 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c13d3e3e-a429-46ae-9b68-a9d7e77ca311-openshift-service-ca\") pod \"perses-operator-5bf474d74f-xgqbf\" (UID: \"c13d3e3e-a429-46ae-9b68-a9d7e77ca311\") " pod="openshift-operators/perses-operator-5bf474d74f-xgqbf"
Mar 12 18:41:10.955279 master-0 kubenswrapper[29097]: I0312 18:41:10.952501 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l5vs\" (UniqueName: \"kubernetes.io/projected/c13d3e3e-a429-46ae-9b68-a9d7e77ca311-kube-api-access-6l5vs\") pod \"perses-operator-5bf474d74f-xgqbf\" (UID: \"c13d3e3e-a429-46ae-9b68-a9d7e77ca311\") " pod="openshift-operators/perses-operator-5bf474d74f-xgqbf"
Mar 12 18:41:10.955279 master-0 kubenswrapper[29097]: I0312 18:41:10.952587 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c13d3e3e-a429-46ae-9b68-a9d7e77ca311-openshift-service-ca\") pod \"perses-operator-5bf474d74f-xgqbf\" (UID: \"c13d3e3e-a429-46ae-9b68-a9d7e77ca311\") " pod="openshift-operators/perses-operator-5bf474d74f-xgqbf"
Mar 12 18:41:10.955279 master-0 kubenswrapper[29097]: I0312 18:41:10.954354 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c13d3e3e-a429-46ae-9b68-a9d7e77ca311-openshift-service-ca\") pod \"perses-operator-5bf474d74f-xgqbf\" (UID: \"c13d3e3e-a429-46ae-9b68-a9d7e77ca311\") " pod="openshift-operators/perses-operator-5bf474d74f-xgqbf"
Mar 12 18:41:10.960540 master-0 kubenswrapper[29097]: I0312 18:41:10.958666 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-v4s9q"]
Mar 12 18:41:11.000337 master-0 kubenswrapper[29097]: I0312 18:41:10.999718 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l5vs\" (UniqueName: \"kubernetes.io/projected/c13d3e3e-a429-46ae-9b68-a9d7e77ca311-kube-api-access-6l5vs\") pod \"perses-operator-5bf474d74f-xgqbf\" (UID: \"c13d3e3e-a429-46ae-9b68-a9d7e77ca311\") " pod="openshift-operators/perses-operator-5bf474d74f-xgqbf"
Mar 12 18:41:11.013553 master-0 kubenswrapper[29097]: I0312 18:41:11.010066 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-b24pg"
Mar 12 18:41:11.086466 master-0 kubenswrapper[29097]: I0312 18:41:11.083487 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-xgqbf"
Mar 12 18:41:11.418279 master-0 kubenswrapper[29097]: I0312 18:41:11.414014 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-khbkn"]
Mar 12 18:41:11.428759 master-0 kubenswrapper[29097]: I0312 18:41:11.424582 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v4s9q" event={"ID":"d97f72ef-7b66-40cf-a952-0bc79fb29dc8","Type":"ContainerStarted","Data":"10bba9f764735f31f9252b0784c9d7488be9b0526c6d37d7e3ef286fdadb39dd"}
Mar 12 18:41:11.444994 master-0 kubenswrapper[29097]: I0312 18:41:11.444045 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-7l8b5" event={"ID":"7f671a3b-ea65-4659-bc7a-ae9d4dfb8e24","Type":"ContainerStarted","Data":"8cb55d2a6ba26c6ef45f9393b0511ae30f5e092916a76b38ecb3031bc82fcdc9"}
Mar 12 18:41:11.465273 master-0 kubenswrapper[29097]: I0312 18:41:11.465226 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-4wnq6"]
Mar 12 18:41:11.479015 master-0 kubenswrapper[29097]: I0312 18:41:11.478949 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-hxsm5" event={"ID":"a4e486b8-44cc-4085-a834-701ce17bf26e","Type":"ContainerStarted","Data":"807d024122547b822bf5e2bff4148b472aa1444b9bea0abd9a93486d76fa1375"}
Mar 12 18:41:11.479833 master-0 kubenswrapper[29097]: I0312 18:41:11.479801 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-hxsm5"
Mar 12 18:41:11.566492 master-0 kubenswrapper[29097]: I0312 18:41:11.565039 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-7l8b5" podStartSLOduration=2.495772311 podStartE2EDuration="10.565014477s" podCreationTimestamp="2026-03-12 18:41:01 +0000 UTC" firstStartedPulling="2026-03-12 18:41:02.362312511 +0000 UTC m=+701.916292608" lastFinishedPulling="2026-03-12 18:41:10.431554677 +0000 UTC m=+709.985534774" observedRunningTime="2026-03-12 18:41:11.477925074 +0000 UTC m=+711.031905171" watchObservedRunningTime="2026-03-12 18:41:11.565014477 +0000 UTC m=+711.118994574"
Mar 12 18:41:11.571689 master-0 kubenswrapper[29097]: I0312 18:41:11.570983 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-hxsm5" podStartSLOduration=3.052637575 podStartE2EDuration="11.570964646s" podCreationTimestamp="2026-03-12 18:41:00 +0000 UTC" firstStartedPulling="2026-03-12 18:41:01.905996526 +0000 UTC m=+701.459976623" lastFinishedPulling="2026-03-12 18:41:10.424323597 +0000 UTC m=+709.978303694" observedRunningTime="2026-03-12 18:41:11.564882884 +0000 UTC m=+711.118862981" watchObservedRunningTime="2026-03-12 18:41:11.570964646 +0000 UTC m=+711.124944743"
Mar 12 18:41:11.689374 master-0 kubenswrapper[29097]: I0312
18:41:11.689318 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-b24pg"] Mar 12 18:41:11.841112 master-0 kubenswrapper[29097]: I0312 18:41:11.840780 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-xgqbf"] Mar 12 18:41:11.857479 master-0 kubenswrapper[29097]: W0312 18:41:11.857391 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc13d3e3e_a429_46ae_9b68_a9d7e77ca311.slice/crio-26f8c0861d53b2299450408e672bf3f5e1a4b0c504e0cad29bb4bac7baa7be6a WatchSource:0}: Error finding container 26f8c0861d53b2299450408e672bf3f5e1a4b0c504e0cad29bb4bac7baa7be6a: Status 404 returned error can't find the container with id 26f8c0861d53b2299450408e672bf3f5e1a4b0c504e0cad29bb4bac7baa7be6a Mar 12 18:41:12.499061 master-0 kubenswrapper[29097]: I0312 18:41:12.498249 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-b24pg" event={"ID":"50a8a842-5771-40fe-855c-0751dfe4115b","Type":"ContainerStarted","Data":"40b34ab7efdd47f9788236a5ee550a604f69f6e67d551d8344ff459adb5a02d1"} Mar 12 18:41:12.501421 master-0 kubenswrapper[29097]: I0312 18:41:12.501243 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-4wnq6" event={"ID":"e40dc5bd-5284-4d08-bae8-e9039e4eeed9","Type":"ContainerStarted","Data":"00e393a3e5579090f59bdb2bbcd7b8b44b83f8b8bc843d4a03a5f72fa0cd3760"} Mar 12 18:41:12.505600 master-0 kubenswrapper[29097]: I0312 18:41:12.504973 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-xgqbf" event={"ID":"c13d3e3e-a429-46ae-9b68-a9d7e77ca311","Type":"ContainerStarted","Data":"26f8c0861d53b2299450408e672bf3f5e1a4b0c504e0cad29bb4bac7baa7be6a"} Mar 12 18:41:12.507487 master-0 kubenswrapper[29097]: 
I0312 18:41:12.507463 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-khbkn" event={"ID":"fd3ae287-2b11-48b2-91c4-27be1c1fab35","Type":"ContainerStarted","Data":"c6e297958ea0aefa0fa225eb0e9dde088ffe4543a080eead24d9839fde95e28b"} Mar 12 18:41:16.285413 master-0 kubenswrapper[29097]: I0312 18:41:16.285032 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-hxsm5" Mar 12 18:41:19.996732 master-0 kubenswrapper[29097]: I0312 18:41:19.994423 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-nfb95"] Mar 12 18:41:20.003567 master-0 kubenswrapper[29097]: I0312 18:41:20.003502 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-nfb95" Mar 12 18:41:20.008238 master-0 kubenswrapper[29097]: I0312 18:41:20.008205 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-nfb95"] Mar 12 18:41:20.097066 master-0 kubenswrapper[29097]: I0312 18:41:20.097016 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/860b0ba2-f36b-44ea-af9d-70ecea8fa0c1-bound-sa-token\") pod \"cert-manager-545d4d4674-nfb95\" (UID: \"860b0ba2-f36b-44ea-af9d-70ecea8fa0c1\") " pod="cert-manager/cert-manager-545d4d4674-nfb95" Mar 12 18:41:20.097365 master-0 kubenswrapper[29097]: I0312 18:41:20.097348 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjnhn\" (UniqueName: \"kubernetes.io/projected/860b0ba2-f36b-44ea-af9d-70ecea8fa0c1-kube-api-access-xjnhn\") pod \"cert-manager-545d4d4674-nfb95\" (UID: \"860b0ba2-f36b-44ea-af9d-70ecea8fa0c1\") " pod="cert-manager/cert-manager-545d4d4674-nfb95" Mar 12 18:41:20.198749 master-0 
kubenswrapper[29097]: I0312 18:41:20.198684 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjnhn\" (UniqueName: \"kubernetes.io/projected/860b0ba2-f36b-44ea-af9d-70ecea8fa0c1-kube-api-access-xjnhn\") pod \"cert-manager-545d4d4674-nfb95\" (UID: \"860b0ba2-f36b-44ea-af9d-70ecea8fa0c1\") " pod="cert-manager/cert-manager-545d4d4674-nfb95" Mar 12 18:41:20.199007 master-0 kubenswrapper[29097]: I0312 18:41:20.198817 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/860b0ba2-f36b-44ea-af9d-70ecea8fa0c1-bound-sa-token\") pod \"cert-manager-545d4d4674-nfb95\" (UID: \"860b0ba2-f36b-44ea-af9d-70ecea8fa0c1\") " pod="cert-manager/cert-manager-545d4d4674-nfb95" Mar 12 18:41:20.213295 master-0 kubenswrapper[29097]: I0312 18:41:20.213264 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/860b0ba2-f36b-44ea-af9d-70ecea8fa0c1-bound-sa-token\") pod \"cert-manager-545d4d4674-nfb95\" (UID: \"860b0ba2-f36b-44ea-af9d-70ecea8fa0c1\") " pod="cert-manager/cert-manager-545d4d4674-nfb95" Mar 12 18:41:20.217741 master-0 kubenswrapper[29097]: I0312 18:41:20.217612 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjnhn\" (UniqueName: \"kubernetes.io/projected/860b0ba2-f36b-44ea-af9d-70ecea8fa0c1-kube-api-access-xjnhn\") pod \"cert-manager-545d4d4674-nfb95\" (UID: \"860b0ba2-f36b-44ea-af9d-70ecea8fa0c1\") " pod="cert-manager/cert-manager-545d4d4674-nfb95" Mar 12 18:41:20.321711 master-0 kubenswrapper[29097]: I0312 18:41:20.321180 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-nfb95" Mar 12 18:41:21.201261 master-0 kubenswrapper[29097]: E0312 18:41:21.201215 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547 Mar 12 18:41:24.474036 master-0 kubenswrapper[29097]: I0312 18:41:24.473983 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-nfb95"] Mar 12 18:41:24.525994 master-0 kubenswrapper[29097]: W0312 18:41:24.525944 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod860b0ba2_f36b_44ea_af9d_70ecea8fa0c1.slice/crio-bfe7198fa38b15cd9a73685dd1ab164464685d381f95b7e883574310cb022aee WatchSource:0}: Error finding container bfe7198fa38b15cd9a73685dd1ab164464685d381f95b7e883574310cb022aee: Status 404 returned error can't find the container with id bfe7198fa38b15cd9a73685dd1ab164464685d381f95b7e883574310cb022aee Mar 12 18:41:24.649545 master-0 kubenswrapper[29097]: I0312 18:41:24.645789 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6759bbdbf5-h458c" event={"ID":"868ee97c-e6d1-48d8-9fd0-cf9b246480cb","Type":"ContainerStarted","Data":"bdea435ffd99a257a6f63e75d09d6ea97a8a06f3ca8636e35e51d053b60b05c4"} Mar 12 18:41:24.649545 master-0 kubenswrapper[29097]: I0312 18:41:24.646917 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6759bbdbf5-h458c" Mar 12 18:41:24.653552 master-0 kubenswrapper[29097]: I0312 18:41:24.652565 29097 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-b24pg" event={"ID":"50a8a842-5771-40fe-855c-0751dfe4115b","Type":"ContainerStarted","Data":"fd519c9c10c9651b8464dda639f3d883e032b958ef81940a5fa0cccff61cc4da"} Mar 12 18:41:24.656544 master-0 kubenswrapper[29097]: I0312 18:41:24.654137 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-b24pg" Mar 12 18:41:24.660540 master-0 kubenswrapper[29097]: I0312 18:41:24.659859 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-b24pg" Mar 12 18:41:24.670653 master-0 kubenswrapper[29097]: I0312 18:41:24.670248 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-cff58f8c6-zmgcc" event={"ID":"8f450cb2-6f8f-455f-9dce-db01d41482ad","Type":"ContainerStarted","Data":"720e15233a55ee0917a8be18684c10d1f8b5b71cf0c41737bd1f3ca1bb658e93"} Mar 12 18:41:24.674537 master-0 kubenswrapper[29097]: I0312 18:41:24.670876 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-cff58f8c6-zmgcc" Mar 12 18:41:24.677545 master-0 kubenswrapper[29097]: I0312 18:41:24.676172 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v4s9q" event={"ID":"d97f72ef-7b66-40cf-a952-0bc79fb29dc8","Type":"ContainerStarted","Data":"fe0809ad086162e1d7f22f6faf9368c70e6af4f880dc3fa46901db62f50d0c8b"} Mar 12 18:41:24.681527 master-0 kubenswrapper[29097]: I0312 18:41:24.677733 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-4wnq6" event={"ID":"e40dc5bd-5284-4d08-bae8-e9039e4eeed9","Type":"ContainerStarted","Data":"59b4981c2e2cc8661ba7551be6387ced5d7e63cd5e4c3c797197664976929cdb"} Mar 12 
18:41:24.684547 master-0 kubenswrapper[29097]: I0312 18:41:24.682138 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-nfb95" event={"ID":"860b0ba2-f36b-44ea-af9d-70ecea8fa0c1","Type":"ContainerStarted","Data":"bfe7198fa38b15cd9a73685dd1ab164464685d381f95b7e883574310cb022aee"} Mar 12 18:41:24.684547 master-0 kubenswrapper[29097]: I0312 18:41:24.683490 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-xgqbf" event={"ID":"c13d3e3e-a429-46ae-9b68-a9d7e77ca311","Type":"ContainerStarted","Data":"6e2c6ae03f34317286d28b83b7df62683d73bc55276c53bcce4e908717f36460"} Mar 12 18:41:24.684547 master-0 kubenswrapper[29097]: I0312 18:41:24.684109 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-xgqbf" Mar 12 18:41:24.688542 master-0 kubenswrapper[29097]: I0312 18:41:24.686036 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-khbkn" event={"ID":"fd3ae287-2b11-48b2-91c4-27be1c1fab35","Type":"ContainerStarted","Data":"3ed1e0d92b88a25e7bf9bcf56ff959b8e811020051fe7a927a49dc56b503b78a"} Mar 12 18:41:24.688542 master-0 kubenswrapper[29097]: I0312 18:41:24.688399 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6759bbdbf5-h458c" podStartSLOduration=4.277431463 podStartE2EDuration="20.688376666s" podCreationTimestamp="2026-03-12 18:41:04 +0000 UTC" firstStartedPulling="2026-03-12 18:41:05.976038483 +0000 UTC m=+705.530018590" lastFinishedPulling="2026-03-12 18:41:22.386983686 +0000 UTC m=+721.940963793" observedRunningTime="2026-03-12 18:41:24.675916105 +0000 UTC m=+724.229896212" watchObservedRunningTime="2026-03-12 18:41:24.688376666 +0000 UTC m=+724.242356773" Mar 12 18:41:24.770539 master-0 kubenswrapper[29097]: I0312 18:41:24.767682 29097 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-b24pg" podStartSLOduration=2.507240276 podStartE2EDuration="14.767666224s" podCreationTimestamp="2026-03-12 18:41:10 +0000 UTC" firstStartedPulling="2026-03-12 18:41:11.70980127 +0000 UTC m=+711.263781367" lastFinishedPulling="2026-03-12 18:41:23.970227218 +0000 UTC m=+723.524207315" observedRunningTime="2026-03-12 18:41:24.715684647 +0000 UTC m=+724.269664764" watchObservedRunningTime="2026-03-12 18:41:24.767666224 +0000 UTC m=+724.321646321" Mar 12 18:41:24.806216 master-0 kubenswrapper[29097]: I0312 18:41:24.806125 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-cff58f8c6-zmgcc" podStartSLOduration=2.3022345619999998 podStartE2EDuration="19.806106163s" podCreationTimestamp="2026-03-12 18:41:05 +0000 UTC" firstStartedPulling="2026-03-12 18:41:06.435187919 +0000 UTC m=+705.989168036" lastFinishedPulling="2026-03-12 18:41:23.93905954 +0000 UTC m=+723.493039637" observedRunningTime="2026-03-12 18:41:24.768676349 +0000 UTC m=+724.322656446" watchObservedRunningTime="2026-03-12 18:41:24.806106163 +0000 UTC m=+724.360086270" Mar 12 18:41:24.816561 master-0 kubenswrapper[29097]: I0312 18:41:24.814573 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-khbkn" podStartSLOduration=2.321583873 podStartE2EDuration="14.814553684s" podCreationTimestamp="2026-03-12 18:41:10 +0000 UTC" firstStartedPulling="2026-03-12 18:41:11.512266721 +0000 UTC m=+711.066246818" lastFinishedPulling="2026-03-12 18:41:24.005236532 +0000 UTC m=+723.559216629" observedRunningTime="2026-03-12 18:41:24.800933864 +0000 UTC m=+724.354913961" watchObservedRunningTime="2026-03-12 18:41:24.814553684 +0000 UTC m=+724.368533781" Mar 12 18:41:24.825945 master-0 kubenswrapper[29097]: I0312 18:41:24.825877 29097 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-xgqbf" podStartSLOduration=2.747366217 podStartE2EDuration="14.825860406s" podCreationTimestamp="2026-03-12 18:41:10 +0000 UTC" firstStartedPulling="2026-03-12 18:41:11.864761556 +0000 UTC m=+711.418741653" lastFinishedPulling="2026-03-12 18:41:23.943255745 +0000 UTC m=+723.497235842" observedRunningTime="2026-03-12 18:41:24.823010105 +0000 UTC m=+724.376990202" watchObservedRunningTime="2026-03-12 18:41:24.825860406 +0000 UTC m=+724.379840513" Mar 12 18:41:24.846361 master-0 kubenswrapper[29097]: I0312 18:41:24.846279 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d4cf4f7f6-4wnq6" podStartSLOduration=2.366800602 podStartE2EDuration="14.846260345s" podCreationTimestamp="2026-03-12 18:41:10 +0000 UTC" firstStartedPulling="2026-03-12 18:41:11.499699298 +0000 UTC m=+711.053679395" lastFinishedPulling="2026-03-12 18:41:23.979159041 +0000 UTC m=+723.533139138" observedRunningTime="2026-03-12 18:41:24.841229249 +0000 UTC m=+724.395209346" watchObservedRunningTime="2026-03-12 18:41:24.846260345 +0000 UTC m=+724.400240432" Mar 12 18:41:24.886240 master-0 kubenswrapper[29097]: I0312 18:41:24.886161 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v4s9q" podStartSLOduration=1.900804265 podStartE2EDuration="14.88614446s" podCreationTimestamp="2026-03-12 18:41:10 +0000 UTC" firstStartedPulling="2026-03-12 18:41:10.984327069 +0000 UTC m=+710.538307166" lastFinishedPulling="2026-03-12 18:41:23.969667264 +0000 UTC m=+723.523647361" observedRunningTime="2026-03-12 18:41:24.881604836 +0000 UTC m=+724.435584933" watchObservedRunningTime="2026-03-12 18:41:24.88614446 +0000 UTC m=+724.440124557" Mar 12 18:41:25.695696 master-0 kubenswrapper[29097]: I0312 18:41:25.695639 29097 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-nfb95" event={"ID":"860b0ba2-f36b-44ea-af9d-70ecea8fa0c1","Type":"ContainerStarted","Data":"6ae7794943c065d795a633739404097e8691d761834abdd263d227acc73eb2f5"} Mar 12 18:41:25.718971 master-0 kubenswrapper[29097]: I0312 18:41:25.718907 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-nfb95" podStartSLOduration=6.718888287 podStartE2EDuration="6.718888287s" podCreationTimestamp="2026-03-12 18:41:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:41:25.713489402 +0000 UTC m=+725.267469499" watchObservedRunningTime="2026-03-12 18:41:25.718888287 +0000 UTC m=+725.272868404" Mar 12 18:41:31.086147 master-0 kubenswrapper[29097]: I0312 18:41:31.086094 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-xgqbf" Mar 12 18:41:36.014054 master-0 kubenswrapper[29097]: I0312 18:41:36.014002 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-cff58f8c6-zmgcc" Mar 12 18:41:55.385505 master-0 kubenswrapper[29097]: I0312 18:41:55.385416 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6759bbdbf5-h458c" Mar 12 18:42:03.212434 master-0 kubenswrapper[29097]: I0312 18:42:03.212339 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-hz8wl"] Mar 12 18:42:03.214255 master-0 kubenswrapper[29097]: I0312 18:42:03.213691 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hz8wl" Mar 12 18:42:03.215945 master-0 kubenswrapper[29097]: I0312 18:42:03.215897 29097 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 12 18:42:03.229816 master-0 kubenswrapper[29097]: I0312 18:42:03.228590 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-rc6s5"] Mar 12 18:42:03.233075 master-0 kubenswrapper[29097]: I0312 18:42:03.233024 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.237026 master-0 kubenswrapper[29097]: I0312 18:42:03.236235 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 12 18:42:03.237026 master-0 kubenswrapper[29097]: I0312 18:42:03.236459 29097 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 12 18:42:03.251793 master-0 kubenswrapper[29097]: I0312 18:42:03.250369 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/735c0a7b-f9ed-40b4-92a2-fd05a3991503-frr-startup\") pod \"frr-k8s-rc6s5\" (UID: \"735c0a7b-f9ed-40b4-92a2-fd05a3991503\") " pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.251793 master-0 kubenswrapper[29097]: I0312 18:42:03.250452 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-hz8wl"] Mar 12 18:42:03.251793 master-0 kubenswrapper[29097]: I0312 18:42:03.250457 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mls5c\" (UniqueName: \"kubernetes.io/projected/bed3c813-ea3d-45fb-a830-59ad0830040a-kube-api-access-mls5c\") pod \"frr-k8s-webhook-server-bcc4b6f68-hz8wl\" (UID: \"bed3c813-ea3d-45fb-a830-59ad0830040a\") " 
pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hz8wl" Mar 12 18:42:03.251793 master-0 kubenswrapper[29097]: I0312 18:42:03.250610 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bjmj\" (UniqueName: \"kubernetes.io/projected/735c0a7b-f9ed-40b4-92a2-fd05a3991503-kube-api-access-6bjmj\") pod \"frr-k8s-rc6s5\" (UID: \"735c0a7b-f9ed-40b4-92a2-fd05a3991503\") " pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.251793 master-0 kubenswrapper[29097]: I0312 18:42:03.250638 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/735c0a7b-f9ed-40b4-92a2-fd05a3991503-frr-sockets\") pod \"frr-k8s-rc6s5\" (UID: \"735c0a7b-f9ed-40b4-92a2-fd05a3991503\") " pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.251793 master-0 kubenswrapper[29097]: I0312 18:42:03.250687 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/735c0a7b-f9ed-40b4-92a2-fd05a3991503-metrics\") pod \"frr-k8s-rc6s5\" (UID: \"735c0a7b-f9ed-40b4-92a2-fd05a3991503\") " pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.251793 master-0 kubenswrapper[29097]: I0312 18:42:03.250714 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/735c0a7b-f9ed-40b4-92a2-fd05a3991503-frr-conf\") pod \"frr-k8s-rc6s5\" (UID: \"735c0a7b-f9ed-40b4-92a2-fd05a3991503\") " pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.251793 master-0 kubenswrapper[29097]: I0312 18:42:03.250769 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/735c0a7b-f9ed-40b4-92a2-fd05a3991503-reloader\") pod \"frr-k8s-rc6s5\" (UID: \"735c0a7b-f9ed-40b4-92a2-fd05a3991503\") " 
pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.251793 master-0 kubenswrapper[29097]: I0312 18:42:03.250822 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/735c0a7b-f9ed-40b4-92a2-fd05a3991503-metrics-certs\") pod \"frr-k8s-rc6s5\" (UID: \"735c0a7b-f9ed-40b4-92a2-fd05a3991503\") " pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.251793 master-0 kubenswrapper[29097]: I0312 18:42:03.250860 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bed3c813-ea3d-45fb-a830-59ad0830040a-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-hz8wl\" (UID: \"bed3c813-ea3d-45fb-a830-59ad0830040a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hz8wl" Mar 12 18:42:03.332999 master-0 kubenswrapper[29097]: I0312 18:42:03.332891 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-49mjh"] Mar 12 18:42:03.342850 master-0 kubenswrapper[29097]: I0312 18:42:03.338641 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-49mjh" Mar 12 18:42:03.344276 master-0 kubenswrapper[29097]: I0312 18:42:03.344244 29097 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 12 18:42:03.344584 master-0 kubenswrapper[29097]: I0312 18:42:03.344544 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 12 18:42:03.344931 master-0 kubenswrapper[29097]: I0312 18:42:03.344913 29097 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 12 18:42:03.353029 master-0 kubenswrapper[29097]: I0312 18:42:03.352978 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/735c0a7b-f9ed-40b4-92a2-fd05a3991503-metrics\") pod \"frr-k8s-rc6s5\" (UID: \"735c0a7b-f9ed-40b4-92a2-fd05a3991503\") " pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.353209 master-0 kubenswrapper[29097]: I0312 18:42:03.353043 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/735c0a7b-f9ed-40b4-92a2-fd05a3991503-frr-conf\") pod \"frr-k8s-rc6s5\" (UID: \"735c0a7b-f9ed-40b4-92a2-fd05a3991503\") " pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.353209 master-0 kubenswrapper[29097]: I0312 18:42:03.353083 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/635ef9b0-f0bb-48be-a32c-99f2dc90e01f-memberlist\") pod \"speaker-49mjh\" (UID: \"635ef9b0-f0bb-48be-a32c-99f2dc90e01f\") " pod="metallb-system/speaker-49mjh" Mar 12 18:42:03.353209 master-0 kubenswrapper[29097]: I0312 18:42:03.353142 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/735c0a7b-f9ed-40b4-92a2-fd05a3991503-reloader\") pod 
\"frr-k8s-rc6s5\" (UID: \"735c0a7b-f9ed-40b4-92a2-fd05a3991503\") " pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.353209 master-0 kubenswrapper[29097]: I0312 18:42:03.353174 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/735c0a7b-f9ed-40b4-92a2-fd05a3991503-metrics-certs\") pod \"frr-k8s-rc6s5\" (UID: \"735c0a7b-f9ed-40b4-92a2-fd05a3991503\") " pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.353335 master-0 kubenswrapper[29097]: I0312 18:42:03.353220 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bed3c813-ea3d-45fb-a830-59ad0830040a-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-hz8wl\" (UID: \"bed3c813-ea3d-45fb-a830-59ad0830040a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hz8wl" Mar 12 18:42:03.353335 master-0 kubenswrapper[29097]: I0312 18:42:03.353247 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/635ef9b0-f0bb-48be-a32c-99f2dc90e01f-metallb-excludel2\") pod \"speaker-49mjh\" (UID: \"635ef9b0-f0bb-48be-a32c-99f2dc90e01f\") " pod="metallb-system/speaker-49mjh" Mar 12 18:42:03.353335 master-0 kubenswrapper[29097]: I0312 18:42:03.353304 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/735c0a7b-f9ed-40b4-92a2-fd05a3991503-frr-startup\") pod \"frr-k8s-rc6s5\" (UID: \"735c0a7b-f9ed-40b4-92a2-fd05a3991503\") " pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.353442 master-0 kubenswrapper[29097]: I0312 18:42:03.353333 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98ltj\" (UniqueName: \"kubernetes.io/projected/635ef9b0-f0bb-48be-a32c-99f2dc90e01f-kube-api-access-98ltj\") pod \"speaker-49mjh\" (UID: 
\"635ef9b0-f0bb-48be-a32c-99f2dc90e01f\") " pod="metallb-system/speaker-49mjh" Mar 12 18:42:03.353479 master-0 kubenswrapper[29097]: I0312 18:42:03.353461 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mls5c\" (UniqueName: \"kubernetes.io/projected/bed3c813-ea3d-45fb-a830-59ad0830040a-kube-api-access-mls5c\") pod \"frr-k8s-webhook-server-bcc4b6f68-hz8wl\" (UID: \"bed3c813-ea3d-45fb-a830-59ad0830040a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hz8wl" Mar 12 18:42:03.353533 master-0 kubenswrapper[29097]: I0312 18:42:03.353494 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bjmj\" (UniqueName: \"kubernetes.io/projected/735c0a7b-f9ed-40b4-92a2-fd05a3991503-kube-api-access-6bjmj\") pod \"frr-k8s-rc6s5\" (UID: \"735c0a7b-f9ed-40b4-92a2-fd05a3991503\") " pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.353571 master-0 kubenswrapper[29097]: I0312 18:42:03.353534 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/735c0a7b-f9ed-40b4-92a2-fd05a3991503-frr-sockets\") pod \"frr-k8s-rc6s5\" (UID: \"735c0a7b-f9ed-40b4-92a2-fd05a3991503\") " pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.353618 master-0 kubenswrapper[29097]: I0312 18:42:03.353575 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/635ef9b0-f0bb-48be-a32c-99f2dc90e01f-metrics-certs\") pod \"speaker-49mjh\" (UID: \"635ef9b0-f0bb-48be-a32c-99f2dc90e01f\") " pod="metallb-system/speaker-49mjh" Mar 12 18:42:03.353881 master-0 kubenswrapper[29097]: E0312 18:42:03.353736 29097 secret.go:189] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Mar 12 18:42:03.353881 master-0 kubenswrapper[29097]: E0312 18:42:03.353804 29097 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/735c0a7b-f9ed-40b4-92a2-fd05a3991503-metrics-certs podName:735c0a7b-f9ed-40b4-92a2-fd05a3991503 nodeName:}" failed. No retries permitted until 2026-03-12 18:42:03.853779804 +0000 UTC m=+763.407759911 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/735c0a7b-f9ed-40b4-92a2-fd05a3991503-metrics-certs") pod "frr-k8s-rc6s5" (UID: "735c0a7b-f9ed-40b4-92a2-fd05a3991503") : secret "frr-k8s-certs-secret" not found Mar 12 18:42:03.353986 master-0 kubenswrapper[29097]: I0312 18:42:03.353945 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/735c0a7b-f9ed-40b4-92a2-fd05a3991503-reloader\") pod \"frr-k8s-rc6s5\" (UID: \"735c0a7b-f9ed-40b4-92a2-fd05a3991503\") " pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.354176 master-0 kubenswrapper[29097]: I0312 18:42:03.354151 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/735c0a7b-f9ed-40b4-92a2-fd05a3991503-metrics\") pod \"frr-k8s-rc6s5\" (UID: \"735c0a7b-f9ed-40b4-92a2-fd05a3991503\") " pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.354360 master-0 kubenswrapper[29097]: I0312 18:42:03.354341 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/735c0a7b-f9ed-40b4-92a2-fd05a3991503-frr-conf\") pod \"frr-k8s-rc6s5\" (UID: \"735c0a7b-f9ed-40b4-92a2-fd05a3991503\") " pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.355285 master-0 kubenswrapper[29097]: I0312 18:42:03.354747 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/735c0a7b-f9ed-40b4-92a2-fd05a3991503-frr-sockets\") pod \"frr-k8s-rc6s5\" (UID: \"735c0a7b-f9ed-40b4-92a2-fd05a3991503\") " pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.356741 master-0 
kubenswrapper[29097]: I0312 18:42:03.356714 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/735c0a7b-f9ed-40b4-92a2-fd05a3991503-frr-startup\") pod \"frr-k8s-rc6s5\" (UID: \"735c0a7b-f9ed-40b4-92a2-fd05a3991503\") " pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.371400 master-0 kubenswrapper[29097]: I0312 18:42:03.367978 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bed3c813-ea3d-45fb-a830-59ad0830040a-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-hz8wl\" (UID: \"bed3c813-ea3d-45fb-a830-59ad0830040a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hz8wl" Mar 12 18:42:03.375677 master-0 kubenswrapper[29097]: I0312 18:42:03.375651 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mls5c\" (UniqueName: \"kubernetes.io/projected/bed3c813-ea3d-45fb-a830-59ad0830040a-kube-api-access-mls5c\") pod \"frr-k8s-webhook-server-bcc4b6f68-hz8wl\" (UID: \"bed3c813-ea3d-45fb-a830-59ad0830040a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hz8wl" Mar 12 18:42:03.393548 master-0 kubenswrapper[29097]: I0312 18:42:03.390648 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-dh446"] Mar 12 18:42:03.393548 master-0 kubenswrapper[29097]: I0312 18:42:03.393223 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-dh446" Mar 12 18:42:03.394314 master-0 kubenswrapper[29097]: I0312 18:42:03.394290 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bjmj\" (UniqueName: \"kubernetes.io/projected/735c0a7b-f9ed-40b4-92a2-fd05a3991503-kube-api-access-6bjmj\") pod \"frr-k8s-rc6s5\" (UID: \"735c0a7b-f9ed-40b4-92a2-fd05a3991503\") " pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.397703 master-0 kubenswrapper[29097]: I0312 18:42:03.397686 29097 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 12 18:42:03.422636 master-0 kubenswrapper[29097]: I0312 18:42:03.421584 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-dh446"] Mar 12 18:42:03.457072 master-0 kubenswrapper[29097]: I0312 18:42:03.457033 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98ltj\" (UniqueName: \"kubernetes.io/projected/635ef9b0-f0bb-48be-a32c-99f2dc90e01f-kube-api-access-98ltj\") pod \"speaker-49mjh\" (UID: \"635ef9b0-f0bb-48be-a32c-99f2dc90e01f\") " pod="metallb-system/speaker-49mjh" Mar 12 18:42:03.457366 master-0 kubenswrapper[29097]: I0312 18:42:03.457351 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffedbfc7-8b89-4bd0-8306-b5aa75878d42-cert\") pod \"controller-7bb4cc7c98-dh446\" (UID: \"ffedbfc7-8b89-4bd0-8306-b5aa75878d42\") " pod="metallb-system/controller-7bb4cc7c98-dh446" Mar 12 18:42:03.457628 master-0 kubenswrapper[29097]: I0312 18:42:03.457590 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ffedbfc7-8b89-4bd0-8306-b5aa75878d42-metrics-certs\") pod \"controller-7bb4cc7c98-dh446\" (UID: \"ffedbfc7-8b89-4bd0-8306-b5aa75878d42\") " 
pod="metallb-system/controller-7bb4cc7c98-dh446" Mar 12 18:42:03.457691 master-0 kubenswrapper[29097]: I0312 18:42:03.457661 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/635ef9b0-f0bb-48be-a32c-99f2dc90e01f-metrics-certs\") pod \"speaker-49mjh\" (UID: \"635ef9b0-f0bb-48be-a32c-99f2dc90e01f\") " pod="metallb-system/speaker-49mjh" Mar 12 18:42:03.457726 master-0 kubenswrapper[29097]: I0312 18:42:03.457716 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/635ef9b0-f0bb-48be-a32c-99f2dc90e01f-memberlist\") pod \"speaker-49mjh\" (UID: \"635ef9b0-f0bb-48be-a32c-99f2dc90e01f\") " pod="metallb-system/speaker-49mjh" Mar 12 18:42:03.457758 master-0 kubenswrapper[29097]: I0312 18:42:03.457744 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4twgg\" (UniqueName: \"kubernetes.io/projected/ffedbfc7-8b89-4bd0-8306-b5aa75878d42-kube-api-access-4twgg\") pod \"controller-7bb4cc7c98-dh446\" (UID: \"ffedbfc7-8b89-4bd0-8306-b5aa75878d42\") " pod="metallb-system/controller-7bb4cc7c98-dh446" Mar 12 18:42:03.457889 master-0 kubenswrapper[29097]: I0312 18:42:03.457867 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/635ef9b0-f0bb-48be-a32c-99f2dc90e01f-metallb-excludel2\") pod \"speaker-49mjh\" (UID: \"635ef9b0-f0bb-48be-a32c-99f2dc90e01f\") " pod="metallb-system/speaker-49mjh" Mar 12 18:42:03.457951 master-0 kubenswrapper[29097]: E0312 18:42:03.457927 29097 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 12 18:42:03.458008 master-0 kubenswrapper[29097]: E0312 18:42:03.457978 29097 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/635ef9b0-f0bb-48be-a32c-99f2dc90e01f-memberlist podName:635ef9b0-f0bb-48be-a32c-99f2dc90e01f nodeName:}" failed. No retries permitted until 2026-03-12 18:42:03.957963754 +0000 UTC m=+763.511943851 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/635ef9b0-f0bb-48be-a32c-99f2dc90e01f-memberlist") pod "speaker-49mjh" (UID: "635ef9b0-f0bb-48be-a32c-99f2dc90e01f") : secret "metallb-memberlist" not found Mar 12 18:42:03.458092 master-0 kubenswrapper[29097]: E0312 18:42:03.458071 29097 secret.go:189] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Mar 12 18:42:03.458229 master-0 kubenswrapper[29097]: E0312 18:42:03.458214 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/635ef9b0-f0bb-48be-a32c-99f2dc90e01f-metrics-certs podName:635ef9b0-f0bb-48be-a32c-99f2dc90e01f nodeName:}" failed. No retries permitted until 2026-03-12 18:42:03.958188769 +0000 UTC m=+763.512168926 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/635ef9b0-f0bb-48be-a32c-99f2dc90e01f-metrics-certs") pod "speaker-49mjh" (UID: "635ef9b0-f0bb-48be-a32c-99f2dc90e01f") : secret "speaker-certs-secret" not found Mar 12 18:42:03.458806 master-0 kubenswrapper[29097]: I0312 18:42:03.458671 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/635ef9b0-f0bb-48be-a32c-99f2dc90e01f-metallb-excludel2\") pod \"speaker-49mjh\" (UID: \"635ef9b0-f0bb-48be-a32c-99f2dc90e01f\") " pod="metallb-system/speaker-49mjh" Mar 12 18:42:03.474890 master-0 kubenswrapper[29097]: I0312 18:42:03.474818 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98ltj\" (UniqueName: \"kubernetes.io/projected/635ef9b0-f0bb-48be-a32c-99f2dc90e01f-kube-api-access-98ltj\") pod \"speaker-49mjh\" (UID: \"635ef9b0-f0bb-48be-a32c-99f2dc90e01f\") " pod="metallb-system/speaker-49mjh" Mar 12 18:42:03.559597 master-0 kubenswrapper[29097]: I0312 18:42:03.559498 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffedbfc7-8b89-4bd0-8306-b5aa75878d42-cert\") pod \"controller-7bb4cc7c98-dh446\" (UID: \"ffedbfc7-8b89-4bd0-8306-b5aa75878d42\") " pod="metallb-system/controller-7bb4cc7c98-dh446" Mar 12 18:42:03.559838 master-0 kubenswrapper[29097]: I0312 18:42:03.559613 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ffedbfc7-8b89-4bd0-8306-b5aa75878d42-metrics-certs\") pod \"controller-7bb4cc7c98-dh446\" (UID: \"ffedbfc7-8b89-4bd0-8306-b5aa75878d42\") " pod="metallb-system/controller-7bb4cc7c98-dh446" Mar 12 18:42:03.559838 master-0 kubenswrapper[29097]: I0312 18:42:03.559749 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4twgg\" (UniqueName: 
\"kubernetes.io/projected/ffedbfc7-8b89-4bd0-8306-b5aa75878d42-kube-api-access-4twgg\") pod \"controller-7bb4cc7c98-dh446\" (UID: \"ffedbfc7-8b89-4bd0-8306-b5aa75878d42\") " pod="metallb-system/controller-7bb4cc7c98-dh446" Mar 12 18:42:03.559838 master-0 kubenswrapper[29097]: E0312 18:42:03.559778 29097 secret.go:189] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Mar 12 18:42:03.560042 master-0 kubenswrapper[29097]: E0312 18:42:03.559845 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffedbfc7-8b89-4bd0-8306-b5aa75878d42-metrics-certs podName:ffedbfc7-8b89-4bd0-8306-b5aa75878d42 nodeName:}" failed. No retries permitted until 2026-03-12 18:42:04.059826045 +0000 UTC m=+763.613806142 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ffedbfc7-8b89-4bd0-8306-b5aa75878d42-metrics-certs") pod "controller-7bb4cc7c98-dh446" (UID: "ffedbfc7-8b89-4bd0-8306-b5aa75878d42") : secret "controller-certs-secret" not found Mar 12 18:42:03.562030 master-0 kubenswrapper[29097]: I0312 18:42:03.561969 29097 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 12 18:42:03.575827 master-0 kubenswrapper[29097]: I0312 18:42:03.573977 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffedbfc7-8b89-4bd0-8306-b5aa75878d42-cert\") pod \"controller-7bb4cc7c98-dh446\" (UID: \"ffedbfc7-8b89-4bd0-8306-b5aa75878d42\") " pod="metallb-system/controller-7bb4cc7c98-dh446" Mar 12 18:42:03.586545 master-0 kubenswrapper[29097]: I0312 18:42:03.576950 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4twgg\" (UniqueName: \"kubernetes.io/projected/ffedbfc7-8b89-4bd0-8306-b5aa75878d42-kube-api-access-4twgg\") pod \"controller-7bb4cc7c98-dh446\" (UID: 
\"ffedbfc7-8b89-4bd0-8306-b5aa75878d42\") " pod="metallb-system/controller-7bb4cc7c98-dh446" Mar 12 18:42:03.619697 master-0 kubenswrapper[29097]: I0312 18:42:03.619635 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hz8wl" Mar 12 18:42:03.867851 master-0 kubenswrapper[29097]: I0312 18:42:03.867784 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/735c0a7b-f9ed-40b4-92a2-fd05a3991503-metrics-certs\") pod \"frr-k8s-rc6s5\" (UID: \"735c0a7b-f9ed-40b4-92a2-fd05a3991503\") " pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.872378 master-0 kubenswrapper[29097]: I0312 18:42:03.872341 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/735c0a7b-f9ed-40b4-92a2-fd05a3991503-metrics-certs\") pod \"frr-k8s-rc6s5\" (UID: \"735c0a7b-f9ed-40b4-92a2-fd05a3991503\") " pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:03.969695 master-0 kubenswrapper[29097]: I0312 18:42:03.969601 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/635ef9b0-f0bb-48be-a32c-99f2dc90e01f-metrics-certs\") pod \"speaker-49mjh\" (UID: \"635ef9b0-f0bb-48be-a32c-99f2dc90e01f\") " pod="metallb-system/speaker-49mjh" Mar 12 18:42:03.969959 master-0 kubenswrapper[29097]: I0312 18:42:03.969718 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/635ef9b0-f0bb-48be-a32c-99f2dc90e01f-memberlist\") pod \"speaker-49mjh\" (UID: \"635ef9b0-f0bb-48be-a32c-99f2dc90e01f\") " pod="metallb-system/speaker-49mjh" Mar 12 18:42:03.969959 master-0 kubenswrapper[29097]: E0312 18:42:03.969867 29097 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 12 18:42:03.969959 
master-0 kubenswrapper[29097]: E0312 18:42:03.969948 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/635ef9b0-f0bb-48be-a32c-99f2dc90e01f-memberlist podName:635ef9b0-f0bb-48be-a32c-99f2dc90e01f nodeName:}" failed. No retries permitted until 2026-03-12 18:42:04.969931437 +0000 UTC m=+764.523911524 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/635ef9b0-f0bb-48be-a32c-99f2dc90e01f-memberlist") pod "speaker-49mjh" (UID: "635ef9b0-f0bb-48be-a32c-99f2dc90e01f") : secret "metallb-memberlist" not found Mar 12 18:42:03.974708 master-0 kubenswrapper[29097]: I0312 18:42:03.974658 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/635ef9b0-f0bb-48be-a32c-99f2dc90e01f-metrics-certs\") pod \"speaker-49mjh\" (UID: \"635ef9b0-f0bb-48be-a32c-99f2dc90e01f\") " pod="metallb-system/speaker-49mjh" Mar 12 18:42:04.034062 master-0 kubenswrapper[29097]: I0312 18:42:04.033981 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-hz8wl"] Mar 12 18:42:04.037563 master-0 kubenswrapper[29097]: I0312 18:42:04.037467 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:04.052137 master-0 kubenswrapper[29097]: I0312 18:42:04.052053 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hz8wl" event={"ID":"bed3c813-ea3d-45fb-a830-59ad0830040a","Type":"ContainerStarted","Data":"15d5120a057accdfdfda15342caa28fc2f9c546f2a94953e0230c4ffbe55f881"} Mar 12 18:42:04.071830 master-0 kubenswrapper[29097]: I0312 18:42:04.071757 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ffedbfc7-8b89-4bd0-8306-b5aa75878d42-metrics-certs\") pod \"controller-7bb4cc7c98-dh446\" (UID: \"ffedbfc7-8b89-4bd0-8306-b5aa75878d42\") " pod="metallb-system/controller-7bb4cc7c98-dh446" Mar 12 18:42:04.076994 master-0 kubenswrapper[29097]: I0312 18:42:04.076921 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ffedbfc7-8b89-4bd0-8306-b5aa75878d42-metrics-certs\") pod \"controller-7bb4cc7c98-dh446\" (UID: \"ffedbfc7-8b89-4bd0-8306-b5aa75878d42\") " pod="metallb-system/controller-7bb4cc7c98-dh446" Mar 12 18:42:04.107151 master-0 kubenswrapper[29097]: I0312 18:42:04.107085 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-dh446" Mar 12 18:42:04.556462 master-0 kubenswrapper[29097]: I0312 18:42:04.556411 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-dh446"] Mar 12 18:42:04.559681 master-0 kubenswrapper[29097]: W0312 18:42:04.559613 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffedbfc7_8b89_4bd0_8306_b5aa75878d42.slice/crio-57493da5589dc5e88c34b8ceb9b36582011abdca9d718e8ef1f9899ad01ea92e WatchSource:0}: Error finding container 57493da5589dc5e88c34b8ceb9b36582011abdca9d718e8ef1f9899ad01ea92e: Status 404 returned error can't find the container with id 57493da5589dc5e88c34b8ceb9b36582011abdca9d718e8ef1f9899ad01ea92e Mar 12 18:42:04.987310 master-0 kubenswrapper[29097]: I0312 18:42:04.987053 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/635ef9b0-f0bb-48be-a32c-99f2dc90e01f-memberlist\") pod \"speaker-49mjh\" (UID: \"635ef9b0-f0bb-48be-a32c-99f2dc90e01f\") " pod="metallb-system/speaker-49mjh" Mar 12 18:42:04.990273 master-0 kubenswrapper[29097]: I0312 18:42:04.990204 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/635ef9b0-f0bb-48be-a32c-99f2dc90e01f-memberlist\") pod \"speaker-49mjh\" (UID: \"635ef9b0-f0bb-48be-a32c-99f2dc90e01f\") " pod="metallb-system/speaker-49mjh" Mar 12 18:42:05.061065 master-0 kubenswrapper[29097]: I0312 18:42:05.061010 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rc6s5" event={"ID":"735c0a7b-f9ed-40b4-92a2-fd05a3991503","Type":"ContainerStarted","Data":"e4ce81df11501c94338066b7f5f9d70e955ec8252623e0f748871460038ecb39"} Mar 12 18:42:05.063099 master-0 kubenswrapper[29097]: I0312 18:42:05.063046 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/controller-7bb4cc7c98-dh446" event={"ID":"ffedbfc7-8b89-4bd0-8306-b5aa75878d42","Type":"ContainerStarted","Data":"51e610ab6e369843a375fb6b661e7345dbfef710d8688ac47e16bcefa768ed68"} Mar 12 18:42:05.063178 master-0 kubenswrapper[29097]: I0312 18:42:05.063104 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-dh446" event={"ID":"ffedbfc7-8b89-4bd0-8306-b5aa75878d42","Type":"ContainerStarted","Data":"57493da5589dc5e88c34b8ceb9b36582011abdca9d718e8ef1f9899ad01ea92e"} Mar 12 18:42:05.260369 master-0 kubenswrapper[29097]: I0312 18:42:05.259818 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-49mjh" Mar 12 18:42:05.283198 master-0 kubenswrapper[29097]: W0312 18:42:05.283138 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod635ef9b0_f0bb_48be_a32c_99f2dc90e01f.slice/crio-ecfc01a09684e98b131ccf0601e1ee9ae02e13176878996c77ae664b71839506 WatchSource:0}: Error finding container ecfc01a09684e98b131ccf0601e1ee9ae02e13176878996c77ae664b71839506: Status 404 returned error can't find the container with id ecfc01a09684e98b131ccf0601e1ee9ae02e13176878996c77ae664b71839506 Mar 12 18:42:05.372538 master-0 kubenswrapper[29097]: I0312 18:42:05.370763 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-qzdrp"] Mar 12 18:42:05.372538 master-0 kubenswrapper[29097]: I0312 18:42:05.372432 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-qzdrp" Mar 12 18:42:05.396781 master-0 kubenswrapper[29097]: I0312 18:42:05.396559 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdcp2\" (UniqueName: \"kubernetes.io/projected/0052e280-cd09-40a9-a843-edcf5051927e-kube-api-access-jdcp2\") pod \"nmstate-metrics-9b8c8685d-qzdrp\" (UID: \"0052e280-cd09-40a9-a843-edcf5051927e\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-qzdrp" Mar 12 18:42:05.408708 master-0 kubenswrapper[29097]: I0312 18:42:05.408576 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-9bpjp"] Mar 12 18:42:05.413010 master-0 kubenswrapper[29097]: I0312 18:42:05.409547 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-9bpjp" Mar 12 18:42:05.414944 master-0 kubenswrapper[29097]: I0312 18:42:05.414524 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 12 18:42:05.428210 master-0 kubenswrapper[29097]: I0312 18:42:05.426833 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-qzdrp"] Mar 12 18:42:05.445818 master-0 kubenswrapper[29097]: I0312 18:42:05.445401 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-9bpjp"] Mar 12 18:42:05.488064 master-0 kubenswrapper[29097]: I0312 18:42:05.488014 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-wwfx4"] Mar 12 18:42:05.489058 master-0 kubenswrapper[29097]: I0312 18:42:05.489034 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-wwfx4" Mar 12 18:42:05.519012 master-0 kubenswrapper[29097]: I0312 18:42:05.517742 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdcp2\" (UniqueName: \"kubernetes.io/projected/0052e280-cd09-40a9-a843-edcf5051927e-kube-api-access-jdcp2\") pod \"nmstate-metrics-9b8c8685d-qzdrp\" (UID: \"0052e280-cd09-40a9-a843-edcf5051927e\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-qzdrp" Mar 12 18:42:05.519012 master-0 kubenswrapper[29097]: I0312 18:42:05.517869 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff7s4\" (UniqueName: \"kubernetes.io/projected/9f83c98b-f576-41f7-827a-65585172c452-kube-api-access-ff7s4\") pod \"nmstate-webhook-5f558f5558-9bpjp\" (UID: \"9f83c98b-f576-41f7-827a-65585172c452\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-9bpjp" Mar 12 18:42:05.519012 master-0 kubenswrapper[29097]: I0312 18:42:05.517903 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9f83c98b-f576-41f7-827a-65585172c452-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-9bpjp\" (UID: \"9f83c98b-f576-41f7-827a-65585172c452\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-9bpjp" Mar 12 18:42:05.542222 master-0 kubenswrapper[29097]: I0312 18:42:05.542171 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdcp2\" (UniqueName: \"kubernetes.io/projected/0052e280-cd09-40a9-a843-edcf5051927e-kube-api-access-jdcp2\") pod \"nmstate-metrics-9b8c8685d-qzdrp\" (UID: \"0052e280-cd09-40a9-a843-edcf5051927e\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-qzdrp" Mar 12 18:42:05.603140 master-0 kubenswrapper[29097]: I0312 18:42:05.603061 29097 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-2hhd7"] Mar 12 18:42:05.612296 master-0 kubenswrapper[29097]: I0312 18:42:05.604260 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2hhd7" Mar 12 18:42:05.612296 master-0 kubenswrapper[29097]: I0312 18:42:05.605959 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 12 18:42:05.613979 master-0 kubenswrapper[29097]: I0312 18:42:05.613467 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 12 18:42:05.619331 master-0 kubenswrapper[29097]: I0312 18:42:05.619295 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9f83c98b-f576-41f7-827a-65585172c452-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-9bpjp\" (UID: \"9f83c98b-f576-41f7-827a-65585172c452\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-9bpjp" Mar 12 18:42:05.619464 master-0 kubenswrapper[29097]: I0312 18:42:05.619369 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cef2870a-7ede-4f27-8d32-0aaa58024d6d-nmstate-lock\") pod \"nmstate-handler-wwfx4\" (UID: \"cef2870a-7ede-4f27-8d32-0aaa58024d6d\") " pod="openshift-nmstate/nmstate-handler-wwfx4" Mar 12 18:42:05.619464 master-0 kubenswrapper[29097]: I0312 18:42:05.619418 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmjds\" (UniqueName: \"kubernetes.io/projected/cef2870a-7ede-4f27-8d32-0aaa58024d6d-kube-api-access-fmjds\") pod \"nmstate-handler-wwfx4\" (UID: \"cef2870a-7ede-4f27-8d32-0aaa58024d6d\") " pod="openshift-nmstate/nmstate-handler-wwfx4" Mar 12 18:42:05.619464 master-0 kubenswrapper[29097]: I0312 18:42:05.619438 29097 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cef2870a-7ede-4f27-8d32-0aaa58024d6d-dbus-socket\") pod \"nmstate-handler-wwfx4\" (UID: \"cef2870a-7ede-4f27-8d32-0aaa58024d6d\") " pod="openshift-nmstate/nmstate-handler-wwfx4" Mar 12 18:42:05.619579 master-0 kubenswrapper[29097]: I0312 18:42:05.619478 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cef2870a-7ede-4f27-8d32-0aaa58024d6d-ovs-socket\") pod \"nmstate-handler-wwfx4\" (UID: \"cef2870a-7ede-4f27-8d32-0aaa58024d6d\") " pod="openshift-nmstate/nmstate-handler-wwfx4" Mar 12 18:42:05.619579 master-0 kubenswrapper[29097]: I0312 18:42:05.619506 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff7s4\" (UniqueName: \"kubernetes.io/projected/9f83c98b-f576-41f7-827a-65585172c452-kube-api-access-ff7s4\") pod \"nmstate-webhook-5f558f5558-9bpjp\" (UID: \"9f83c98b-f576-41f7-827a-65585172c452\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-9bpjp" Mar 12 18:42:05.619848 master-0 kubenswrapper[29097]: E0312 18:42:05.619824 29097 secret.go:189] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Mar 12 18:42:05.619894 master-0 kubenswrapper[29097]: E0312 18:42:05.619871 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f83c98b-f576-41f7-827a-65585172c452-tls-key-pair podName:9f83c98b-f576-41f7-827a-65585172c452 nodeName:}" failed. No retries permitted until 2026-03-12 18:42:06.119856843 +0000 UTC m=+765.673836940 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/9f83c98b-f576-41f7-827a-65585172c452-tls-key-pair") pod "nmstate-webhook-5f558f5558-9bpjp" (UID: "9f83c98b-f576-41f7-827a-65585172c452") : secret "openshift-nmstate-webhook" not found Mar 12 18:42:05.620253 master-0 kubenswrapper[29097]: I0312 18:42:05.620223 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-2hhd7"] Mar 12 18:42:05.643548 master-0 kubenswrapper[29097]: I0312 18:42:05.643489 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff7s4\" (UniqueName: \"kubernetes.io/projected/9f83c98b-f576-41f7-827a-65585172c452-kube-api-access-ff7s4\") pod \"nmstate-webhook-5f558f5558-9bpjp\" (UID: \"9f83c98b-f576-41f7-827a-65585172c452\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-9bpjp" Mar 12 18:42:05.728561 master-0 kubenswrapper[29097]: I0312 18:42:05.727507 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cef2870a-7ede-4f27-8d32-0aaa58024d6d-nmstate-lock\") pod \"nmstate-handler-wwfx4\" (UID: \"cef2870a-7ede-4f27-8d32-0aaa58024d6d\") " pod="openshift-nmstate/nmstate-handler-wwfx4" Mar 12 18:42:05.728561 master-0 kubenswrapper[29097]: I0312 18:42:05.727618 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmjds\" (UniqueName: \"kubernetes.io/projected/cef2870a-7ede-4f27-8d32-0aaa58024d6d-kube-api-access-fmjds\") pod \"nmstate-handler-wwfx4\" (UID: \"cef2870a-7ede-4f27-8d32-0aaa58024d6d\") " pod="openshift-nmstate/nmstate-handler-wwfx4" Mar 12 18:42:05.728561 master-0 kubenswrapper[29097]: I0312 18:42:05.727619 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cef2870a-7ede-4f27-8d32-0aaa58024d6d-nmstate-lock\") pod \"nmstate-handler-wwfx4\" (UID: 
\"cef2870a-7ede-4f27-8d32-0aaa58024d6d\") " pod="openshift-nmstate/nmstate-handler-wwfx4" Mar 12 18:42:05.728561 master-0 kubenswrapper[29097]: I0312 18:42:05.727646 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cef2870a-7ede-4f27-8d32-0aaa58024d6d-dbus-socket\") pod \"nmstate-handler-wwfx4\" (UID: \"cef2870a-7ede-4f27-8d32-0aaa58024d6d\") " pod="openshift-nmstate/nmstate-handler-wwfx4" Mar 12 18:42:05.728561 master-0 kubenswrapper[29097]: I0312 18:42:05.727674 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcjhd\" (UniqueName: \"kubernetes.io/projected/db7d78bb-1030-44b4-a4f1-b644a2ccb171-kube-api-access-tcjhd\") pod \"nmstate-console-plugin-86f58fcf4-2hhd7\" (UID: \"db7d78bb-1030-44b4-a4f1-b644a2ccb171\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2hhd7" Mar 12 18:42:05.728561 master-0 kubenswrapper[29097]: I0312 18:42:05.727739 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cef2870a-7ede-4f27-8d32-0aaa58024d6d-ovs-socket\") pod \"nmstate-handler-wwfx4\" (UID: \"cef2870a-7ede-4f27-8d32-0aaa58024d6d\") " pod="openshift-nmstate/nmstate-handler-wwfx4" Mar 12 18:42:05.728561 master-0 kubenswrapper[29097]: I0312 18:42:05.727777 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/db7d78bb-1030-44b4-a4f1-b644a2ccb171-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-2hhd7\" (UID: \"db7d78bb-1030-44b4-a4f1-b644a2ccb171\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2hhd7" Mar 12 18:42:05.728561 master-0 kubenswrapper[29097]: I0312 18:42:05.727805 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/db7d78bb-1030-44b4-a4f1-b644a2ccb171-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-2hhd7\" (UID: \"db7d78bb-1030-44b4-a4f1-b644a2ccb171\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2hhd7" Mar 12 18:42:05.728561 master-0 kubenswrapper[29097]: I0312 18:42:05.727933 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cef2870a-7ede-4f27-8d32-0aaa58024d6d-dbus-socket\") pod \"nmstate-handler-wwfx4\" (UID: \"cef2870a-7ede-4f27-8d32-0aaa58024d6d\") " pod="openshift-nmstate/nmstate-handler-wwfx4" Mar 12 18:42:05.728561 master-0 kubenswrapper[29097]: I0312 18:42:05.727951 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cef2870a-7ede-4f27-8d32-0aaa58024d6d-ovs-socket\") pod \"nmstate-handler-wwfx4\" (UID: \"cef2870a-7ede-4f27-8d32-0aaa58024d6d\") " pod="openshift-nmstate/nmstate-handler-wwfx4" Mar 12 18:42:05.743957 master-0 kubenswrapper[29097]: I0312 18:42:05.738866 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-qzdrp" Mar 12 18:42:05.757699 master-0 kubenswrapper[29097]: I0312 18:42:05.751499 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmjds\" (UniqueName: \"kubernetes.io/projected/cef2870a-7ede-4f27-8d32-0aaa58024d6d-kube-api-access-fmjds\") pod \"nmstate-handler-wwfx4\" (UID: \"cef2870a-7ede-4f27-8d32-0aaa58024d6d\") " pod="openshift-nmstate/nmstate-handler-wwfx4" Mar 12 18:42:05.810088 master-0 kubenswrapper[29097]: I0312 18:42:05.809967 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-54dfb9c5c7-4blxh"] Mar 12 18:42:05.815023 master-0 kubenswrapper[29097]: I0312 18:42:05.810991 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:05.829729 master-0 kubenswrapper[29097]: I0312 18:42:05.829391 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-54dfb9c5c7-4blxh"] Mar 12 18:42:05.829923 master-0 kubenswrapper[29097]: I0312 18:42:05.829897 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcjhd\" (UniqueName: \"kubernetes.io/projected/db7d78bb-1030-44b4-a4f1-b644a2ccb171-kube-api-access-tcjhd\") pod \"nmstate-console-plugin-86f58fcf4-2hhd7\" (UID: \"db7d78bb-1030-44b4-a4f1-b644a2ccb171\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2hhd7" Mar 12 18:42:05.830712 master-0 kubenswrapper[29097]: I0312 18:42:05.830674 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/db7d78bb-1030-44b4-a4f1-b644a2ccb171-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-2hhd7\" (UID: \"db7d78bb-1030-44b4-a4f1-b644a2ccb171\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2hhd7" Mar 12 18:42:05.830791 master-0 kubenswrapper[29097]: I0312 18:42:05.830720 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/db7d78bb-1030-44b4-a4f1-b644a2ccb171-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-2hhd7\" (UID: \"db7d78bb-1030-44b4-a4f1-b644a2ccb171\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2hhd7" Mar 12 18:42:05.831618 master-0 kubenswrapper[29097]: I0312 18:42:05.831589 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/db7d78bb-1030-44b4-a4f1-b644a2ccb171-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-2hhd7\" (UID: \"db7d78bb-1030-44b4-a4f1-b644a2ccb171\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2hhd7" Mar 12 18:42:05.834590 
master-0 kubenswrapper[29097]: I0312 18:42:05.834563 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/db7d78bb-1030-44b4-a4f1-b644a2ccb171-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-2hhd7\" (UID: \"db7d78bb-1030-44b4-a4f1-b644a2ccb171\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2hhd7" Mar 12 18:42:05.860231 master-0 kubenswrapper[29097]: I0312 18:42:05.859348 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcjhd\" (UniqueName: \"kubernetes.io/projected/db7d78bb-1030-44b4-a4f1-b644a2ccb171-kube-api-access-tcjhd\") pod \"nmstate-console-plugin-86f58fcf4-2hhd7\" (UID: \"db7d78bb-1030-44b4-a4f1-b644a2ccb171\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2hhd7" Mar 12 18:42:05.917418 master-0 kubenswrapper[29097]: I0312 18:42:05.916793 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wwfx4" Mar 12 18:42:05.939096 master-0 kubenswrapper[29097]: I0312 18:42:05.935985 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2hhd7" Mar 12 18:42:05.958369 master-0 kubenswrapper[29097]: I0312 18:42:05.958302 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8a23285d-7a0d-4bf6-9e97-80a59a271486-oauth-serving-cert\") pod \"console-54dfb9c5c7-4blxh\" (UID: \"8a23285d-7a0d-4bf6-9e97-80a59a271486\") " pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:05.958369 master-0 kubenswrapper[29097]: I0312 18:42:05.958376 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8a23285d-7a0d-4bf6-9e97-80a59a271486-console-config\") pod \"console-54dfb9c5c7-4blxh\" (UID: \"8a23285d-7a0d-4bf6-9e97-80a59a271486\") " pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:05.958814 master-0 kubenswrapper[29097]: I0312 18:42:05.958452 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a23285d-7a0d-4bf6-9e97-80a59a271486-console-serving-cert\") pod \"console-54dfb9c5c7-4blxh\" (UID: \"8a23285d-7a0d-4bf6-9e97-80a59a271486\") " pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:05.958814 master-0 kubenswrapper[29097]: I0312 18:42:05.958475 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a23285d-7a0d-4bf6-9e97-80a59a271486-trusted-ca-bundle\") pod \"console-54dfb9c5c7-4blxh\" (UID: \"8a23285d-7a0d-4bf6-9e97-80a59a271486\") " pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:05.958814 master-0 kubenswrapper[29097]: I0312 18:42:05.958663 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8a23285d-7a0d-4bf6-9e97-80a59a271486-console-oauth-config\") pod \"console-54dfb9c5c7-4blxh\" (UID: \"8a23285d-7a0d-4bf6-9e97-80a59a271486\") " pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:05.958949 master-0 kubenswrapper[29097]: I0312 18:42:05.958805 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8a23285d-7a0d-4bf6-9e97-80a59a271486-service-ca\") pod \"console-54dfb9c5c7-4blxh\" (UID: \"8a23285d-7a0d-4bf6-9e97-80a59a271486\") " pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:05.958949 master-0 kubenswrapper[29097]: I0312 18:42:05.958932 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78jpn\" (UniqueName: \"kubernetes.io/projected/8a23285d-7a0d-4bf6-9e97-80a59a271486-kube-api-access-78jpn\") pod \"console-54dfb9c5c7-4blxh\" (UID: \"8a23285d-7a0d-4bf6-9e97-80a59a271486\") " pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:06.062452 master-0 kubenswrapper[29097]: I0312 18:42:06.060783 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a23285d-7a0d-4bf6-9e97-80a59a271486-console-serving-cert\") pod \"console-54dfb9c5c7-4blxh\" (UID: \"8a23285d-7a0d-4bf6-9e97-80a59a271486\") " pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:06.062452 master-0 kubenswrapper[29097]: I0312 18:42:06.060851 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a23285d-7a0d-4bf6-9e97-80a59a271486-trusted-ca-bundle\") pod \"console-54dfb9c5c7-4blxh\" (UID: \"8a23285d-7a0d-4bf6-9e97-80a59a271486\") " pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:06.062452 master-0 kubenswrapper[29097]: I0312 
18:42:06.060904 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8a23285d-7a0d-4bf6-9e97-80a59a271486-console-oauth-config\") pod \"console-54dfb9c5c7-4blxh\" (UID: \"8a23285d-7a0d-4bf6-9e97-80a59a271486\") " pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:06.062452 master-0 kubenswrapper[29097]: I0312 18:42:06.060937 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8a23285d-7a0d-4bf6-9e97-80a59a271486-service-ca\") pod \"console-54dfb9c5c7-4blxh\" (UID: \"8a23285d-7a0d-4bf6-9e97-80a59a271486\") " pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:06.062452 master-0 kubenswrapper[29097]: I0312 18:42:06.060980 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78jpn\" (UniqueName: \"kubernetes.io/projected/8a23285d-7a0d-4bf6-9e97-80a59a271486-kube-api-access-78jpn\") pod \"console-54dfb9c5c7-4blxh\" (UID: \"8a23285d-7a0d-4bf6-9e97-80a59a271486\") " pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:06.062452 master-0 kubenswrapper[29097]: I0312 18:42:06.061079 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8a23285d-7a0d-4bf6-9e97-80a59a271486-oauth-serving-cert\") pod \"console-54dfb9c5c7-4blxh\" (UID: \"8a23285d-7a0d-4bf6-9e97-80a59a271486\") " pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:06.062452 master-0 kubenswrapper[29097]: I0312 18:42:06.061109 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8a23285d-7a0d-4bf6-9e97-80a59a271486-console-config\") pod \"console-54dfb9c5c7-4blxh\" (UID: \"8a23285d-7a0d-4bf6-9e97-80a59a271486\") " pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 
18:42:06.062452 master-0 kubenswrapper[29097]: I0312 18:42:06.062192 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8a23285d-7a0d-4bf6-9e97-80a59a271486-console-config\") pod \"console-54dfb9c5c7-4blxh\" (UID: \"8a23285d-7a0d-4bf6-9e97-80a59a271486\") " pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:06.063199 master-0 kubenswrapper[29097]: I0312 18:42:06.063151 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8a23285d-7a0d-4bf6-9e97-80a59a271486-service-ca\") pod \"console-54dfb9c5c7-4blxh\" (UID: \"8a23285d-7a0d-4bf6-9e97-80a59a271486\") " pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:06.064220 master-0 kubenswrapper[29097]: I0312 18:42:06.063649 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8a23285d-7a0d-4bf6-9e97-80a59a271486-oauth-serving-cert\") pod \"console-54dfb9c5c7-4blxh\" (UID: \"8a23285d-7a0d-4bf6-9e97-80a59a271486\") " pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:06.064663 master-0 kubenswrapper[29097]: I0312 18:42:06.064588 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a23285d-7a0d-4bf6-9e97-80a59a271486-trusted-ca-bundle\") pod \"console-54dfb9c5c7-4blxh\" (UID: \"8a23285d-7a0d-4bf6-9e97-80a59a271486\") " pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:06.070590 master-0 kubenswrapper[29097]: I0312 18:42:06.068201 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a23285d-7a0d-4bf6-9e97-80a59a271486-console-serving-cert\") pod \"console-54dfb9c5c7-4blxh\" (UID: \"8a23285d-7a0d-4bf6-9e97-80a59a271486\") " pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 
18:42:06.070590 master-0 kubenswrapper[29097]: I0312 18:42:06.069798 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8a23285d-7a0d-4bf6-9e97-80a59a271486-console-oauth-config\") pod \"console-54dfb9c5c7-4blxh\" (UID: \"8a23285d-7a0d-4bf6-9e97-80a59a271486\") " pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:06.074910 master-0 kubenswrapper[29097]: I0312 18:42:06.074818 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-49mjh" event={"ID":"635ef9b0-f0bb-48be-a32c-99f2dc90e01f","Type":"ContainerStarted","Data":"45a19c54d20979192191b1c1f38ea689f045159ce2c25d51006d9b0447194b53"} Mar 12 18:42:06.074910 master-0 kubenswrapper[29097]: I0312 18:42:06.074888 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-49mjh" event={"ID":"635ef9b0-f0bb-48be-a32c-99f2dc90e01f","Type":"ContainerStarted","Data":"ecfc01a09684e98b131ccf0601e1ee9ae02e13176878996c77ae664b71839506"} Mar 12 18:42:06.081932 master-0 kubenswrapper[29097]: I0312 18:42:06.081847 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wwfx4" event={"ID":"cef2870a-7ede-4f27-8d32-0aaa58024d6d","Type":"ContainerStarted","Data":"9a31f326f177c0fc98ffc74b35f31fd97f1ad6b57b710ecd9305d8b79c729481"} Mar 12 18:42:06.082318 master-0 kubenswrapper[29097]: I0312 18:42:06.082280 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78jpn\" (UniqueName: \"kubernetes.io/projected/8a23285d-7a0d-4bf6-9e97-80a59a271486-kube-api-access-78jpn\") pod \"console-54dfb9c5c7-4blxh\" (UID: \"8a23285d-7a0d-4bf6-9e97-80a59a271486\") " pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:06.163751 master-0 kubenswrapper[29097]: I0312 18:42:06.163175 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:06.165627 master-0 kubenswrapper[29097]: I0312 18:42:06.165545 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9f83c98b-f576-41f7-827a-65585172c452-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-9bpjp\" (UID: \"9f83c98b-f576-41f7-827a-65585172c452\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-9bpjp" Mar 12 18:42:06.171488 master-0 kubenswrapper[29097]: I0312 18:42:06.169615 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9f83c98b-f576-41f7-827a-65585172c452-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-9bpjp\" (UID: \"9f83c98b-f576-41f7-827a-65585172c452\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-9bpjp" Mar 12 18:42:06.239504 master-0 kubenswrapper[29097]: I0312 18:42:06.239447 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-qzdrp"] Mar 12 18:42:06.245369 master-0 kubenswrapper[29097]: W0312 18:42:06.244776 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0052e280_cd09_40a9_a843_edcf5051927e.slice/crio-1bde0bf00f4463204d734e0f5eb014c9efb67a2c02a63373be9d9982c4392d57 WatchSource:0}: Error finding container 1bde0bf00f4463204d734e0f5eb014c9efb67a2c02a63373be9d9982c4392d57: Status 404 returned error can't find the container with id 1bde0bf00f4463204d734e0f5eb014c9efb67a2c02a63373be9d9982c4392d57 Mar 12 18:42:06.349537 master-0 kubenswrapper[29097]: I0312 18:42:06.349407 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-9bpjp" Mar 12 18:42:06.439754 master-0 kubenswrapper[29097]: I0312 18:42:06.439709 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-2hhd7"] Mar 12 18:42:06.444084 master-0 kubenswrapper[29097]: W0312 18:42:06.444034 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb7d78bb_1030_44b4_a4f1_b644a2ccb171.slice/crio-867cd9c3856d0f50393cd0e0aca33159fc25a275117eda9df4413a5d8500a4dd WatchSource:0}: Error finding container 867cd9c3856d0f50393cd0e0aca33159fc25a275117eda9df4413a5d8500a4dd: Status 404 returned error can't find the container with id 867cd9c3856d0f50393cd0e0aca33159fc25a275117eda9df4413a5d8500a4dd Mar 12 18:42:06.592699 master-0 kubenswrapper[29097]: W0312 18:42:06.592476 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a23285d_7a0d_4bf6_9e97_80a59a271486.slice/crio-e9c731c0e09843a733d7c8f04f1c74221424a3bcad14ad9114880c24b4b5088c WatchSource:0}: Error finding container e9c731c0e09843a733d7c8f04f1c74221424a3bcad14ad9114880c24b4b5088c: Status 404 returned error can't find the container with id e9c731c0e09843a733d7c8f04f1c74221424a3bcad14ad9114880c24b4b5088c Mar 12 18:42:06.595608 master-0 kubenswrapper[29097]: I0312 18:42:06.595570 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-54dfb9c5c7-4blxh"] Mar 12 18:42:06.775183 master-0 kubenswrapper[29097]: I0312 18:42:06.775137 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-9bpjp"] Mar 12 18:42:06.776861 master-0 kubenswrapper[29097]: W0312 18:42:06.776817 29097 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f83c98b_f576_41f7_827a_65585172c452.slice/crio-2cb59401a68bc6150eb9c3a035a8f469489eef270d9cc104ba7dc0bbe353e4ea WatchSource:0}: Error finding container 2cb59401a68bc6150eb9c3a035a8f469489eef270d9cc104ba7dc0bbe353e4ea: Status 404 returned error can't find the container with id 2cb59401a68bc6150eb9c3a035a8f469489eef270d9cc104ba7dc0bbe353e4ea Mar 12 18:42:07.092948 master-0 kubenswrapper[29097]: I0312 18:42:07.092897 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-qzdrp" event={"ID":"0052e280-cd09-40a9-a843-edcf5051927e","Type":"ContainerStarted","Data":"1bde0bf00f4463204d734e0f5eb014c9efb67a2c02a63373be9d9982c4392d57"} Mar 12 18:42:07.094361 master-0 kubenswrapper[29097]: I0312 18:42:07.094314 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-9bpjp" event={"ID":"9f83c98b-f576-41f7-827a-65585172c452","Type":"ContainerStarted","Data":"2cb59401a68bc6150eb9c3a035a8f469489eef270d9cc104ba7dc0bbe353e4ea"} Mar 12 18:42:07.099361 master-0 kubenswrapper[29097]: I0312 18:42:07.099312 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-54dfb9c5c7-4blxh" event={"ID":"8a23285d-7a0d-4bf6-9e97-80a59a271486","Type":"ContainerStarted","Data":"c17737850fa6728fbe09ce67bb59ccacd30564ca07503ad4403300b9147c5153"} Mar 12 18:42:07.099428 master-0 kubenswrapper[29097]: I0312 18:42:07.099383 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-54dfb9c5c7-4blxh" event={"ID":"8a23285d-7a0d-4bf6-9e97-80a59a271486","Type":"ContainerStarted","Data":"e9c731c0e09843a733d7c8f04f1c74221424a3bcad14ad9114880c24b4b5088c"} Mar 12 18:42:07.101791 master-0 kubenswrapper[29097]: I0312 18:42:07.101757 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2hhd7" 
event={"ID":"db7d78bb-1030-44b4-a4f1-b644a2ccb171","Type":"ContainerStarted","Data":"867cd9c3856d0f50393cd0e0aca33159fc25a275117eda9df4413a5d8500a4dd"} Mar 12 18:42:07.299635 master-0 kubenswrapper[29097]: I0312 18:42:07.299555 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-54dfb9c5c7-4blxh" podStartSLOduration=2.299536271 podStartE2EDuration="2.299536271s" podCreationTimestamp="2026-03-12 18:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:42:07.286559897 +0000 UTC m=+766.840540054" watchObservedRunningTime="2026-03-12 18:42:07.299536271 +0000 UTC m=+766.853516378" Mar 12 18:42:09.127750 master-0 kubenswrapper[29097]: I0312 18:42:09.127668 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-dh446" event={"ID":"ffedbfc7-8b89-4bd0-8306-b5aa75878d42","Type":"ContainerStarted","Data":"d275d4c003fe26c07d3b0ddd2f7368fdbdd1d814a0a9303ab513940f7f145dbb"} Mar 12 18:42:09.128233 master-0 kubenswrapper[29097]: I0312 18:42:09.127773 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-dh446" Mar 12 18:42:09.129637 master-0 kubenswrapper[29097]: I0312 18:42:09.129583 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-49mjh" event={"ID":"635ef9b0-f0bb-48be-a32c-99f2dc90e01f","Type":"ContainerStarted","Data":"9b35c32a71f9dd864b0f6d3a13c932e2b7a8bbd854cc4a943350d3279ccf210e"} Mar 12 18:42:09.130706 master-0 kubenswrapper[29097]: I0312 18:42:09.129742 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-49mjh" Mar 12 18:42:09.152852 master-0 kubenswrapper[29097]: I0312 18:42:09.152766 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-dh446" podStartSLOduration=2.740501334 
podStartE2EDuration="6.152745308s" podCreationTimestamp="2026-03-12 18:42:03 +0000 UTC" firstStartedPulling="2026-03-12 18:42:04.695651074 +0000 UTC m=+764.249631191" lastFinishedPulling="2026-03-12 18:42:08.107895068 +0000 UTC m=+767.661875165" observedRunningTime="2026-03-12 18:42:09.144928193 +0000 UTC m=+768.698908290" watchObservedRunningTime="2026-03-12 18:42:09.152745308 +0000 UTC m=+768.706725405" Mar 12 18:42:09.168466 master-0 kubenswrapper[29097]: I0312 18:42:09.168316 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-49mjh" podStartSLOduration=3.6992108630000002 podStartE2EDuration="6.168295656s" podCreationTimestamp="2026-03-12 18:42:03 +0000 UTC" firstStartedPulling="2026-03-12 18:42:05.672886386 +0000 UTC m=+765.226866483" lastFinishedPulling="2026-03-12 18:42:08.141971179 +0000 UTC m=+767.695951276" observedRunningTime="2026-03-12 18:42:09.162884361 +0000 UTC m=+768.716864458" watchObservedRunningTime="2026-03-12 18:42:09.168295656 +0000 UTC m=+768.722275753" Mar 12 18:42:13.185878 master-0 kubenswrapper[29097]: I0312 18:42:13.176841 29097 generic.go:334] "Generic (PLEG): container finished" podID="735c0a7b-f9ed-40b4-92a2-fd05a3991503" containerID="3056fcd4abf7def282d7a335f07bde36ac37c322558a46be680dcd52072ed035" exitCode=0 Mar 12 18:42:13.185878 master-0 kubenswrapper[29097]: I0312 18:42:13.176936 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rc6s5" event={"ID":"735c0a7b-f9ed-40b4-92a2-fd05a3991503","Type":"ContainerDied","Data":"3056fcd4abf7def282d7a335f07bde36ac37c322558a46be680dcd52072ed035"} Mar 12 18:42:13.194553 master-0 kubenswrapper[29097]: I0312 18:42:13.194012 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wwfx4" event={"ID":"cef2870a-7ede-4f27-8d32-0aaa58024d6d","Type":"ContainerStarted","Data":"ee6ae13ef3161035c7fea823a0c72311775ca4fcf53fea348e6a509f6d19fc2e"} Mar 12 18:42:13.194819 master-0 
kubenswrapper[29097]: I0312 18:42:13.194775 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-wwfx4" Mar 12 18:42:13.201232 master-0 kubenswrapper[29097]: I0312 18:42:13.201163 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hz8wl" event={"ID":"bed3c813-ea3d-45fb-a830-59ad0830040a","Type":"ContainerStarted","Data":"9ae8397a20d622a1574727f5fcf820eff24f91da23ca641b6cda32e0bd31591f"} Mar 12 18:42:13.201727 master-0 kubenswrapper[29097]: I0312 18:42:13.201674 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hz8wl" Mar 12 18:42:13.203614 master-0 kubenswrapper[29097]: I0312 18:42:13.203567 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2hhd7" event={"ID":"db7d78bb-1030-44b4-a4f1-b644a2ccb171","Type":"ContainerStarted","Data":"a12e7e0d96a354352c5aaf06891c3557b2808b3daebba7363232e5fbe959296d"} Mar 12 18:42:13.209699 master-0 kubenswrapper[29097]: I0312 18:42:13.209620 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-qzdrp" event={"ID":"0052e280-cd09-40a9-a843-edcf5051927e","Type":"ContainerStarted","Data":"53caecfa2a75dcb4dda34b510daa492c1b7182d18590572f59ede1ba68604d74"} Mar 12 18:42:13.209903 master-0 kubenswrapper[29097]: I0312 18:42:13.209706 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-qzdrp" event={"ID":"0052e280-cd09-40a9-a843-edcf5051927e","Type":"ContainerStarted","Data":"818f3fd54904bbe7a96e5e65487d513def349cfc78324853b393e2b4bbccbc61"} Mar 12 18:42:13.217200 master-0 kubenswrapper[29097]: I0312 18:42:13.217115 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-9bpjp" 
event={"ID":"9f83c98b-f576-41f7-827a-65585172c452","Type":"ContainerStarted","Data":"0803d12bd1ff513c2fc767c19b271ff476691b7e7195b1cb37092a608f335ff8"} Mar 12 18:42:13.217498 master-0 kubenswrapper[29097]: I0312 18:42:13.217365 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-9bpjp" Mar 12 18:42:13.239101 master-0 kubenswrapper[29097]: I0312 18:42:13.238264 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-2hhd7" podStartSLOduration=2.744503093 podStartE2EDuration="8.2382426s" podCreationTimestamp="2026-03-12 18:42:05 +0000 UTC" firstStartedPulling="2026-03-12 18:42:06.445997945 +0000 UTC m=+765.999978042" lastFinishedPulling="2026-03-12 18:42:11.939737442 +0000 UTC m=+771.493717549" observedRunningTime="2026-03-12 18:42:13.237636825 +0000 UTC m=+772.791616932" watchObservedRunningTime="2026-03-12 18:42:13.2382426 +0000 UTC m=+772.792222707" Mar 12 18:42:13.313539 master-0 kubenswrapper[29097]: I0312 18:42:13.309623 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-qzdrp" podStartSLOduration=2.6307172039999998 podStartE2EDuration="8.309594101s" podCreationTimestamp="2026-03-12 18:42:05 +0000 UTC" firstStartedPulling="2026-03-12 18:42:06.24661038 +0000 UTC m=+765.800590477" lastFinishedPulling="2026-03-12 18:42:11.925487267 +0000 UTC m=+771.479467374" observedRunningTime="2026-03-12 18:42:13.300384661 +0000 UTC m=+772.854364758" watchObservedRunningTime="2026-03-12 18:42:13.309594101 +0000 UTC m=+772.863574258" Mar 12 18:42:13.354953 master-0 kubenswrapper[29097]: I0312 18:42:13.354241 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-wwfx4" podStartSLOduration=2.395363302 podStartE2EDuration="8.354217594s" podCreationTimestamp="2026-03-12 18:42:05 +0000 UTC" 
firstStartedPulling="2026-03-12 18:42:05.997475994 +0000 UTC m=+765.551456091" lastFinishedPulling="2026-03-12 18:42:11.956330246 +0000 UTC m=+771.510310383" observedRunningTime="2026-03-12 18:42:13.344202154 +0000 UTC m=+772.898182261" watchObservedRunningTime="2026-03-12 18:42:13.354217594 +0000 UTC m=+772.908197691" Mar 12 18:42:13.402533 master-0 kubenswrapper[29097]: I0312 18:42:13.399762 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hz8wl" podStartSLOduration=2.493917211 podStartE2EDuration="10.39974008s" podCreationTimestamp="2026-03-12 18:42:03 +0000 UTC" firstStartedPulling="2026-03-12 18:42:04.03137386 +0000 UTC m=+763.585353957" lastFinishedPulling="2026-03-12 18:42:11.937196719 +0000 UTC m=+771.491176826" observedRunningTime="2026-03-12 18:42:13.380592252 +0000 UTC m=+772.934572349" watchObservedRunningTime="2026-03-12 18:42:13.39974008 +0000 UTC m=+772.953720177" Mar 12 18:42:13.424535 master-0 kubenswrapper[29097]: I0312 18:42:13.418779 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-9bpjp" podStartSLOduration=3.256069006 podStartE2EDuration="8.418759304s" podCreationTimestamp="2026-03-12 18:42:05 +0000 UTC" firstStartedPulling="2026-03-12 18:42:06.780030569 +0000 UTC m=+766.334010666" lastFinishedPulling="2026-03-12 18:42:11.942720857 +0000 UTC m=+771.496700964" observedRunningTime="2026-03-12 18:42:13.40698251 +0000 UTC m=+772.960962607" watchObservedRunningTime="2026-03-12 18:42:13.418759304 +0000 UTC m=+772.972739401" Mar 12 18:42:14.112101 master-0 kubenswrapper[29097]: I0312 18:42:14.112011 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-dh446" Mar 12 18:42:14.227711 master-0 kubenswrapper[29097]: I0312 18:42:14.227628 29097 generic.go:334] "Generic (PLEG): container finished" podID="735c0a7b-f9ed-40b4-92a2-fd05a3991503" 
containerID="d677f91946a67b0526dd9c9a47292d22affbf3eaaa9d4f6284f1db80806907cc" exitCode=0 Mar 12 18:42:14.229721 master-0 kubenswrapper[29097]: I0312 18:42:14.229642 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rc6s5" event={"ID":"735c0a7b-f9ed-40b4-92a2-fd05a3991503","Type":"ContainerDied","Data":"d677f91946a67b0526dd9c9a47292d22affbf3eaaa9d4f6284f1db80806907cc"} Mar 12 18:42:15.239449 master-0 kubenswrapper[29097]: I0312 18:42:15.239391 29097 generic.go:334] "Generic (PLEG): container finished" podID="735c0a7b-f9ed-40b4-92a2-fd05a3991503" containerID="73284c99d10e0ae590d7a177f92ae1b71ac48ca4641403e15af8883f12884d0b" exitCode=0 Mar 12 18:42:15.240709 master-0 kubenswrapper[29097]: I0312 18:42:15.240651 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rc6s5" event={"ID":"735c0a7b-f9ed-40b4-92a2-fd05a3991503","Type":"ContainerDied","Data":"73284c99d10e0ae590d7a177f92ae1b71ac48ca4641403e15af8883f12884d0b"} Mar 12 18:42:15.265286 master-0 kubenswrapper[29097]: I0312 18:42:15.265234 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-49mjh" Mar 12 18:42:16.165044 master-0 kubenswrapper[29097]: I0312 18:42:16.164589 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:16.165044 master-0 kubenswrapper[29097]: I0312 18:42:16.164633 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:16.171208 master-0 kubenswrapper[29097]: I0312 18:42:16.170632 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:16.254021 master-0 kubenswrapper[29097]: I0312 18:42:16.253911 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rc6s5" 
event={"ID":"735c0a7b-f9ed-40b4-92a2-fd05a3991503","Type":"ContainerStarted","Data":"e2fe3725d77d2e363bf79ab5022477482cf10668378cbbb1926bb55c56cdb9a7"} Mar 12 18:42:16.254635 master-0 kubenswrapper[29097]: I0312 18:42:16.254019 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rc6s5" event={"ID":"735c0a7b-f9ed-40b4-92a2-fd05a3991503","Type":"ContainerStarted","Data":"fb08ef15a6a80f4c804e8c55ae90bd2171d640aa0e0afc24d78a04d5bdd18a3b"} Mar 12 18:42:16.254635 master-0 kubenswrapper[29097]: I0312 18:42:16.254064 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rc6s5" event={"ID":"735c0a7b-f9ed-40b4-92a2-fd05a3991503","Type":"ContainerStarted","Data":"ea9b54cd7d822b7be3f00792e67666cd7fa584711a6fb819b073b7d9205a6970"} Mar 12 18:42:16.254635 master-0 kubenswrapper[29097]: I0312 18:42:16.254091 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rc6s5" event={"ID":"735c0a7b-f9ed-40b4-92a2-fd05a3991503","Type":"ContainerStarted","Data":"2edd938ed6c782b66081277897d08740a1220af9b764dcc366008dcab27525de"} Mar 12 18:42:16.254635 master-0 kubenswrapper[29097]: I0312 18:42:16.254141 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rc6s5" event={"ID":"735c0a7b-f9ed-40b4-92a2-fd05a3991503","Type":"ContainerStarted","Data":"83a2fde4dc55a6b845e7a0bcc8ade421ab01152c9c34d887e2d490b3150c1ba1"} Mar 12 18:42:16.257934 master-0 kubenswrapper[29097]: I0312 18:42:16.257885 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-54dfb9c5c7-4blxh" Mar 12 18:42:16.382319 master-0 kubenswrapper[29097]: I0312 18:42:16.382188 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-56b9f847c7-f5n7l"] Mar 12 18:42:17.265503 master-0 kubenswrapper[29097]: I0312 18:42:17.265440 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rc6s5" 
event={"ID":"735c0a7b-f9ed-40b4-92a2-fd05a3991503","Type":"ContainerStarted","Data":"3176a616c3580fc69ea5aef55a7f7cd8da7845c691b6fc1a8cdcc6f4a0125c62"} Mar 12 18:42:17.294047 master-0 kubenswrapper[29097]: I0312 18:42:17.293961 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-rc6s5" podStartSLOduration=6.539784934 podStartE2EDuration="14.293938409s" podCreationTimestamp="2026-03-12 18:42:03 +0000 UTC" firstStartedPulling="2026-03-12 18:42:04.181927516 +0000 UTC m=+763.735907623" lastFinishedPulling="2026-03-12 18:42:11.936080991 +0000 UTC m=+771.490061098" observedRunningTime="2026-03-12 18:42:17.28594947 +0000 UTC m=+776.839929617" watchObservedRunningTime="2026-03-12 18:42:17.293938409 +0000 UTC m=+776.847918536" Mar 12 18:42:18.278910 master-0 kubenswrapper[29097]: I0312 18:42:18.278824 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:19.038542 master-0 kubenswrapper[29097]: I0312 18:42:19.038453 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:19.091036 master-0 kubenswrapper[29097]: I0312 18:42:19.090979 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:20.957164 master-0 kubenswrapper[29097]: I0312 18:42:20.957063 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-wwfx4" Mar 12 18:42:21.233615 master-0 kubenswrapper[29097]: E0312 18:42:21.233454 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 
26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547 Mar 12 18:42:21.776198 master-0 kubenswrapper[29097]: I0312 18:42:21.776146 29097 scope.go:117] "RemoveContainer" containerID="27ec01e446898cdb09325e858095825a7ec9b233787886936fcd21a787d5965b" Mar 12 18:42:23.626468 master-0 kubenswrapper[29097]: I0312 18:42:23.626382 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hz8wl" Mar 12 18:42:26.359963 master-0 kubenswrapper[29097]: I0312 18:42:26.359882 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-9bpjp" Mar 12 18:42:31.163686 master-0 kubenswrapper[29097]: I0312 18:42:31.163590 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-xfrfc"] Mar 12 18:42:31.165670 master-0 kubenswrapper[29097]: I0312 18:42:31.165612 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.170485 master-0 kubenswrapper[29097]: I0312 18:42:31.170406 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert" Mar 12 18:42:31.179198 master-0 kubenswrapper[29097]: I0312 18:42:31.179135 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-xfrfc"] Mar 12 18:42:31.312658 master-0 kubenswrapper[29097]: I0312 18:42:31.312571 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/566222cd-f655-446d-bd01-59668ab1f8a9-metrics-cert\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.312951 master-0 kubenswrapper[29097]: I0312 18:42:31.312782 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wpmhr\" (UniqueName: \"kubernetes.io/projected/566222cd-f655-446d-bd01-59668ab1f8a9-kube-api-access-wpmhr\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.312951 master-0 kubenswrapper[29097]: I0312 18:42:31.312879 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-file-lock-dir\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.313061 master-0 kubenswrapper[29097]: I0312 18:42:31.312961 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-csi-plugin-dir\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.313121 master-0 kubenswrapper[29097]: I0312 18:42:31.313098 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-pod-volumes-dir\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.313237 master-0 kubenswrapper[29097]: I0312 18:42:31.313196 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-run-udev\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.313343 master-0 kubenswrapper[29097]: I0312 18:42:31.313308 29097 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-sys\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.313415 master-0 kubenswrapper[29097]: I0312 18:42:31.313355 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-registration-dir\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.313492 master-0 kubenswrapper[29097]: I0312 18:42:31.313455 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-node-plugin-dir\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.313633 master-0 kubenswrapper[29097]: I0312 18:42:31.313554 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-lvmd-config\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.313633 master-0 kubenswrapper[29097]: I0312 18:42:31.313590 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-device-dir\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.415392 master-0 kubenswrapper[29097]: I0312 18:42:31.415204 29097 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-csi-plugin-dir\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.415392 master-0 kubenswrapper[29097]: I0312 18:42:31.415312 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-pod-volumes-dir\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.415392 master-0 kubenswrapper[29097]: I0312 18:42:31.415367 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-run-udev\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.415872 master-0 kubenswrapper[29097]: I0312 18:42:31.415449 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-sys\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.415872 master-0 kubenswrapper[29097]: I0312 18:42:31.415482 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-registration-dir\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.415872 master-0 kubenswrapper[29097]: I0312 18:42:31.415602 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: 
\"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-node-plugin-dir\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.415872 master-0 kubenswrapper[29097]: I0312 18:42:31.415619 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-csi-plugin-dir\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.415872 master-0 kubenswrapper[29097]: I0312 18:42:31.415665 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-lvmd-config\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.415872 master-0 kubenswrapper[29097]: I0312 18:42:31.415723 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-sys\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.415872 master-0 kubenswrapper[29097]: I0312 18:42:31.415730 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-device-dir\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.415872 master-0 kubenswrapper[29097]: I0312 18:42:31.415785 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/566222cd-f655-446d-bd01-59668ab1f8a9-metrics-cert\") pod \"vg-manager-xfrfc\" (UID: 
\"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.415872 master-0 kubenswrapper[29097]: I0312 18:42:31.415791 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-device-dir\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.415872 master-0 kubenswrapper[29097]: I0312 18:42:31.415866 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-file-lock-dir\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.416475 master-0 kubenswrapper[29097]: I0312 18:42:31.415899 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpmhr\" (UniqueName: \"kubernetes.io/projected/566222cd-f655-446d-bd01-59668ab1f8a9-kube-api-access-wpmhr\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.416475 master-0 kubenswrapper[29097]: I0312 18:42:31.416428 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-registration-dir\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.416650 master-0 kubenswrapper[29097]: I0312 18:42:31.416464 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-pod-volumes-dir\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " 
pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.416721 master-0 kubenswrapper[29097]: I0312 18:42:31.416650 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-run-udev\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.416721 master-0 kubenswrapper[29097]: I0312 18:42:31.416654 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-node-plugin-dir\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.417007 master-0 kubenswrapper[29097]: I0312 18:42:31.416861 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-lvmd-config\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.417007 master-0 kubenswrapper[29097]: I0312 18:42:31.416966 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/566222cd-f655-446d-bd01-59668ab1f8a9-file-lock-dir\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.419657 master-0 kubenswrapper[29097]: I0312 18:42:31.419574 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/566222cd-f655-446d-bd01-59668ab1f8a9-metrics-cert\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.447781 master-0 kubenswrapper[29097]: I0312 18:42:31.447683 29097 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpmhr\" (UniqueName: \"kubernetes.io/projected/566222cd-f655-446d-bd01-59668ab1f8a9-kube-api-access-wpmhr\") pod \"vg-manager-xfrfc\" (UID: \"566222cd-f655-446d-bd01-59668ab1f8a9\") " pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.510474 master-0 kubenswrapper[29097]: I0312 18:42:31.510298 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:31.982691 master-0 kubenswrapper[29097]: I0312 18:42:31.982578 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-xfrfc"] Mar 12 18:42:31.986361 master-0 kubenswrapper[29097]: W0312 18:42:31.986323 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod566222cd_f655_446d_bd01_59668ab1f8a9.slice/crio-cb7dc5d3803deafc9cb11c0c77ed42327e16558542ba25cd0b9b9a51bcc9341d WatchSource:0}: Error finding container cb7dc5d3803deafc9cb11c0c77ed42327e16558542ba25cd0b9b9a51bcc9341d: Status 404 returned error can't find the container with id cb7dc5d3803deafc9cb11c0c77ed42327e16558542ba25cd0b9b9a51bcc9341d Mar 12 18:42:32.420877 master-0 kubenswrapper[29097]: I0312 18:42:32.420822 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-xfrfc" event={"ID":"566222cd-f655-446d-bd01-59668ab1f8a9","Type":"ContainerStarted","Data":"27f3dbbfbcec5a550175b5ac8f20ab7a7130a8d27e27ea9f315014618da75cbf"} Mar 12 18:42:32.420877 master-0 kubenswrapper[29097]: I0312 18:42:32.420877 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-xfrfc" event={"ID":"566222cd-f655-446d-bd01-59668ab1f8a9","Type":"ContainerStarted","Data":"cb7dc5d3803deafc9cb11c0c77ed42327e16558542ba25cd0b9b9a51bcc9341d"} Mar 12 18:42:34.040545 master-0 kubenswrapper[29097]: I0312 18:42:34.040271 29097 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="metallb-system/frr-k8s-rc6s5" Mar 12 18:42:34.081821 master-0 kubenswrapper[29097]: I0312 18:42:34.081613 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-xfrfc" podStartSLOduration=3.0815896289999998 podStartE2EDuration="3.081589629s" podCreationTimestamp="2026-03-12 18:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:42:32.466631566 +0000 UTC m=+792.020611663" watchObservedRunningTime="2026-03-12 18:42:34.081589629 +0000 UTC m=+793.635569726" Mar 12 18:42:34.445136 master-0 kubenswrapper[29097]: I0312 18:42:34.445068 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-xfrfc_566222cd-f655-446d-bd01-59668ab1f8a9/vg-manager/0.log" Mar 12 18:42:34.445136 master-0 kubenswrapper[29097]: I0312 18:42:34.445128 29097 generic.go:334] "Generic (PLEG): container finished" podID="566222cd-f655-446d-bd01-59668ab1f8a9" containerID="27f3dbbfbcec5a550175b5ac8f20ab7a7130a8d27e27ea9f315014618da75cbf" exitCode=1 Mar 12 18:42:34.445417 master-0 kubenswrapper[29097]: I0312 18:42:34.445160 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-xfrfc" event={"ID":"566222cd-f655-446d-bd01-59668ab1f8a9","Type":"ContainerDied","Data":"27f3dbbfbcec5a550175b5ac8f20ab7a7130a8d27e27ea9f315014618da75cbf"} Mar 12 18:42:34.445823 master-0 kubenswrapper[29097]: I0312 18:42:34.445786 29097 scope.go:117] "RemoveContainer" containerID="27f3dbbfbcec5a550175b5ac8f20ab7a7130a8d27e27ea9f315014618da75cbf" Mar 12 18:42:34.780479 master-0 kubenswrapper[29097]: I0312 18:42:34.780352 29097 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock" Mar 12 18:42:35.456247 master-0 kubenswrapper[29097]: I0312 18:42:35.456187 29097 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-xfrfc_566222cd-f655-446d-bd01-59668ab1f8a9/vg-manager/0.log" Mar 12 18:42:35.456247 master-0 kubenswrapper[29097]: I0312 18:42:35.456249 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-xfrfc" event={"ID":"566222cd-f655-446d-bd01-59668ab1f8a9","Type":"ContainerStarted","Data":"9e1554ba99157810d391a067a67e355ccf1141d7439531df1ebe819084f5839c"} Mar 12 18:42:35.724356 master-0 kubenswrapper[29097]: I0312 18:42:35.723563 29097 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-03-12T18:42:34.780398315Z","Handler":null,"Name":""} Mar 12 18:42:35.725773 master-0 kubenswrapper[29097]: I0312 18:42:35.725493 29097 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0 Mar 12 18:42:35.725773 master-0 kubenswrapper[29097]: I0312 18:42:35.725534 29097 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock Mar 12 18:42:41.438386 master-0 kubenswrapper[29097]: I0312 18:42:41.438305 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-56b9f847c7-f5n7l" podUID="5bf90e22-d433-4ade-9fa6-873b297d1f58" containerName="console" containerID="cri-o://6974ec10a99c79fe4f7c5b5a49cbef1b19e8efa3b652c51be4dfe65f3f7b7275" gracePeriod=15 Mar 12 18:42:41.511146 master-0 kubenswrapper[29097]: I0312 18:42:41.511067 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:41.514446 master-0 kubenswrapper[29097]: I0312 18:42:41.514382 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:41.526117 master-0 kubenswrapper[29097]: I0312 18:42:41.526053 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:41.527148 master-0 kubenswrapper[29097]: I0312 18:42:41.527111 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-xfrfc" Mar 12 18:42:41.922583 master-0 kubenswrapper[29097]: I0312 18:42:41.919676 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-56b9f847c7-f5n7l_5bf90e22-d433-4ade-9fa6-873b297d1f58/console/0.log" Mar 12 18:42:41.922583 master-0 kubenswrapper[29097]: I0312 18:42:41.919753 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:42:41.967705 master-0 kubenswrapper[29097]: I0312 18:42:41.967580 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5bf90e22-d433-4ade-9fa6-873b297d1f58-console-serving-cert\") pod \"5bf90e22-d433-4ade-9fa6-873b297d1f58\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " Mar 12 18:42:41.967705 master-0 kubenswrapper[29097]: I0312 18:42:41.967635 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-oauth-serving-cert\") pod \"5bf90e22-d433-4ade-9fa6-873b297d1f58\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " Mar 12 18:42:41.967705 master-0 kubenswrapper[29097]: I0312 18:42:41.967675 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5bf90e22-d433-4ade-9fa6-873b297d1f58-console-oauth-config\") pod \"5bf90e22-d433-4ade-9fa6-873b297d1f58\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " Mar 
12 18:42:41.968013 master-0 kubenswrapper[29097]: I0312 18:42:41.967799 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-service-ca\") pod \"5bf90e22-d433-4ade-9fa6-873b297d1f58\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " Mar 12 18:42:41.968013 master-0 kubenswrapper[29097]: I0312 18:42:41.967885 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-console-config\") pod \"5bf90e22-d433-4ade-9fa6-873b297d1f58\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " Mar 12 18:42:41.968013 master-0 kubenswrapper[29097]: I0312 18:42:41.967944 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqng8\" (UniqueName: \"kubernetes.io/projected/5bf90e22-d433-4ade-9fa6-873b297d1f58-kube-api-access-gqng8\") pod \"5bf90e22-d433-4ade-9fa6-873b297d1f58\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " Mar 12 18:42:41.968013 master-0 kubenswrapper[29097]: I0312 18:42:41.967976 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-trusted-ca-bundle\") pod \"5bf90e22-d433-4ade-9fa6-873b297d1f58\" (UID: \"5bf90e22-d433-4ade-9fa6-873b297d1f58\") " Mar 12 18:42:41.968645 master-0 kubenswrapper[29097]: I0312 18:42:41.968482 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-service-ca" (OuterVolumeSpecName: "service-ca") pod "5bf90e22-d433-4ade-9fa6-873b297d1f58" (UID: "5bf90e22-d433-4ade-9fa6-873b297d1f58"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:42:41.968708 master-0 kubenswrapper[29097]: I0312 18:42:41.968657 29097 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 18:42:41.968708 master-0 kubenswrapper[29097]: I0312 18:42:41.968662 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-console-config" (OuterVolumeSpecName: "console-config") pod "5bf90e22-d433-4ade-9fa6-873b297d1f58" (UID: "5bf90e22-d433-4ade-9fa6-873b297d1f58"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:42:41.969073 master-0 kubenswrapper[29097]: I0312 18:42:41.969002 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5bf90e22-d433-4ade-9fa6-873b297d1f58" (UID: "5bf90e22-d433-4ade-9fa6-873b297d1f58"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:42:41.969379 master-0 kubenswrapper[29097]: I0312 18:42:41.969291 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5bf90e22-d433-4ade-9fa6-873b297d1f58" (UID: "5bf90e22-d433-4ade-9fa6-873b297d1f58"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:42:41.970366 master-0 kubenswrapper[29097]: I0312 18:42:41.970334 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bf90e22-d433-4ade-9fa6-873b297d1f58-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5bf90e22-d433-4ade-9fa6-873b297d1f58" (UID: "5bf90e22-d433-4ade-9fa6-873b297d1f58"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:42:41.971281 master-0 kubenswrapper[29097]: I0312 18:42:41.971214 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bf90e22-d433-4ade-9fa6-873b297d1f58-kube-api-access-gqng8" (OuterVolumeSpecName: "kube-api-access-gqng8") pod "5bf90e22-d433-4ade-9fa6-873b297d1f58" (UID: "5bf90e22-d433-4ade-9fa6-873b297d1f58"). InnerVolumeSpecName "kube-api-access-gqng8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:42:41.972959 master-0 kubenswrapper[29097]: I0312 18:42:41.972910 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bf90e22-d433-4ade-9fa6-873b297d1f58-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5bf90e22-d433-4ade-9fa6-873b297d1f58" (UID: "5bf90e22-d433-4ade-9fa6-873b297d1f58"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:42:42.069825 master-0 kubenswrapper[29097]: I0312 18:42:42.069747 29097 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5bf90e22-d433-4ade-9fa6-873b297d1f58-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 18:42:42.069825 master-0 kubenswrapper[29097]: I0312 18:42:42.069820 29097 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 18:42:42.069825 master-0 kubenswrapper[29097]: I0312 18:42:42.069836 29097 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5bf90e22-d433-4ade-9fa6-873b297d1f58-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:42:42.070102 master-0 kubenswrapper[29097]: I0312 18:42:42.069852 29097 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-console-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:42:42.070102 master-0 kubenswrapper[29097]: I0312 18:42:42.069864 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqng8\" (UniqueName: \"kubernetes.io/projected/5bf90e22-d433-4ade-9fa6-873b297d1f58-kube-api-access-gqng8\") on node \"master-0\" DevicePath \"\"" Mar 12 18:42:42.070102 master-0 kubenswrapper[29097]: I0312 18:42:42.069901 29097 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bf90e22-d433-4ade-9fa6-873b297d1f58-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:42:42.537357 master-0 kubenswrapper[29097]: I0312 18:42:42.536993 29097 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-56b9f847c7-f5n7l_5bf90e22-d433-4ade-9fa6-873b297d1f58/console/0.log" Mar 12 18:42:42.537357 master-0 kubenswrapper[29097]: I0312 18:42:42.537066 29097 generic.go:334] "Generic (PLEG): container finished" podID="5bf90e22-d433-4ade-9fa6-873b297d1f58" containerID="6974ec10a99c79fe4f7c5b5a49cbef1b19e8efa3b652c51be4dfe65f3f7b7275" exitCode=2 Mar 12 18:42:42.537357 master-0 kubenswrapper[29097]: I0312 18:42:42.537129 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-56b9f847c7-f5n7l" Mar 12 18:42:42.537357 master-0 kubenswrapper[29097]: I0312 18:42:42.537150 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56b9f847c7-f5n7l" event={"ID":"5bf90e22-d433-4ade-9fa6-873b297d1f58","Type":"ContainerDied","Data":"6974ec10a99c79fe4f7c5b5a49cbef1b19e8efa3b652c51be4dfe65f3f7b7275"} Mar 12 18:42:42.537357 master-0 kubenswrapper[29097]: I0312 18:42:42.537234 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56b9f847c7-f5n7l" event={"ID":"5bf90e22-d433-4ade-9fa6-873b297d1f58","Type":"ContainerDied","Data":"e3dc1d30bf9e5b352e5a203ce9b222a11dfecf1fa18bed1f18dbd9c41ff07225"} Mar 12 18:42:42.537357 master-0 kubenswrapper[29097]: I0312 18:42:42.537268 29097 scope.go:117] "RemoveContainer" containerID="6974ec10a99c79fe4f7c5b5a49cbef1b19e8efa3b652c51be4dfe65f3f7b7275" Mar 12 18:42:42.565146 master-0 kubenswrapper[29097]: I0312 18:42:42.565087 29097 scope.go:117] "RemoveContainer" containerID="6974ec10a99c79fe4f7c5b5a49cbef1b19e8efa3b652c51be4dfe65f3f7b7275" Mar 12 18:42:42.565691 master-0 kubenswrapper[29097]: E0312 18:42:42.565634 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6974ec10a99c79fe4f7c5b5a49cbef1b19e8efa3b652c51be4dfe65f3f7b7275\": container with ID starting with 
6974ec10a99c79fe4f7c5b5a49cbef1b19e8efa3b652c51be4dfe65f3f7b7275 not found: ID does not exist" containerID="6974ec10a99c79fe4f7c5b5a49cbef1b19e8efa3b652c51be4dfe65f3f7b7275" Mar 12 18:42:42.565766 master-0 kubenswrapper[29097]: I0312 18:42:42.565699 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6974ec10a99c79fe4f7c5b5a49cbef1b19e8efa3b652c51be4dfe65f3f7b7275"} err="failed to get container status \"6974ec10a99c79fe4f7c5b5a49cbef1b19e8efa3b652c51be4dfe65f3f7b7275\": rpc error: code = NotFound desc = could not find container \"6974ec10a99c79fe4f7c5b5a49cbef1b19e8efa3b652c51be4dfe65f3f7b7275\": container with ID starting with 6974ec10a99c79fe4f7c5b5a49cbef1b19e8efa3b652c51be4dfe65f3f7b7275 not found: ID does not exist" Mar 12 18:42:42.584318 master-0 kubenswrapper[29097]: I0312 18:42:42.584246 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-56b9f847c7-f5n7l"] Mar 12 18:42:42.592008 master-0 kubenswrapper[29097]: I0312 18:42:42.591955 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-56b9f847c7-f5n7l"] Mar 12 18:42:42.732040 master-0 kubenswrapper[29097]: I0312 18:42:42.731971 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bf90e22-d433-4ade-9fa6-873b297d1f58" path="/var/lib/kubelet/pods/5bf90e22-d433-4ade-9fa6-873b297d1f58/volumes" Mar 12 18:42:43.689673 master-0 kubenswrapper[29097]: I0312 18:42:43.689618 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ngrvj"] Mar 12 18:42:43.690176 master-0 kubenswrapper[29097]: E0312 18:42:43.690029 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bf90e22-d433-4ade-9fa6-873b297d1f58" containerName="console" Mar 12 18:42:43.690176 master-0 kubenswrapper[29097]: I0312 18:42:43.690045 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bf90e22-d433-4ade-9fa6-873b297d1f58" 
containerName="console" Mar 12 18:42:43.690255 master-0 kubenswrapper[29097]: I0312 18:42:43.690234 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bf90e22-d433-4ade-9fa6-873b297d1f58" containerName="console" Mar 12 18:42:43.691021 master-0 kubenswrapper[29097]: I0312 18:42:43.690997 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ngrvj" Mar 12 18:42:43.693161 master-0 kubenswrapper[29097]: I0312 18:42:43.693132 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 12 18:42:43.699459 master-0 kubenswrapper[29097]: I0312 18:42:43.699417 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 12 18:42:43.712200 master-0 kubenswrapper[29097]: I0312 18:42:43.712096 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ngrvj"] Mar 12 18:42:43.798568 master-0 kubenswrapper[29097]: I0312 18:42:43.798353 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w226z\" (UniqueName: \"kubernetes.io/projected/ae29e21b-e7cc-4be1-877a-9d659cdf4692-kube-api-access-w226z\") pod \"openstack-operator-index-ngrvj\" (UID: \"ae29e21b-e7cc-4be1-877a-9d659cdf4692\") " pod="openstack-operators/openstack-operator-index-ngrvj" Mar 12 18:42:43.900678 master-0 kubenswrapper[29097]: I0312 18:42:43.900625 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w226z\" (UniqueName: \"kubernetes.io/projected/ae29e21b-e7cc-4be1-877a-9d659cdf4692-kube-api-access-w226z\") pod \"openstack-operator-index-ngrvj\" (UID: \"ae29e21b-e7cc-4be1-877a-9d659cdf4692\") " pod="openstack-operators/openstack-operator-index-ngrvj" Mar 12 18:42:43.929610 master-0 kubenswrapper[29097]: I0312 18:42:43.929164 29097 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w226z\" (UniqueName: \"kubernetes.io/projected/ae29e21b-e7cc-4be1-877a-9d659cdf4692-kube-api-access-w226z\") pod \"openstack-operator-index-ngrvj\" (UID: \"ae29e21b-e7cc-4be1-877a-9d659cdf4692\") " pod="openstack-operators/openstack-operator-index-ngrvj" Mar 12 18:42:44.015990 master-0 kubenswrapper[29097]: I0312 18:42:44.015858 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ngrvj" Mar 12 18:42:44.463475 master-0 kubenswrapper[29097]: W0312 18:42:44.463420 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae29e21b_e7cc_4be1_877a_9d659cdf4692.slice/crio-7d2f1d6d225f9e1159c40c53b46625ab6cf36c003d69619300156c1ffec36996 WatchSource:0}: Error finding container 7d2f1d6d225f9e1159c40c53b46625ab6cf36c003d69619300156c1ffec36996: Status 404 returned error can't find the container with id 7d2f1d6d225f9e1159c40c53b46625ab6cf36c003d69619300156c1ffec36996 Mar 12 18:42:44.463980 master-0 kubenswrapper[29097]: I0312 18:42:44.463920 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ngrvj"] Mar 12 18:42:44.558859 master-0 kubenswrapper[29097]: I0312 18:42:44.558800 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ngrvj" event={"ID":"ae29e21b-e7cc-4be1-877a-9d659cdf4692","Type":"ContainerStarted","Data":"7d2f1d6d225f9e1159c40c53b46625ab6cf36c003d69619300156c1ffec36996"} Mar 12 18:42:46.581943 master-0 kubenswrapper[29097]: I0312 18:42:46.581830 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ngrvj" event={"ID":"ae29e21b-e7cc-4be1-877a-9d659cdf4692","Type":"ContainerStarted","Data":"1eafdbb17b0b108a9a80101d069378ab887f3b7a5e1f059dd8c553888bbda279"} Mar 12 18:42:46.607153 
master-0 kubenswrapper[29097]: I0312 18:42:46.607050 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-ngrvj" podStartSLOduration=2.418225188 podStartE2EDuration="3.607025379s" podCreationTimestamp="2026-03-12 18:42:43 +0000 UTC" firstStartedPulling="2026-03-12 18:42:44.465708653 +0000 UTC m=+804.019688740" lastFinishedPulling="2026-03-12 18:42:45.654508834 +0000 UTC m=+805.208488931" observedRunningTime="2026-03-12 18:42:46.605762168 +0000 UTC m=+806.159742305" watchObservedRunningTime="2026-03-12 18:42:46.607025379 +0000 UTC m=+806.161005506" Mar 12 18:42:54.016574 master-0 kubenswrapper[29097]: I0312 18:42:54.016368 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-ngrvj" Mar 12 18:42:54.016574 master-0 kubenswrapper[29097]: I0312 18:42:54.016441 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-ngrvj" Mar 12 18:42:54.045037 master-0 kubenswrapper[29097]: I0312 18:42:54.044959 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-ngrvj" Mar 12 18:42:54.695976 master-0 kubenswrapper[29097]: I0312 18:42:54.695888 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-ngrvj" Mar 12 18:43:02.295614 master-0 kubenswrapper[29097]: I0312 18:43:02.295507 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm"] Mar 12 18:43:02.300559 master-0 kubenswrapper[29097]: I0312 18:43:02.298444 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm" Mar 12 18:43:02.310364 master-0 kubenswrapper[29097]: I0312 18:43:02.310303 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm"] Mar 12 18:43:02.348546 master-0 kubenswrapper[29097]: I0312 18:43:02.337193 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e2cf25c5-40aa-4c44-a2f5-913a71748db0-bundle\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm\" (UID: \"e2cf25c5-40aa-4c44-a2f5-913a71748db0\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm" Mar 12 18:43:02.348546 master-0 kubenswrapper[29097]: I0312 18:43:02.337270 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwhgq\" (UniqueName: \"kubernetes.io/projected/e2cf25c5-40aa-4c44-a2f5-913a71748db0-kube-api-access-gwhgq\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm\" (UID: \"e2cf25c5-40aa-4c44-a2f5-913a71748db0\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm" Mar 12 18:43:02.348546 master-0 kubenswrapper[29097]: I0312 18:43:02.337431 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e2cf25c5-40aa-4c44-a2f5-913a71748db0-util\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm\" (UID: \"e2cf25c5-40aa-4c44-a2f5-913a71748db0\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm" Mar 12 18:43:02.439571 master-0 kubenswrapper[29097]: I0312 18:43:02.439499 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/e2cf25c5-40aa-4c44-a2f5-913a71748db0-bundle\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm\" (UID: \"e2cf25c5-40aa-4c44-a2f5-913a71748db0\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm" Mar 12 18:43:02.439571 master-0 kubenswrapper[29097]: I0312 18:43:02.439592 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwhgq\" (UniqueName: \"kubernetes.io/projected/e2cf25c5-40aa-4c44-a2f5-913a71748db0-kube-api-access-gwhgq\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm\" (UID: \"e2cf25c5-40aa-4c44-a2f5-913a71748db0\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm" Mar 12 18:43:02.439865 master-0 kubenswrapper[29097]: I0312 18:43:02.439629 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e2cf25c5-40aa-4c44-a2f5-913a71748db0-util\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm\" (UID: \"e2cf25c5-40aa-4c44-a2f5-913a71748db0\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm" Mar 12 18:43:02.440237 master-0 kubenswrapper[29097]: I0312 18:43:02.440192 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e2cf25c5-40aa-4c44-a2f5-913a71748db0-util\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm\" (UID: \"e2cf25c5-40aa-4c44-a2f5-913a71748db0\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm" Mar 12 18:43:02.440237 master-0 kubenswrapper[29097]: I0312 18:43:02.440203 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e2cf25c5-40aa-4c44-a2f5-913a71748db0-bundle\") pod 
\"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm\" (UID: \"e2cf25c5-40aa-4c44-a2f5-913a71748db0\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm" Mar 12 18:43:02.456521 master-0 kubenswrapper[29097]: I0312 18:43:02.456468 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwhgq\" (UniqueName: \"kubernetes.io/projected/e2cf25c5-40aa-4c44-a2f5-913a71748db0-kube-api-access-gwhgq\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm\" (UID: \"e2cf25c5-40aa-4c44-a2f5-913a71748db0\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm" Mar 12 18:43:02.629655 master-0 kubenswrapper[29097]: I0312 18:43:02.629485 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm" Mar 12 18:43:03.105234 master-0 kubenswrapper[29097]: I0312 18:43:03.105165 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm"] Mar 12 18:43:03.119109 master-0 kubenswrapper[29097]: W0312 18:43:03.119048 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2cf25c5_40aa_4c44_a2f5_913a71748db0.slice/crio-4020c4d3764fb6ceae61cc055c4be0cca04ba5330d570cd38c2e940d0e0797d4 WatchSource:0}: Error finding container 4020c4d3764fb6ceae61cc055c4be0cca04ba5330d570cd38c2e940d0e0797d4: Status 404 returned error can't find the container with id 4020c4d3764fb6ceae61cc055c4be0cca04ba5330d570cd38c2e940d0e0797d4 Mar 12 18:43:03.773394 master-0 kubenswrapper[29097]: I0312 18:43:03.773252 29097 generic.go:334] "Generic (PLEG): container finished" podID="e2cf25c5-40aa-4c44-a2f5-913a71748db0" containerID="7cdd733253425e1d27585686ae44190471d6c17bc8f73af4d02afb09645bc60d" exitCode=0 Mar 12 
18:43:03.773394 master-0 kubenswrapper[29097]: I0312 18:43:03.773318 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm" event={"ID":"e2cf25c5-40aa-4c44-a2f5-913a71748db0","Type":"ContainerDied","Data":"7cdd733253425e1d27585686ae44190471d6c17bc8f73af4d02afb09645bc60d"} Mar 12 18:43:03.773394 master-0 kubenswrapper[29097]: I0312 18:43:03.773396 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm" event={"ID":"e2cf25c5-40aa-4c44-a2f5-913a71748db0","Type":"ContainerStarted","Data":"4020c4d3764fb6ceae61cc055c4be0cca04ba5330d570cd38c2e940d0e0797d4"} Mar 12 18:43:04.787161 master-0 kubenswrapper[29097]: I0312 18:43:04.787036 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm" event={"ID":"e2cf25c5-40aa-4c44-a2f5-913a71748db0","Type":"ContainerStarted","Data":"4f900ac3853cc1d1c0c9adf383812561d09ea6a107fee509a460dbc3ba0678bc"} Mar 12 18:43:05.796785 master-0 kubenswrapper[29097]: I0312 18:43:05.796716 29097 generic.go:334] "Generic (PLEG): container finished" podID="e2cf25c5-40aa-4c44-a2f5-913a71748db0" containerID="4f900ac3853cc1d1c0c9adf383812561d09ea6a107fee509a460dbc3ba0678bc" exitCode=0 Mar 12 18:43:05.796785 master-0 kubenswrapper[29097]: I0312 18:43:05.796758 29097 generic.go:334] "Generic (PLEG): container finished" podID="e2cf25c5-40aa-4c44-a2f5-913a71748db0" containerID="328c5428e67887526265a836e6026db491518bd14ea4669686e060f8cf48d099" exitCode=0 Mar 12 18:43:05.797850 master-0 kubenswrapper[29097]: I0312 18:43:05.796780 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm" 
event={"ID":"e2cf25c5-40aa-4c44-a2f5-913a71748db0","Type":"ContainerDied","Data":"4f900ac3853cc1d1c0c9adf383812561d09ea6a107fee509a460dbc3ba0678bc"} Mar 12 18:43:05.797850 master-0 kubenswrapper[29097]: I0312 18:43:05.796847 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm" event={"ID":"e2cf25c5-40aa-4c44-a2f5-913a71748db0","Type":"ContainerDied","Data":"328c5428e67887526265a836e6026db491518bd14ea4669686e060f8cf48d099"} Mar 12 18:43:07.159548 master-0 kubenswrapper[29097]: I0312 18:43:07.159474 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm" Mar 12 18:43:07.233469 master-0 kubenswrapper[29097]: I0312 18:43:07.233347 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e2cf25c5-40aa-4c44-a2f5-913a71748db0-bundle\") pod \"e2cf25c5-40aa-4c44-a2f5-913a71748db0\" (UID: \"e2cf25c5-40aa-4c44-a2f5-913a71748db0\") " Mar 12 18:43:07.233832 master-0 kubenswrapper[29097]: I0312 18:43:07.233618 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwhgq\" (UniqueName: \"kubernetes.io/projected/e2cf25c5-40aa-4c44-a2f5-913a71748db0-kube-api-access-gwhgq\") pod \"e2cf25c5-40aa-4c44-a2f5-913a71748db0\" (UID: \"e2cf25c5-40aa-4c44-a2f5-913a71748db0\") " Mar 12 18:43:07.233832 master-0 kubenswrapper[29097]: I0312 18:43:07.233708 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e2cf25c5-40aa-4c44-a2f5-913a71748db0-util\") pod \"e2cf25c5-40aa-4c44-a2f5-913a71748db0\" (UID: \"e2cf25c5-40aa-4c44-a2f5-913a71748db0\") " Mar 12 18:43:07.234737 master-0 kubenswrapper[29097]: I0312 18:43:07.234661 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/e2cf25c5-40aa-4c44-a2f5-913a71748db0-bundle" (OuterVolumeSpecName: "bundle") pod "e2cf25c5-40aa-4c44-a2f5-913a71748db0" (UID: "e2cf25c5-40aa-4c44-a2f5-913a71748db0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:43:07.239240 master-0 kubenswrapper[29097]: I0312 18:43:07.239146 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2cf25c5-40aa-4c44-a2f5-913a71748db0-kube-api-access-gwhgq" (OuterVolumeSpecName: "kube-api-access-gwhgq") pod "e2cf25c5-40aa-4c44-a2f5-913a71748db0" (UID: "e2cf25c5-40aa-4c44-a2f5-913a71748db0"). InnerVolumeSpecName "kube-api-access-gwhgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:43:07.247980 master-0 kubenswrapper[29097]: I0312 18:43:07.247927 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2cf25c5-40aa-4c44-a2f5-913a71748db0-util" (OuterVolumeSpecName: "util") pod "e2cf25c5-40aa-4c44-a2f5-913a71748db0" (UID: "e2cf25c5-40aa-4c44-a2f5-913a71748db0"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:43:07.335981 master-0 kubenswrapper[29097]: I0312 18:43:07.335875 29097 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e2cf25c5-40aa-4c44-a2f5-913a71748db0-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:43:07.335981 master-0 kubenswrapper[29097]: I0312 18:43:07.335949 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwhgq\" (UniqueName: \"kubernetes.io/projected/e2cf25c5-40aa-4c44-a2f5-913a71748db0-kube-api-access-gwhgq\") on node \"master-0\" DevicePath \"\"" Mar 12 18:43:07.335981 master-0 kubenswrapper[29097]: I0312 18:43:07.335978 29097 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e2cf25c5-40aa-4c44-a2f5-913a71748db0-util\") on node \"master-0\" DevicePath \"\"" Mar 12 18:43:07.822687 master-0 kubenswrapper[29097]: I0312 18:43:07.822626 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm" event={"ID":"e2cf25c5-40aa-4c44-a2f5-913a71748db0","Type":"ContainerDied","Data":"4020c4d3764fb6ceae61cc055c4be0cca04ba5330d570cd38c2e940d0e0797d4"} Mar 12 18:43:07.822687 master-0 kubenswrapper[29097]: I0312 18:43:07.822679 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4020c4d3764fb6ceae61cc055c4be0cca04ba5330d570cd38c2e940d0e0797d4" Mar 12 18:43:07.823021 master-0 kubenswrapper[29097]: I0312 18:43:07.822729 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477qrxnm" Mar 12 18:43:14.897408 master-0 kubenswrapper[29097]: I0312 18:43:14.897341 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-65b9994cf8-rhqb5"] Mar 12 18:43:14.898063 master-0 kubenswrapper[29097]: E0312 18:43:14.897685 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2cf25c5-40aa-4c44-a2f5-913a71748db0" containerName="extract" Mar 12 18:43:14.898063 master-0 kubenswrapper[29097]: I0312 18:43:14.897697 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2cf25c5-40aa-4c44-a2f5-913a71748db0" containerName="extract" Mar 12 18:43:14.898063 master-0 kubenswrapper[29097]: E0312 18:43:14.897718 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2cf25c5-40aa-4c44-a2f5-913a71748db0" containerName="util" Mar 12 18:43:14.898063 master-0 kubenswrapper[29097]: I0312 18:43:14.897725 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2cf25c5-40aa-4c44-a2f5-913a71748db0" containerName="util" Mar 12 18:43:14.898063 master-0 kubenswrapper[29097]: E0312 18:43:14.897760 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2cf25c5-40aa-4c44-a2f5-913a71748db0" containerName="pull" Mar 12 18:43:14.898063 master-0 kubenswrapper[29097]: I0312 18:43:14.897770 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2cf25c5-40aa-4c44-a2f5-913a71748db0" containerName="pull" Mar 12 18:43:14.898063 master-0 kubenswrapper[29097]: I0312 18:43:14.897923 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2cf25c5-40aa-4c44-a2f5-913a71748db0" containerName="extract" Mar 12 18:43:14.898438 master-0 kubenswrapper[29097]: I0312 18:43:14.898415 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-rhqb5" Mar 12 18:43:14.932894 master-0 kubenswrapper[29097]: I0312 18:43:14.932836 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-65b9994cf8-rhqb5"] Mar 12 18:43:14.969406 master-0 kubenswrapper[29097]: I0312 18:43:14.969338 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b2l9\" (UniqueName: \"kubernetes.io/projected/0980307a-d7a3-40ef-80dd-f6a522fcd0d1-kube-api-access-2b2l9\") pod \"openstack-operator-controller-init-65b9994cf8-rhqb5\" (UID: \"0980307a-d7a3-40ef-80dd-f6a522fcd0d1\") " pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-rhqb5" Mar 12 18:43:15.071365 master-0 kubenswrapper[29097]: I0312 18:43:15.071306 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b2l9\" (UniqueName: \"kubernetes.io/projected/0980307a-d7a3-40ef-80dd-f6a522fcd0d1-kube-api-access-2b2l9\") pod \"openstack-operator-controller-init-65b9994cf8-rhqb5\" (UID: \"0980307a-d7a3-40ef-80dd-f6a522fcd0d1\") " pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-rhqb5" Mar 12 18:43:15.095263 master-0 kubenswrapper[29097]: I0312 18:43:15.095217 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b2l9\" (UniqueName: \"kubernetes.io/projected/0980307a-d7a3-40ef-80dd-f6a522fcd0d1-kube-api-access-2b2l9\") pod \"openstack-operator-controller-init-65b9994cf8-rhqb5\" (UID: \"0980307a-d7a3-40ef-80dd-f6a522fcd0d1\") " pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-rhqb5" Mar 12 18:43:15.215027 master-0 kubenswrapper[29097]: I0312 18:43:15.214862 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-rhqb5" Mar 12 18:43:15.692632 master-0 kubenswrapper[29097]: I0312 18:43:15.686905 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-65b9994cf8-rhqb5"] Mar 12 18:43:15.692632 master-0 kubenswrapper[29097]: I0312 18:43:15.692212 29097 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 18:43:15.902826 master-0 kubenswrapper[29097]: I0312 18:43:15.902696 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-rhqb5" event={"ID":"0980307a-d7a3-40ef-80dd-f6a522fcd0d1","Type":"ContainerStarted","Data":"dad44cf390bb5cc2be3430c9f5f93522af60254edb748597a3e6ae37f13f9d98"} Mar 12 18:43:20.949817 master-0 kubenswrapper[29097]: I0312 18:43:20.949657 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-rhqb5" event={"ID":"0980307a-d7a3-40ef-80dd-f6a522fcd0d1","Type":"ContainerStarted","Data":"326f38d392993a229b3302f56ba3c8aad6dd9d1288951d95dec8f3f9fc50db8a"} Mar 12 18:43:20.950931 master-0 kubenswrapper[29097]: I0312 18:43:20.950881 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-rhqb5" Mar 12 18:43:20.994946 master-0 kubenswrapper[29097]: I0312 18:43:20.994833 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-rhqb5" podStartSLOduration=2.699133853 podStartE2EDuration="6.994808301s" podCreationTimestamp="2026-03-12 18:43:14 +0000 UTC" firstStartedPulling="2026-03-12 18:43:15.692120458 +0000 UTC m=+835.246100595" lastFinishedPulling="2026-03-12 18:43:19.987794946 +0000 UTC m=+839.541775043" observedRunningTime="2026-03-12 18:43:20.989099408 
+0000 UTC m=+840.543079525" watchObservedRunningTime="2026-03-12 18:43:20.994808301 +0000 UTC m=+840.548788408" Mar 12 18:43:21.198959 master-0 kubenswrapper[29097]: E0312 18:43:21.198908 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547 Mar 12 18:43:25.217916 master-0 kubenswrapper[29097]: I0312 18:43:25.217871 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-rhqb5" Mar 12 18:43:46.067638 master-0 kubenswrapper[29097]: I0312 18:43:46.067567 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-2j959"] Mar 12 18:43:46.076442 master-0 kubenswrapper[29097]: I0312 18:43:46.069038 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-2j959"
Mar 12 18:43:46.089826 master-0 kubenswrapper[29097]: I0312 18:43:46.089763 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sms6\" (UniqueName: \"kubernetes.io/projected/a9aded67-7bb9-4df6-8334-ddffef99aa7f-kube-api-access-8sms6\") pod \"barbican-operator-controller-manager-677bd678f7-2j959\" (UID: \"a9aded67-7bb9-4df6-8334-ddffef99aa7f\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-2j959"
Mar 12 18:43:46.096920 master-0 kubenswrapper[29097]: I0312 18:43:46.095102 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-qkhtq"]
Mar 12 18:43:46.096920 master-0 kubenswrapper[29097]: I0312 18:43:46.096212 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-qkhtq"
Mar 12 18:43:46.108961 master-0 kubenswrapper[29097]: I0312 18:43:46.108927 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-2j959"]
Mar 12 18:43:46.116047 master-0 kubenswrapper[29097]: I0312 18:43:46.115987 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-qkhtq"]
Mar 12 18:43:46.161264 master-0 kubenswrapper[29097]: I0312 18:43:46.161212 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-7pdcm"]
Mar 12 18:43:46.163164 master-0 kubenswrapper[29097]: I0312 18:43:46.163147 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-7pdcm"
Mar 12 18:43:46.176488 master-0 kubenswrapper[29097]: I0312 18:43:46.176404 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-44fbq"]
Mar 12 18:43:46.179409 master-0 kubenswrapper[29097]: I0312 18:43:46.179362 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-44fbq"
Mar 12 18:43:46.191826 master-0 kubenswrapper[29097]: I0312 18:43:46.191709 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66tgt\" (UniqueName: \"kubernetes.io/projected/f7180d7f-df3c-48a8-b7f2-88c910fb2917-kube-api-access-66tgt\") pod \"cinder-operator-controller-manager-984cd4dcf-qkhtq\" (UID: \"f7180d7f-df3c-48a8-b7f2-88c910fb2917\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-qkhtq"
Mar 12 18:43:46.192033 master-0 kubenswrapper[29097]: I0312 18:43:46.191814 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw5mk\" (UniqueName: \"kubernetes.io/projected/13c9cdbc-00fa-4ca0-b136-46f9df179f76-kube-api-access-jw5mk\") pod \"glance-operator-controller-manager-5964f64c48-44fbq\" (UID: \"13c9cdbc-00fa-4ca0-b136-46f9df179f76\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-44fbq"
Mar 12 18:43:46.192033 master-0 kubenswrapper[29097]: I0312 18:43:46.191882 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpr79\" (UniqueName: \"kubernetes.io/projected/180f0122-4a3b-40bd-b8e4-c81368e5be7f-kube-api-access-dpr79\") pod \"designate-operator-controller-manager-66d56f6ff4-7pdcm\" (UID: \"180f0122-4a3b-40bd-b8e4-c81368e5be7f\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-7pdcm"
Mar 12 18:43:46.192033 master-0 kubenswrapper[29097]: I0312 18:43:46.191938 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sms6\" (UniqueName: \"kubernetes.io/projected/a9aded67-7bb9-4df6-8334-ddffef99aa7f-kube-api-access-8sms6\") pod \"barbican-operator-controller-manager-677bd678f7-2j959\" (UID: \"a9aded67-7bb9-4df6-8334-ddffef99aa7f\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-2j959"
Mar 12 18:43:46.204261 master-0 kubenswrapper[29097]: I0312 18:43:46.204205 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-7pdcm"]
Mar 12 18:43:46.220050 master-0 kubenswrapper[29097]: I0312 18:43:46.220004 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-46wm7"]
Mar 12 18:43:46.235811 master-0 kubenswrapper[29097]: I0312 18:43:46.221818 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sms6\" (UniqueName: \"kubernetes.io/projected/a9aded67-7bb9-4df6-8334-ddffef99aa7f-kube-api-access-8sms6\") pod \"barbican-operator-controller-manager-677bd678f7-2j959\" (UID: \"a9aded67-7bb9-4df6-8334-ddffef99aa7f\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-2j959"
Mar 12 18:43:46.237234 master-0 kubenswrapper[29097]: I0312 18:43:46.237163 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-46wm7"
Mar 12 18:43:46.270301 master-0 kubenswrapper[29097]: I0312 18:43:46.270266 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-44fbq"]
Mar 12 18:43:46.312086 master-0 kubenswrapper[29097]: I0312 18:43:46.312031 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw5mk\" (UniqueName: \"kubernetes.io/projected/13c9cdbc-00fa-4ca0-b136-46f9df179f76-kube-api-access-jw5mk\") pod \"glance-operator-controller-manager-5964f64c48-44fbq\" (UID: \"13c9cdbc-00fa-4ca0-b136-46f9df179f76\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-44fbq"
Mar 12 18:43:46.312280 master-0 kubenswrapper[29097]: I0312 18:43:46.312120 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpr79\" (UniqueName: \"kubernetes.io/projected/180f0122-4a3b-40bd-b8e4-c81368e5be7f-kube-api-access-dpr79\") pod \"designate-operator-controller-manager-66d56f6ff4-7pdcm\" (UID: \"180f0122-4a3b-40bd-b8e4-c81368e5be7f\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-7pdcm"
Mar 12 18:43:46.312280 master-0 kubenswrapper[29097]: I0312 18:43:46.312272 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wck67\" (UniqueName: \"kubernetes.io/projected/e9ded411-b09e-46f8-aee1-f6c168f2edd8-kube-api-access-wck67\") pod \"heat-operator-controller-manager-77b6666d85-46wm7\" (UID: \"e9ded411-b09e-46f8-aee1-f6c168f2edd8\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-46wm7"
Mar 12 18:43:46.312372 master-0 kubenswrapper[29097]: I0312 18:43:46.312350 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66tgt\" (UniqueName: \"kubernetes.io/projected/f7180d7f-df3c-48a8-b7f2-88c910fb2917-kube-api-access-66tgt\") pod \"cinder-operator-controller-manager-984cd4dcf-qkhtq\" (UID: \"f7180d7f-df3c-48a8-b7f2-88c910fb2917\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-qkhtq"
Mar 12 18:43:46.333579 master-0 kubenswrapper[29097]: I0312 18:43:46.331858 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-46wm7"]
Mar 12 18:43:46.340110 master-0 kubenswrapper[29097]: I0312 18:43:46.340053 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-p5v49"]
Mar 12 18:43:46.341655 master-0 kubenswrapper[29097]: I0312 18:43:46.341221 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-p5v49"
Mar 12 18:43:46.349583 master-0 kubenswrapper[29097]: I0312 18:43:46.348271 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw5mk\" (UniqueName: \"kubernetes.io/projected/13c9cdbc-00fa-4ca0-b136-46f9df179f76-kube-api-access-jw5mk\") pod \"glance-operator-controller-manager-5964f64c48-44fbq\" (UID: \"13c9cdbc-00fa-4ca0-b136-46f9df179f76\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-44fbq"
Mar 12 18:43:46.350201 master-0 kubenswrapper[29097]: I0312 18:43:46.350182 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66tgt\" (UniqueName: \"kubernetes.io/projected/f7180d7f-df3c-48a8-b7f2-88c910fb2917-kube-api-access-66tgt\") pod \"cinder-operator-controller-manager-984cd4dcf-qkhtq\" (UID: \"f7180d7f-df3c-48a8-b7f2-88c910fb2917\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-qkhtq"
Mar 12 18:43:46.365534 master-0 kubenswrapper[29097]: I0312 18:43:46.365451 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpr79\" (UniqueName: \"kubernetes.io/projected/180f0122-4a3b-40bd-b8e4-c81368e5be7f-kube-api-access-dpr79\") pod \"designate-operator-controller-manager-66d56f6ff4-7pdcm\" (UID: \"180f0122-4a3b-40bd-b8e4-c81368e5be7f\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-7pdcm"
Mar 12 18:43:46.400757 master-0 kubenswrapper[29097]: I0312 18:43:46.400268 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-2j959"
Mar 12 18:43:46.417109 master-0 kubenswrapper[29097]: I0312 18:43:46.417059 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wck67\" (UniqueName: \"kubernetes.io/projected/e9ded411-b09e-46f8-aee1-f6c168f2edd8-kube-api-access-wck67\") pod \"heat-operator-controller-manager-77b6666d85-46wm7\" (UID: \"e9ded411-b09e-46f8-aee1-f6c168f2edd8\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-46wm7"
Mar 12 18:43:46.417312 master-0 kubenswrapper[29097]: I0312 18:43:46.417166 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xdw9\" (UniqueName: \"kubernetes.io/projected/b4a40675-9bcf-43e8-b9a3-2016eb61e4da-kube-api-access-8xdw9\") pod \"horizon-operator-controller-manager-6d9d6b584d-p5v49\" (UID: \"b4a40675-9bcf-43e8-b9a3-2016eb61e4da\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-p5v49"
Mar 12 18:43:46.433264 master-0 kubenswrapper[29097]: I0312 18:43:46.433178 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-qkhtq"
Mar 12 18:43:46.474841 master-0 kubenswrapper[29097]: I0312 18:43:46.474781 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-b8c8d7cc8-z8trl"]
Mar 12 18:43:46.482724 master-0 kubenswrapper[29097]: I0312 18:43:46.481157 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-z8trl"
Mar 12 18:43:46.487012 master-0 kubenswrapper[29097]: I0312 18:43:46.484078 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wck67\" (UniqueName: \"kubernetes.io/projected/e9ded411-b09e-46f8-aee1-f6c168f2edd8-kube-api-access-wck67\") pod \"heat-operator-controller-manager-77b6666d85-46wm7\" (UID: \"e9ded411-b09e-46f8-aee1-f6c168f2edd8\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-46wm7"
Mar 12 18:43:46.487363 master-0 kubenswrapper[29097]: I0312 18:43:46.487319 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Mar 12 18:43:46.503046 master-0 kubenswrapper[29097]: I0312 18:43:46.503012 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-7pdcm"
Mar 12 18:43:46.510359 master-0 kubenswrapper[29097]: I0312 18:43:46.510319 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-p5v49"]
Mar 12 18:43:46.519151 master-0 kubenswrapper[29097]: I0312 18:43:46.518626 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xdw9\" (UniqueName: \"kubernetes.io/projected/b4a40675-9bcf-43e8-b9a3-2016eb61e4da-kube-api-access-8xdw9\") pod \"horizon-operator-controller-manager-6d9d6b584d-p5v49\" (UID: \"b4a40675-9bcf-43e8-b9a3-2016eb61e4da\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-p5v49"
Mar 12 18:43:46.519151 master-0 kubenswrapper[29097]: I0312 18:43:46.518706 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a341a0dd-9612-4ebc-a88f-c0afe26c6859-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-z8trl\" (UID: \"a341a0dd-9612-4ebc-a88f-c0afe26c6859\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-z8trl"
Mar 12 18:43:46.519151 master-0 kubenswrapper[29097]: I0312 18:43:46.518743 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv2c2\" (UniqueName: \"kubernetes.io/projected/a341a0dd-9612-4ebc-a88f-c0afe26c6859-kube-api-access-mv2c2\") pod \"infra-operator-controller-manager-b8c8d7cc8-z8trl\" (UID: \"a341a0dd-9612-4ebc-a88f-c0afe26c6859\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-z8trl"
Mar 12 18:43:46.562690 master-0 kubenswrapper[29097]: I0312 18:43:46.561855 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-44fbq"
Mar 12 18:43:46.597826 master-0 kubenswrapper[29097]: I0312 18:43:46.595158 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-46wm7"
Mar 12 18:43:46.602404 master-0 kubenswrapper[29097]: I0312 18:43:46.602329 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xdw9\" (UniqueName: \"kubernetes.io/projected/b4a40675-9bcf-43e8-b9a3-2016eb61e4da-kube-api-access-8xdw9\") pod \"horizon-operator-controller-manager-6d9d6b584d-p5v49\" (UID: \"b4a40675-9bcf-43e8-b9a3-2016eb61e4da\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-p5v49"
Mar 12 18:43:46.613506 master-0 kubenswrapper[29097]: I0312 18:43:46.610286 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-b8c8d7cc8-z8trl"]
Mar 12 18:43:46.622318 master-0 kubenswrapper[29097]: I0312 18:43:46.621795 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a341a0dd-9612-4ebc-a88f-c0afe26c6859-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-z8trl\" (UID: \"a341a0dd-9612-4ebc-a88f-c0afe26c6859\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-z8trl"
Mar 12 18:43:46.622318 master-0 kubenswrapper[29097]: I0312 18:43:46.621853 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv2c2\" (UniqueName: \"kubernetes.io/projected/a341a0dd-9612-4ebc-a88f-c0afe26c6859-kube-api-access-mv2c2\") pod \"infra-operator-controller-manager-b8c8d7cc8-z8trl\" (UID: \"a341a0dd-9612-4ebc-a88f-c0afe26c6859\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-z8trl"
Mar 12 18:43:46.622782 master-0 kubenswrapper[29097]: E0312 18:43:46.622621 29097 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 12 18:43:46.622782 master-0 kubenswrapper[29097]: E0312 18:43:46.622681 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a341a0dd-9612-4ebc-a88f-c0afe26c6859-cert podName:a341a0dd-9612-4ebc-a88f-c0afe26c6859 nodeName:}" failed. No retries permitted until 2026-03-12 18:43:47.122661404 +0000 UTC m=+866.676641501 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a341a0dd-9612-4ebc-a88f-c0afe26c6859-cert") pod "infra-operator-controller-manager-b8c8d7cc8-z8trl" (UID: "a341a0dd-9612-4ebc-a88f-c0afe26c6859") : secret "infra-operator-webhook-server-cert" not found
Mar 12 18:43:46.631632 master-0 kubenswrapper[29097]: I0312 18:43:46.631574 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-d29kk"]
Mar 12 18:43:46.633286 master-0 kubenswrapper[29097]: I0312 18:43:46.632738 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-d29kk"
Mar 12 18:43:46.654911 master-0 kubenswrapper[29097]: I0312 18:43:46.653276 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv2c2\" (UniqueName: \"kubernetes.io/projected/a341a0dd-9612-4ebc-a88f-c0afe26c6859-kube-api-access-mv2c2\") pod \"infra-operator-controller-manager-b8c8d7cc8-z8trl\" (UID: \"a341a0dd-9612-4ebc-a88f-c0afe26c6859\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-z8trl"
Mar 12 18:43:46.684539 master-0 kubenswrapper[29097]: I0312 18:43:46.681624 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-b92h7"]
Mar 12 18:43:46.684539 master-0 kubenswrapper[29097]: I0312 18:43:46.683721 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-b92h7"
Mar 12 18:43:46.691448 master-0 kubenswrapper[29097]: I0312 18:43:46.690056 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-d29kk"]
Mar 12 18:43:46.714724 master-0 kubenswrapper[29097]: I0312 18:43:46.702339 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-b92h7"]
Mar 12 18:43:46.724217 master-0 kubenswrapper[29097]: I0312 18:43:46.723327 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb875\" (UniqueName: \"kubernetes.io/projected/76af05d5-a6fb-4aea-949a-fa0329c4739c-kube-api-access-pb875\") pod \"ironic-operator-controller-manager-6bbb499bbc-d29kk\" (UID: \"76af05d5-a6fb-4aea-949a-fa0329c4739c\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-d29kk"
Mar 12 18:43:46.724217 master-0 kubenswrapper[29097]: I0312 18:43:46.723441 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5k8m\" (UniqueName: \"kubernetes.io/projected/827cb92d-3fbc-48fc-b74d-5a8a9046cadb-kube-api-access-l5k8m\") pod \"keystone-operator-controller-manager-684f77d66d-b92h7\" (UID: \"827cb92d-3fbc-48fc-b74d-5a8a9046cadb\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-b92h7"
Mar 12 18:43:46.735799 master-0 kubenswrapper[29097]: I0312 18:43:46.735107 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-7f7bk"]
Mar 12 18:43:46.736741 master-0 kubenswrapper[29097]: I0312 18:43:46.736481 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-7f7bk"
Mar 12 18:43:46.740070 master-0 kubenswrapper[29097]: I0312 18:43:46.738488 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-p5v49"
Mar 12 18:43:46.740329 master-0 kubenswrapper[29097]: I0312 18:43:46.740152 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-7f7bk"]
Mar 12 18:43:46.763747 master-0 kubenswrapper[29097]: I0312 18:43:46.755459 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-l6v2h"]
Mar 12 18:43:46.763747 master-0 kubenswrapper[29097]: I0312 18:43:46.756576 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-l6v2h"
Mar 12 18:43:46.772505 master-0 kubenswrapper[29097]: I0312 18:43:46.771047 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-hszdc"]
Mar 12 18:43:46.784329 master-0 kubenswrapper[29097]: I0312 18:43:46.778854 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-hszdc"
Mar 12 18:43:46.804206 master-0 kubenswrapper[29097]: I0312 18:43:46.802545 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-l6v2h"]
Mar 12 18:43:46.820998 master-0 kubenswrapper[29097]: I0312 18:43:46.816849 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-hszdc"]
Mar 12 18:43:46.832460 master-0 kubenswrapper[29097]: I0312 18:43:46.825302 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqll4\" (UniqueName: \"kubernetes.io/projected/bd3cb300-0aee-4771-bd27-162e219b892a-kube-api-access-pqll4\") pod \"manila-operator-controller-manager-68f45f9d9f-7f7bk\" (UID: \"bd3cb300-0aee-4771-bd27-162e219b892a\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-7f7bk"
Mar 12 18:43:46.832460 master-0 kubenswrapper[29097]: I0312 18:43:46.825375 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-886st\" (UniqueName: \"kubernetes.io/projected/f23f2008-a1bd-41d2-90e0-8885562051bb-kube-api-access-886st\") pod \"neutron-operator-controller-manager-776c5696bf-hszdc\" (UID: \"f23f2008-a1bd-41d2-90e0-8885562051bb\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-hszdc"
Mar 12 18:43:46.832460 master-0 kubenswrapper[29097]: I0312 18:43:46.827772 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5k8m\" (UniqueName: \"kubernetes.io/projected/827cb92d-3fbc-48fc-b74d-5a8a9046cadb-kube-api-access-l5k8m\") pod \"keystone-operator-controller-manager-684f77d66d-b92h7\" (UID: \"827cb92d-3fbc-48fc-b74d-5a8a9046cadb\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-b92h7"
Mar 12 18:43:46.832460 master-0 kubenswrapper[29097]: I0312 18:43:46.827918 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb875\" (UniqueName: \"kubernetes.io/projected/76af05d5-a6fb-4aea-949a-fa0329c4739c-kube-api-access-pb875\") pod \"ironic-operator-controller-manager-6bbb499bbc-d29kk\" (UID: \"76af05d5-a6fb-4aea-949a-fa0329c4739c\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-d29kk"
Mar 12 18:43:46.832460 master-0 kubenswrapper[29097]: I0312 18:43:46.827945 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7pjx\" (UniqueName: \"kubernetes.io/projected/22702d59-1bb1-494c-a0a8-f9f0fd34b196-kube-api-access-s7pjx\") pod \"mariadb-operator-controller-manager-658d4cdd5-l6v2h\" (UID: \"22702d59-1bb1-494c-a0a8-f9f0fd34b196\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-l6v2h"
Mar 12 18:43:46.832460 master-0 kubenswrapper[29097]: I0312 18:43:46.828180 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-c4xmt"]
Mar 12 18:43:46.832460 master-0 kubenswrapper[29097]: I0312 18:43:46.829314 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-c4xmt"
Mar 12 18:43:46.841774 master-0 kubenswrapper[29097]: I0312 18:43:46.834244 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-c4xmt"]
Mar 12 18:43:46.864288 master-0 kubenswrapper[29097]: I0312 18:43:46.864231 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5k8m\" (UniqueName: \"kubernetes.io/projected/827cb92d-3fbc-48fc-b74d-5a8a9046cadb-kube-api-access-l5k8m\") pod \"keystone-operator-controller-manager-684f77d66d-b92h7\" (UID: \"827cb92d-3fbc-48fc-b74d-5a8a9046cadb\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-b92h7"
Mar 12 18:43:46.864476 master-0 kubenswrapper[29097]: I0312 18:43:46.864371 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-75nl8"]
Mar 12 18:43:46.865152 master-0 kubenswrapper[29097]: I0312 18:43:46.865116 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb875\" (UniqueName: \"kubernetes.io/projected/76af05d5-a6fb-4aea-949a-fa0329c4739c-kube-api-access-pb875\") pod \"ironic-operator-controller-manager-6bbb499bbc-d29kk\" (UID: \"76af05d5-a6fb-4aea-949a-fa0329c4739c\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-d29kk"
Mar 12 18:43:46.865902 master-0 kubenswrapper[29097]: I0312 18:43:46.865871 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-75nl8"
Mar 12 18:43:46.916690 master-0 kubenswrapper[29097]: I0312 18:43:46.916628 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-75nl8"]
Mar 12 18:43:46.932002 master-0 kubenswrapper[29097]: I0312 18:43:46.929335 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsr89\" (UniqueName: \"kubernetes.io/projected/fa6bbd68-101b-47ce-a9d6-9ad81777a1d3-kube-api-access-gsr89\") pod \"nova-operator-controller-manager-569cc54c5-c4xmt\" (UID: \"fa6bbd68-101b-47ce-a9d6-9ad81777a1d3\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-c4xmt"
Mar 12 18:43:46.932002 master-0 kubenswrapper[29097]: I0312 18:43:46.931058 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7pjx\" (UniqueName: \"kubernetes.io/projected/22702d59-1bb1-494c-a0a8-f9f0fd34b196-kube-api-access-s7pjx\") pod \"mariadb-operator-controller-manager-658d4cdd5-l6v2h\" (UID: \"22702d59-1bb1-494c-a0a8-f9f0fd34b196\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-l6v2h"
Mar 12 18:43:46.932002 master-0 kubenswrapper[29097]: I0312 18:43:46.931127 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx76z\" (UniqueName: \"kubernetes.io/projected/b9e311e9-c85b-4072-972d-3dc6f8ff5f64-kube-api-access-hx76z\") pod \"octavia-operator-controller-manager-5f4f55cb5c-75nl8\" (UID: \"b9e311e9-c85b-4072-972d-3dc6f8ff5f64\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-75nl8"
Mar 12 18:43:46.932002 master-0 kubenswrapper[29097]: I0312 18:43:46.931156 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqll4\" (UniqueName: \"kubernetes.io/projected/bd3cb300-0aee-4771-bd27-162e219b892a-kube-api-access-pqll4\") pod \"manila-operator-controller-manager-68f45f9d9f-7f7bk\" (UID: \"bd3cb300-0aee-4771-bd27-162e219b892a\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-7f7bk"
Mar 12 18:43:46.932002 master-0 kubenswrapper[29097]: I0312 18:43:46.931210 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-886st\" (UniqueName: \"kubernetes.io/projected/f23f2008-a1bd-41d2-90e0-8885562051bb-kube-api-access-886st\") pod \"neutron-operator-controller-manager-776c5696bf-hszdc\" (UID: \"f23f2008-a1bd-41d2-90e0-8885562051bb\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-hszdc"
Mar 12 18:43:46.937435 master-0 kubenswrapper[29097]: I0312 18:43:46.937338 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh"]
Mar 12 18:43:46.938443 master-0 kubenswrapper[29097]: I0312 18:43:46.938364 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh"
Mar 12 18:43:46.943607 master-0 kubenswrapper[29097]: I0312 18:43:46.942851 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Mar 12 18:43:46.950420 master-0 kubenswrapper[29097]: I0312 18:43:46.950121 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7pjx\" (UniqueName: \"kubernetes.io/projected/22702d59-1bb1-494c-a0a8-f9f0fd34b196-kube-api-access-s7pjx\") pod \"mariadb-operator-controller-manager-658d4cdd5-l6v2h\" (UID: \"22702d59-1bb1-494c-a0a8-f9f0fd34b196\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-l6v2h"
Mar 12 18:43:46.954199 master-0 kubenswrapper[29097]: I0312 18:43:46.953998 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-v8xwb"]
Mar 12 18:43:46.958377 master-0 kubenswrapper[29097]: I0312 18:43:46.957217 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-886st\" (UniqueName: \"kubernetes.io/projected/f23f2008-a1bd-41d2-90e0-8885562051bb-kube-api-access-886st\") pod \"neutron-operator-controller-manager-776c5696bf-hszdc\" (UID: \"f23f2008-a1bd-41d2-90e0-8885562051bb\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-hszdc"
Mar 12 18:43:46.959413 master-0 kubenswrapper[29097]: I0312 18:43:46.959227 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqll4\" (UniqueName: \"kubernetes.io/projected/bd3cb300-0aee-4771-bd27-162e219b892a-kube-api-access-pqll4\") pod \"manila-operator-controller-manager-68f45f9d9f-7f7bk\" (UID: \"bd3cb300-0aee-4771-bd27-162e219b892a\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-7f7bk"
Mar 12 18:43:46.970293 master-0 kubenswrapper[29097]: I0312 18:43:46.969651 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-v8xwb"
Mar 12 18:43:46.988477 master-0 kubenswrapper[29097]: I0312 18:43:46.987958 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh"]
Mar 12 18:43:46.994287 master-0 kubenswrapper[29097]: W0312 18:43:46.993813 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7180d7f_df3c_48a8_b7f2_88c910fb2917.slice/crio-622ac490d558aa90b10b6c795d48914fe1f1e669530a433f75762e951c92af31 WatchSource:0}: Error finding container 622ac490d558aa90b10b6c795d48914fe1f1e669530a433f75762e951c92af31: Status 404 returned error can't find the container with id 622ac490d558aa90b10b6c795d48914fe1f1e669530a433f75762e951c92af31
Mar 12 18:43:46.998256 master-0 kubenswrapper[29097]: I0312 18:43:46.996874 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-v8xwb"]
Mar 12 18:43:47.005480 master-0 kubenswrapper[29097]: I0312 18:43:47.005122 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-d29kk"
Mar 12 18:43:47.005636 master-0 kubenswrapper[29097]: I0312 18:43:47.005552 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-snrcc"]
Mar 12 18:43:47.007783 master-0 kubenswrapper[29097]: I0312 18:43:47.007227 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-snrcc"
Mar 12 18:43:47.020086 master-0 kubenswrapper[29097]: I0312 18:43:47.019225 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-svkjf"]
Mar 12 18:43:47.021246 master-0 kubenswrapper[29097]: I0312 18:43:47.020836 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-svkjf"
Mar 12 18:43:47.033410 master-0 kubenswrapper[29097]: I0312 18:43:47.033125 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx76z\" (UniqueName: \"kubernetes.io/projected/b9e311e9-c85b-4072-972d-3dc6f8ff5f64-kube-api-access-hx76z\") pod \"octavia-operator-controller-manager-5f4f55cb5c-75nl8\" (UID: \"b9e311e9-c85b-4072-972d-3dc6f8ff5f64\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-75nl8"
Mar 12 18:43:47.033410 master-0 kubenswrapper[29097]: I0312 18:43:47.033268 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsr89\" (UniqueName: \"kubernetes.io/projected/fa6bbd68-101b-47ce-a9d6-9ad81777a1d3-kube-api-access-gsr89\") pod \"nova-operator-controller-manager-569cc54c5-c4xmt\" (UID: \"fa6bbd68-101b-47ce-a9d6-9ad81777a1d3\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-c4xmt"
Mar 12 18:43:47.033410 master-0 kubenswrapper[29097]: I0312 18:43:47.033311 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq56q\" (UniqueName: \"kubernetes.io/projected/b7201b74-50e0-42cb-a896-0870ab9b41ce-kube-api-access-rq56q\") pod \"ovn-operator-controller-manager-bbc5b68f9-v8xwb\" (UID: \"b7201b74-50e0-42cb-a896-0870ab9b41ce\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-v8xwb"
Mar 12 18:43:47.033670 master-0 kubenswrapper[29097]: I0312 18:43:47.033566 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh\" (UID: \"f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh"
Mar 12 18:43:47.033670 master-0 kubenswrapper[29097]: I0312 18:43:47.033639 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn9s7\" (UniqueName: \"kubernetes.io/projected/f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46-kube-api-access-wn9s7\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh\" (UID: \"f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh"
Mar 12 18:43:47.035762 master-0 kubenswrapper[29097]: I0312 18:43:47.035298 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-b92h7"
Mar 12 18:43:47.053832 master-0 kubenswrapper[29097]: I0312 18:43:47.053327 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsr89\" (UniqueName: \"kubernetes.io/projected/fa6bbd68-101b-47ce-a9d6-9ad81777a1d3-kube-api-access-gsr89\") pod \"nova-operator-controller-manager-569cc54c5-c4xmt\" (UID: \"fa6bbd68-101b-47ce-a9d6-9ad81777a1d3\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-c4xmt"
Mar 12 18:43:47.053832 master-0 kubenswrapper[29097]: I0312 18:43:47.053331 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx76z\" (UniqueName: \"kubernetes.io/projected/b9e311e9-c85b-4072-972d-3dc6f8ff5f64-kube-api-access-hx76z\") pod \"octavia-operator-controller-manager-5f4f55cb5c-75nl8\" (UID: \"b9e311e9-c85b-4072-972d-3dc6f8ff5f64\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-75nl8"
Mar 12 18:43:47.064137 master-0 kubenswrapper[29097]: I0312 18:43:47.064086 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-snrcc"]
Mar 12 18:43:47.073221 master-0 kubenswrapper[29097]: I0312 18:43:47.073128 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-svkjf"]
Mar 12 18:43:47.081421 master-0 kubenswrapper[29097]: I0312 18:43:47.081381 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5npkd"]
Mar 12 18:43:47.082938 master-0 kubenswrapper[29097]: I0312 18:43:47.082376 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5npkd"
Mar 12 18:43:47.143142 master-0 kubenswrapper[29097]: I0312 18:43:47.139278 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-tjm8v"]
Mar 12 18:43:47.143142 master-0 kubenswrapper[29097]: I0312 18:43:47.139737 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-7f7bk"
Mar 12 18:43:47.143142 master-0 kubenswrapper[29097]: I0312 18:43:47.140481 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-l6v2h"
Mar 12 18:43:47.162942 master-0 kubenswrapper[29097]: I0312 18:43:47.162859 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh\" (UID: \"f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh"
Mar 12 18:43:47.163039 master-0 kubenswrapper[29097]: I0312 18:43:47.162984 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn9s7\" (UniqueName: \"kubernetes.io/projected/f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46-kube-api-access-wn9s7\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh\" (UID: \"f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh"
Mar 12 18:43:47.163150 master-0 kubenswrapper[29097]: I0312 18:43:47.163129 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a341a0dd-9612-4ebc-a88f-c0afe26c6859-cert\") pod
\"infra-operator-controller-manager-b8c8d7cc8-z8trl\" (UID: \"a341a0dd-9612-4ebc-a88f-c0afe26c6859\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-z8trl" Mar 12 18:43:47.163371 master-0 kubenswrapper[29097]: I0312 18:43:47.163344 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq56q\" (UniqueName: \"kubernetes.io/projected/b7201b74-50e0-42cb-a896-0870ab9b41ce-kube-api-access-rq56q\") pod \"ovn-operator-controller-manager-bbc5b68f9-v8xwb\" (UID: \"b7201b74-50e0-42cb-a896-0870ab9b41ce\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-v8xwb" Mar 12 18:43:47.163684 master-0 kubenswrapper[29097]: E0312 18:43:47.163651 29097 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 18:43:47.163825 master-0 kubenswrapper[29097]: E0312 18:43:47.163812 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46-cert podName:f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46 nodeName:}" failed. No retries permitted until 2026-03-12 18:43:47.663783625 +0000 UTC m=+867.217763722 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh" (UID: "f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 18:43:47.164191 master-0 kubenswrapper[29097]: E0312 18:43:47.164163 29097 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 18:43:47.164260 master-0 kubenswrapper[29097]: E0312 18:43:47.164220 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a341a0dd-9612-4ebc-a88f-c0afe26c6859-cert podName:a341a0dd-9612-4ebc-a88f-c0afe26c6859 nodeName:}" failed. No retries permitted until 2026-03-12 18:43:48.164204416 +0000 UTC m=+867.718184513 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a341a0dd-9612-4ebc-a88f-c0afe26c6859-cert") pod "infra-operator-controller-manager-b8c8d7cc8-z8trl" (UID: "a341a0dd-9612-4ebc-a88f-c0afe26c6859") : secret "infra-operator-webhook-server-cert" not found Mar 12 18:43:47.165846 master-0 kubenswrapper[29097]: I0312 18:43:47.165825 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-hszdc" Mar 12 18:43:47.168227 master-0 kubenswrapper[29097]: I0312 18:43:47.168203 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-tjm8v" Mar 12 18:43:47.194598 master-0 kubenswrapper[29097]: I0312 18:43:47.193786 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-c4xmt" Mar 12 18:43:47.195474 master-0 kubenswrapper[29097]: I0312 18:43:47.195403 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5npkd"] Mar 12 18:43:47.210062 master-0 kubenswrapper[29097]: I0312 18:43:47.210017 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn9s7\" (UniqueName: \"kubernetes.io/projected/f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46-kube-api-access-wn9s7\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh\" (UID: \"f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh" Mar 12 18:43:47.215395 master-0 kubenswrapper[29097]: I0312 18:43:47.212216 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-tjm8v"] Mar 12 18:43:47.227309 master-0 kubenswrapper[29097]: I0312 18:43:47.227199 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-qkhtq" event={"ID":"f7180d7f-df3c-48a8-b7f2-88c910fb2917","Type":"ContainerStarted","Data":"622ac490d558aa90b10b6c795d48914fe1f1e669530a433f75762e951c92af31"} Mar 12 18:43:47.230132 master-0 kubenswrapper[29097]: I0312 18:43:47.229909 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-75nl8" Mar 12 18:43:47.239943 master-0 kubenswrapper[29097]: I0312 18:43:47.232485 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq56q\" (UniqueName: \"kubernetes.io/projected/b7201b74-50e0-42cb-a896-0870ab9b41ce-kube-api-access-rq56q\") pod \"ovn-operator-controller-manager-bbc5b68f9-v8xwb\" (UID: \"b7201b74-50e0-42cb-a896-0870ab9b41ce\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-v8xwb" Mar 12 18:43:47.271530 master-0 kubenswrapper[29097]: I0312 18:43:47.268043 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2l7g8"] Mar 12 18:43:47.271530 master-0 kubenswrapper[29097]: I0312 18:43:47.269086 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2l7g8" Mar 12 18:43:47.275754 master-0 kubenswrapper[29097]: I0312 18:43:47.275662 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj9h9\" (UniqueName: \"kubernetes.io/projected/0a6c3e65-e541-4970-bf55-f303817995c0-kube-api-access-kj9h9\") pod \"placement-operator-controller-manager-574d45c66c-snrcc\" (UID: \"0a6c3e65-e541-4970-bf55-f303817995c0\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-snrcc" Mar 12 18:43:47.276614 master-0 kubenswrapper[29097]: I0312 18:43:47.276584 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gc4r\" (UniqueName: \"kubernetes.io/projected/2712f389-625e-4afa-8e4c-cc844ba4c169-kube-api-access-8gc4r\") pod \"test-operator-controller-manager-5c5cb9c4d7-tjm8v\" (UID: \"2712f389-625e-4afa-8e4c-cc844ba4c169\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-tjm8v" Mar 12 
18:43:47.276961 master-0 kubenswrapper[29097]: I0312 18:43:47.276839 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49m66\" (UniqueName: \"kubernetes.io/projected/27cf3106-f02c-4b8b-b35e-aa261037f930-kube-api-access-49m66\") pod \"swift-operator-controller-manager-677c674df7-svkjf\" (UID: \"27cf3106-f02c-4b8b-b35e-aa261037f930\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-svkjf" Mar 12 18:43:47.276961 master-0 kubenswrapper[29097]: I0312 18:43:47.276891 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrq7n\" (UniqueName: \"kubernetes.io/projected/80304a52-fbd4-40c6-9467-92570ac930a3-kube-api-access-rrq7n\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-5npkd\" (UID: \"80304a52-fbd4-40c6-9467-92570ac930a3\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5npkd" Mar 12 18:43:47.305175 master-0 kubenswrapper[29097]: I0312 18:43:47.305134 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-v8xwb" Mar 12 18:43:47.313839 master-0 kubenswrapper[29097]: I0312 18:43:47.313576 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2l7g8"] Mar 12 18:43:47.377603 master-0 kubenswrapper[29097]: I0312 18:43:47.377561 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvzbv\" (UniqueName: \"kubernetes.io/projected/d5559f37-caa9-4f7c-913f-cea72d79eb03-kube-api-access-zvzbv\") pod \"watcher-operator-controller-manager-6dd88c6f67-2l7g8\" (UID: \"d5559f37-caa9-4f7c-913f-cea72d79eb03\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2l7g8" Mar 12 18:43:47.377739 master-0 kubenswrapper[29097]: I0312 18:43:47.377637 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49m66\" (UniqueName: \"kubernetes.io/projected/27cf3106-f02c-4b8b-b35e-aa261037f930-kube-api-access-49m66\") pod \"swift-operator-controller-manager-677c674df7-svkjf\" (UID: \"27cf3106-f02c-4b8b-b35e-aa261037f930\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-svkjf" Mar 12 18:43:47.377739 master-0 kubenswrapper[29097]: I0312 18:43:47.377664 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrq7n\" (UniqueName: \"kubernetes.io/projected/80304a52-fbd4-40c6-9467-92570ac930a3-kube-api-access-rrq7n\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-5npkd\" (UID: \"80304a52-fbd4-40c6-9467-92570ac930a3\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5npkd" Mar 12 18:43:47.377739 master-0 kubenswrapper[29097]: I0312 18:43:47.377717 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj9h9\" (UniqueName: 
\"kubernetes.io/projected/0a6c3e65-e541-4970-bf55-f303817995c0-kube-api-access-kj9h9\") pod \"placement-operator-controller-manager-574d45c66c-snrcc\" (UID: \"0a6c3e65-e541-4970-bf55-f303817995c0\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-snrcc" Mar 12 18:43:47.377739 master-0 kubenswrapper[29097]: I0312 18:43:47.377734 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gc4r\" (UniqueName: \"kubernetes.io/projected/2712f389-625e-4afa-8e4c-cc844ba4c169-kube-api-access-8gc4r\") pod \"test-operator-controller-manager-5c5cb9c4d7-tjm8v\" (UID: \"2712f389-625e-4afa-8e4c-cc844ba4c169\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-tjm8v" Mar 12 18:43:47.393698 master-0 kubenswrapper[29097]: I0312 18:43:47.393481 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp"] Mar 12 18:43:47.395528 master-0 kubenswrapper[29097]: I0312 18:43:47.394797 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp" Mar 12 18:43:47.398736 master-0 kubenswrapper[29097]: I0312 18:43:47.398706 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 12 18:43:47.399471 master-0 kubenswrapper[29097]: I0312 18:43:47.399448 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 12 18:43:47.400205 master-0 kubenswrapper[29097]: I0312 18:43:47.400173 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gc4r\" (UniqueName: \"kubernetes.io/projected/2712f389-625e-4afa-8e4c-cc844ba4c169-kube-api-access-8gc4r\") pod \"test-operator-controller-manager-5c5cb9c4d7-tjm8v\" (UID: \"2712f389-625e-4afa-8e4c-cc844ba4c169\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-tjm8v" Mar 12 18:43:47.402932 master-0 kubenswrapper[29097]: I0312 18:43:47.402748 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrq7n\" (UniqueName: \"kubernetes.io/projected/80304a52-fbd4-40c6-9467-92570ac930a3-kube-api-access-rrq7n\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-5npkd\" (UID: \"80304a52-fbd4-40c6-9467-92570ac930a3\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5npkd" Mar 12 18:43:47.402932 master-0 kubenswrapper[29097]: I0312 18:43:47.402884 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj9h9\" (UniqueName: \"kubernetes.io/projected/0a6c3e65-e541-4970-bf55-f303817995c0-kube-api-access-kj9h9\") pod \"placement-operator-controller-manager-574d45c66c-snrcc\" (UID: \"0a6c3e65-e541-4970-bf55-f303817995c0\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-snrcc" Mar 12 18:43:47.406470 master-0 kubenswrapper[29097]: I0312 18:43:47.406419 29097 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-49m66\" (UniqueName: \"kubernetes.io/projected/27cf3106-f02c-4b8b-b35e-aa261037f930-kube-api-access-49m66\") pod \"swift-operator-controller-manager-677c674df7-svkjf\" (UID: \"27cf3106-f02c-4b8b-b35e-aa261037f930\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-svkjf" Mar 12 18:43:47.444797 master-0 kubenswrapper[29097]: I0312 18:43:47.444345 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5npkd" Mar 12 18:43:47.451613 master-0 kubenswrapper[29097]: I0312 18:43:47.451552 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp"] Mar 12 18:43:47.483403 master-0 kubenswrapper[29097]: I0312 18:43:47.478769 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvzbv\" (UniqueName: \"kubernetes.io/projected/d5559f37-caa9-4f7c-913f-cea72d79eb03-kube-api-access-zvzbv\") pod \"watcher-operator-controller-manager-6dd88c6f67-2l7g8\" (UID: \"d5559f37-caa9-4f7c-913f-cea72d79eb03\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2l7g8" Mar 12 18:43:47.483403 master-0 kubenswrapper[29097]: I0312 18:43:47.481369 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sc2sm"] Mar 12 18:43:47.496445 master-0 kubenswrapper[29097]: I0312 18:43:47.494905 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sc2sm" Mar 12 18:43:47.498193 master-0 kubenswrapper[29097]: I0312 18:43:47.498164 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sc2sm"] Mar 12 18:43:47.500469 master-0 kubenswrapper[29097]: I0312 18:43:47.500248 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvzbv\" (UniqueName: \"kubernetes.io/projected/d5559f37-caa9-4f7c-913f-cea72d79eb03-kube-api-access-zvzbv\") pod \"watcher-operator-controller-manager-6dd88c6f67-2l7g8\" (UID: \"d5559f37-caa9-4f7c-913f-cea72d79eb03\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2l7g8" Mar 12 18:43:47.547550 master-0 kubenswrapper[29097]: I0312 18:43:47.546041 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-qkhtq"] Mar 12 18:43:47.581818 master-0 kubenswrapper[29097]: I0312 18:43:47.581766 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-rn6wp\" (UID: \"8d400cc8-2921-4006-9ac0-2d6e5e2140d7\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp" Mar 12 18:43:47.582017 master-0 kubenswrapper[29097]: I0312 18:43:47.581885 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6695d\" (UniqueName: \"kubernetes.io/projected/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-kube-api-access-6695d\") pod \"openstack-operator-controller-manager-7795b46f77-rn6wp\" (UID: \"8d400cc8-2921-4006-9ac0-2d6e5e2140d7\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp" Mar 12 18:43:47.582017 master-0 
kubenswrapper[29097]: I0312 18:43:47.581941 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-rn6wp\" (UID: \"8d400cc8-2921-4006-9ac0-2d6e5e2140d7\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp" Mar 12 18:43:47.582017 master-0 kubenswrapper[29097]: I0312 18:43:47.581967 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjwj2\" (UniqueName: \"kubernetes.io/projected/a909c01e-f800-45ee-94d1-bb4880efd178-kube-api-access-cjwj2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-sc2sm\" (UID: \"a909c01e-f800-45ee-94d1-bb4880efd178\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sc2sm" Mar 12 18:43:47.603606 master-0 kubenswrapper[29097]: I0312 18:43:47.603510 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-7pdcm"] Mar 12 18:43:47.625417 master-0 kubenswrapper[29097]: I0312 18:43:47.625276 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-2j959"] Mar 12 18:43:47.632129 master-0 kubenswrapper[29097]: I0312 18:43:47.632068 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-snrcc" Mar 12 18:43:47.667888 master-0 kubenswrapper[29097]: I0312 18:43:47.667846 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-svkjf" Mar 12 18:43:47.689878 master-0 kubenswrapper[29097]: I0312 18:43:47.689461 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6695d\" (UniqueName: \"kubernetes.io/projected/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-kube-api-access-6695d\") pod \"openstack-operator-controller-manager-7795b46f77-rn6wp\" (UID: \"8d400cc8-2921-4006-9ac0-2d6e5e2140d7\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp" Mar 12 18:43:47.689878 master-0 kubenswrapper[29097]: I0312 18:43:47.689538 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh\" (UID: \"f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh" Mar 12 18:43:47.689878 master-0 kubenswrapper[29097]: I0312 18:43:47.689578 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-rn6wp\" (UID: \"8d400cc8-2921-4006-9ac0-2d6e5e2140d7\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp" Mar 12 18:43:47.689878 master-0 kubenswrapper[29097]: I0312 18:43:47.689603 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjwj2\" (UniqueName: \"kubernetes.io/projected/a909c01e-f800-45ee-94d1-bb4880efd178-kube-api-access-cjwj2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-sc2sm\" (UID: \"a909c01e-f800-45ee-94d1-bb4880efd178\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sc2sm" Mar 12 18:43:47.689878 
master-0 kubenswrapper[29097]: I0312 18:43:47.689667 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-rn6wp\" (UID: \"8d400cc8-2921-4006-9ac0-2d6e5e2140d7\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp" Mar 12 18:43:47.689878 master-0 kubenswrapper[29097]: E0312 18:43:47.689812 29097 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 18:43:47.690771 master-0 kubenswrapper[29097]: E0312 18:43:47.689889 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46-cert podName:f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46 nodeName:}" failed. No retries permitted until 2026-03-12 18:43:48.68987007 +0000 UTC m=+868.243850167 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh" (UID: "f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 18:43:47.690771 master-0 kubenswrapper[29097]: E0312 18:43:47.690064 29097 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 18:43:47.690771 master-0 kubenswrapper[29097]: E0312 18:43:47.690109 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-webhook-certs podName:8d400cc8-2921-4006-9ac0-2d6e5e2140d7 nodeName:}" failed. No retries permitted until 2026-03-12 18:43:48.190093546 +0000 UTC m=+867.744073643 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-rn6wp" (UID: "8d400cc8-2921-4006-9ac0-2d6e5e2140d7") : secret "webhook-server-cert" not found Mar 12 18:43:47.690771 master-0 kubenswrapper[29097]: E0312 18:43:47.690143 29097 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 18:43:47.690771 master-0 kubenswrapper[29097]: E0312 18:43:47.690162 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-metrics-certs podName:8d400cc8-2921-4006-9ac0-2d6e5e2140d7 nodeName:}" failed. No retries permitted until 2026-03-12 18:43:48.190156537 +0000 UTC m=+867.744136634 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-rn6wp" (UID: "8d400cc8-2921-4006-9ac0-2d6e5e2140d7") : secret "metrics-server-cert" not found Mar 12 18:43:47.718966 master-0 kubenswrapper[29097]: I0312 18:43:47.700053 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-tjm8v" Mar 12 18:43:47.718966 master-0 kubenswrapper[29097]: I0312 18:43:47.717152 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjwj2\" (UniqueName: \"kubernetes.io/projected/a909c01e-f800-45ee-94d1-bb4880efd178-kube-api-access-cjwj2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-sc2sm\" (UID: \"a909c01e-f800-45ee-94d1-bb4880efd178\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sc2sm" Mar 12 18:43:47.721005 master-0 kubenswrapper[29097]: I0312 18:43:47.720612 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2l7g8" Mar 12 18:43:47.721005 master-0 kubenswrapper[29097]: I0312 18:43:47.720623 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6695d\" (UniqueName: \"kubernetes.io/projected/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-kube-api-access-6695d\") pod \"openstack-operator-controller-manager-7795b46f77-rn6wp\" (UID: \"8d400cc8-2921-4006-9ac0-2d6e5e2140d7\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp" Mar 12 18:43:47.733002 master-0 kubenswrapper[29097]: I0312 18:43:47.731834 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-46wm7"] Mar 12 18:43:47.741202 master-0 kubenswrapper[29097]: I0312 18:43:47.740379 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-44fbq"] Mar 12 18:43:47.841746 master-0 kubenswrapper[29097]: I0312 18:43:47.841303 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sc2sm" Mar 12 18:43:47.951306 master-0 kubenswrapper[29097]: I0312 18:43:47.947459 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-p5v49"] Mar 12 18:43:47.953105 master-0 kubenswrapper[29097]: W0312 18:43:47.952954 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4a40675_9bcf_43e8_b9a3_2016eb61e4da.slice/crio-3a740375f436726de17779050cade9b418ea1a8184e87773c9076e7b5c85b369 WatchSource:0}: Error finding container 3a740375f436726de17779050cade9b418ea1a8184e87773c9076e7b5c85b369: Status 404 returned error can't find the container with id 3a740375f436726de17779050cade9b418ea1a8184e87773c9076e7b5c85b369 Mar 12 18:43:47.968653 master-0 kubenswrapper[29097]: I0312 18:43:47.967057 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-b92h7"] Mar 12 18:43:47.979214 master-0 kubenswrapper[29097]: I0312 18:43:47.978502 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-d29kk"] Mar 12 18:43:48.014537 master-0 kubenswrapper[29097]: W0312 18:43:48.014473 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76af05d5_a6fb_4aea_949a_fa0329c4739c.slice/crio-27a23f5a8e589f92fe93275f8b3833a8de4f4a37d52c92f4823fd4e30226213b WatchSource:0}: Error finding container 27a23f5a8e589f92fe93275f8b3833a8de4f4a37d52c92f4823fd4e30226213b: Status 404 returned error can't find the container with id 27a23f5a8e589f92fe93275f8b3833a8de4f4a37d52c92f4823fd4e30226213b Mar 12 18:43:48.219298 master-0 kubenswrapper[29097]: I0312 18:43:48.218560 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-rn6wp\" (UID: \"8d400cc8-2921-4006-9ac0-2d6e5e2140d7\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp" Mar 12 18:43:48.219298 master-0 kubenswrapper[29097]: I0312 18:43:48.218630 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a341a0dd-9612-4ebc-a88f-c0afe26c6859-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-z8trl\" (UID: \"a341a0dd-9612-4ebc-a88f-c0afe26c6859\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-z8trl" Mar 12 18:43:48.219298 master-0 kubenswrapper[29097]: I0312 18:43:48.218673 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-rn6wp\" (UID: \"8d400cc8-2921-4006-9ac0-2d6e5e2140d7\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp" Mar 12 18:43:48.219298 master-0 kubenswrapper[29097]: E0312 18:43:48.218902 29097 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 18:43:48.219298 master-0 kubenswrapper[29097]: E0312 18:43:48.218949 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-webhook-certs podName:8d400cc8-2921-4006-9ac0-2d6e5e2140d7 nodeName:}" failed. No retries permitted until 2026-03-12 18:43:49.218936151 +0000 UTC m=+868.772916248 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-rn6wp" (UID: "8d400cc8-2921-4006-9ac0-2d6e5e2140d7") : secret "webhook-server-cert" not found Mar 12 18:43:48.219298 master-0 kubenswrapper[29097]: E0312 18:43:48.218988 29097 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 18:43:48.219298 master-0 kubenswrapper[29097]: E0312 18:43:48.219005 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-metrics-certs podName:8d400cc8-2921-4006-9ac0-2d6e5e2140d7 nodeName:}" failed. No retries permitted until 2026-03-12 18:43:49.218999572 +0000 UTC m=+868.772979659 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-rn6wp" (UID: "8d400cc8-2921-4006-9ac0-2d6e5e2140d7") : secret "metrics-server-cert" not found Mar 12 18:43:48.219298 master-0 kubenswrapper[29097]: E0312 18:43:48.219039 29097 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 18:43:48.219298 master-0 kubenswrapper[29097]: E0312 18:43:48.219056 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a341a0dd-9612-4ebc-a88f-c0afe26c6859-cert podName:a341a0dd-9612-4ebc-a88f-c0afe26c6859 nodeName:}" failed. No retries permitted until 2026-03-12 18:43:50.219049743 +0000 UTC m=+869.773029830 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a341a0dd-9612-4ebc-a88f-c0afe26c6859-cert") pod "infra-operator-controller-manager-b8c8d7cc8-z8trl" (UID: "a341a0dd-9612-4ebc-a88f-c0afe26c6859") : secret "infra-operator-webhook-server-cert" not found Mar 12 18:43:48.290038 master-0 kubenswrapper[29097]: I0312 18:43:48.289965 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-44fbq" event={"ID":"13c9cdbc-00fa-4ca0-b136-46f9df179f76","Type":"ContainerStarted","Data":"a72a633f21c02e79e9dd967a4b8c242573bcc18decf12539d19818e46b1f7955"} Mar 12 18:43:48.291154 master-0 kubenswrapper[29097]: I0312 18:43:48.291082 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-46wm7" event={"ID":"e9ded411-b09e-46f8-aee1-f6c168f2edd8","Type":"ContainerStarted","Data":"eaf69ebf37a129e23839f67dbd161f1e8ab29f0bd59c8a30135f34f11f68f1d9"} Mar 12 18:43:48.294701 master-0 kubenswrapper[29097]: I0312 18:43:48.294667 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-p5v49" event={"ID":"b4a40675-9bcf-43e8-b9a3-2016eb61e4da","Type":"ContainerStarted","Data":"3a740375f436726de17779050cade9b418ea1a8184e87773c9076e7b5c85b369"} Mar 12 18:43:48.296627 master-0 kubenswrapper[29097]: I0312 18:43:48.295758 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-d29kk" event={"ID":"76af05d5-a6fb-4aea-949a-fa0329c4739c","Type":"ContainerStarted","Data":"27a23f5a8e589f92fe93275f8b3833a8de4f4a37d52c92f4823fd4e30226213b"} Mar 12 18:43:48.297088 master-0 kubenswrapper[29097]: I0312 18:43:48.297007 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-b92h7" 
event={"ID":"827cb92d-3fbc-48fc-b74d-5a8a9046cadb","Type":"ContainerStarted","Data":"bd559fa47899bde295f719128f88300721160f9c220da148176fb6568668483f"} Mar 12 18:43:48.297992 master-0 kubenswrapper[29097]: I0312 18:43:48.297970 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-7pdcm" event={"ID":"180f0122-4a3b-40bd-b8e4-c81368e5be7f","Type":"ContainerStarted","Data":"7355e0323109f4b615474283888989f607d55216593dbaad1466ce058a896fc8"} Mar 12 18:43:48.300408 master-0 kubenswrapper[29097]: I0312 18:43:48.300371 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-2j959" event={"ID":"a9aded67-7bb9-4df6-8334-ddffef99aa7f","Type":"ContainerStarted","Data":"295958ba370e03eb65ed924374ecfabc1633aa0412643cfd341bff97ad897fca"} Mar 12 18:43:48.597624 master-0 kubenswrapper[29097]: I0312 18:43:48.596067 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-7f7bk"] Mar 12 18:43:48.682894 master-0 kubenswrapper[29097]: I0312 18:43:48.682756 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-l6v2h"] Mar 12 18:43:48.716737 master-0 kubenswrapper[29097]: I0312 18:43:48.715646 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-v8xwb"] Mar 12 18:43:48.739629 master-0 kubenswrapper[29097]: I0312 18:43:48.738169 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh\" (UID: \"f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh" Mar 12 18:43:48.739629 
master-0 kubenswrapper[29097]: E0312 18:43:48.738311 29097 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 18:43:48.739629 master-0 kubenswrapper[29097]: E0312 18:43:48.738361 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46-cert podName:f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46 nodeName:}" failed. No retries permitted until 2026-03-12 18:43:50.73834535 +0000 UTC m=+870.292325447 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh" (UID: "f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 18:43:48.774161 master-0 kubenswrapper[29097]: I0312 18:43:48.774111 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-c4xmt"] Mar 12 18:43:48.842404 master-0 kubenswrapper[29097]: I0312 18:43:48.841936 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-hszdc"] Mar 12 18:43:48.894429 master-0 kubenswrapper[29097]: I0312 18:43:48.894254 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-75nl8"] Mar 12 18:43:49.018591 master-0 kubenswrapper[29097]: I0312 18:43:49.018459 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5npkd"] Mar 12 18:43:49.035010 master-0 kubenswrapper[29097]: W0312 18:43:49.034575 29097 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80304a52_fbd4_40c6_9467_92570ac930a3.slice/crio-4b62d301db33ef4ed8a43110f3eab11b2c3215e0570a8b4f413a4772217cdc58 WatchSource:0}: Error finding container 4b62d301db33ef4ed8a43110f3eab11b2c3215e0570a8b4f413a4772217cdc58: Status 404 returned error can't find the container with id 4b62d301db33ef4ed8a43110f3eab11b2c3215e0570a8b4f413a4772217cdc58 Mar 12 18:43:49.065550 master-0 kubenswrapper[29097]: I0312 18:43:49.060881 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-svkjf"] Mar 12 18:43:49.091843 master-0 kubenswrapper[29097]: I0312 18:43:49.089620 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-snrcc"] Mar 12 18:43:49.096520 master-0 kubenswrapper[29097]: W0312 18:43:49.096435 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27cf3106_f02c_4b8b_b35e_aa261037f930.slice/crio-8c724bab6d2a296531b4dc6b36bf7d87c0b2c051f6ba001a63ea8bd885a23396 WatchSource:0}: Error finding container 8c724bab6d2a296531b4dc6b36bf7d87c0b2c051f6ba001a63ea8bd885a23396: Status 404 returned error can't find the container with id 8c724bab6d2a296531b4dc6b36bf7d87c0b2c051f6ba001a63ea8bd885a23396 Mar 12 18:43:49.112564 master-0 kubenswrapper[29097]: W0312 18:43:49.111076 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a6c3e65_e541_4970_bf55_f303817995c0.slice/crio-08154936db8604dab7babc647e5df1e9e8311458041ba939cd9d3df84cf4849a WatchSource:0}: Error finding container 08154936db8604dab7babc647e5df1e9e8311458041ba939cd9d3df84cf4849a: Status 404 returned error can't find the container with id 08154936db8604dab7babc647e5df1e9e8311458041ba939cd9d3df84cf4849a Mar 12 18:43:49.263870 master-0 kubenswrapper[29097]: 
I0312 18:43:49.261018 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-rn6wp\" (UID: \"8d400cc8-2921-4006-9ac0-2d6e5e2140d7\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp" Mar 12 18:43:49.263870 master-0 kubenswrapper[29097]: E0312 18:43:49.261201 29097 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 18:43:49.263870 master-0 kubenswrapper[29097]: I0312 18:43:49.261253 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-rn6wp\" (UID: \"8d400cc8-2921-4006-9ac0-2d6e5e2140d7\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp" Mar 12 18:43:49.263870 master-0 kubenswrapper[29097]: E0312 18:43:49.261330 29097 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 18:43:49.263870 master-0 kubenswrapper[29097]: E0312 18:43:49.261333 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-webhook-certs podName:8d400cc8-2921-4006-9ac0-2d6e5e2140d7 nodeName:}" failed. No retries permitted until 2026-03-12 18:43:51.261315238 +0000 UTC m=+870.815295335 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-rn6wp" (UID: "8d400cc8-2921-4006-9ac0-2d6e5e2140d7") : secret "webhook-server-cert" not found Mar 12 18:43:49.263870 master-0 kubenswrapper[29097]: E0312 18:43:49.261546 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-metrics-certs podName:8d400cc8-2921-4006-9ac0-2d6e5e2140d7 nodeName:}" failed. No retries permitted until 2026-03-12 18:43:51.261370009 +0000 UTC m=+870.815350096 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-rn6wp" (UID: "8d400cc8-2921-4006-9ac0-2d6e5e2140d7") : secret "metrics-server-cert" not found Mar 12 18:43:49.282823 master-0 kubenswrapper[29097]: I0312 18:43:49.282741 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2l7g8"] Mar 12 18:43:49.314448 master-0 kubenswrapper[29097]: I0312 18:43:49.314370 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-v8xwb" event={"ID":"b7201b74-50e0-42cb-a896-0870ab9b41ce","Type":"ContainerStarted","Data":"782701f233e227e33507e50b0c7f801162df752c9e1678b13d66fb973d18a409"} Mar 12 18:43:49.316921 master-0 kubenswrapper[29097]: I0312 18:43:49.316878 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-7f7bk" event={"ID":"bd3cb300-0aee-4771-bd27-162e219b892a","Type":"ContainerStarted","Data":"05f51d5aeab2e8ff7355a3851db3d0285c0863c2e3155d268e6a81115cf239e5"} Mar 12 18:43:49.320790 master-0 kubenswrapper[29097]: I0312 18:43:49.320675 29097 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2l7g8" event={"ID":"d5559f37-caa9-4f7c-913f-cea72d79eb03","Type":"ContainerStarted","Data":"8590be24cd2820eda1abfea92d54251e4f562d85882a3f6296a80a06232912cf"} Mar 12 18:43:49.323280 master-0 kubenswrapper[29097]: I0312 18:43:49.323249 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5npkd" event={"ID":"80304a52-fbd4-40c6-9467-92570ac930a3","Type":"ContainerStarted","Data":"4b62d301db33ef4ed8a43110f3eab11b2c3215e0570a8b4f413a4772217cdc58"} Mar 12 18:43:49.348636 master-0 kubenswrapper[29097]: I0312 18:43:49.344731 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-75nl8" event={"ID":"b9e311e9-c85b-4072-972d-3dc6f8ff5f64","Type":"ContainerStarted","Data":"19401ef2db4134132f4f2601554bc4cdc716cf7d8829bccca50a7ba6a0f71420"} Mar 12 18:43:49.348636 master-0 kubenswrapper[29097]: I0312 18:43:49.346000 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sc2sm"] Mar 12 18:43:49.355047 master-0 kubenswrapper[29097]: I0312 18:43:49.355014 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-tjm8v"] Mar 12 18:43:49.356972 master-0 kubenswrapper[29097]: I0312 18:43:49.356941 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-snrcc" event={"ID":"0a6c3e65-e541-4970-bf55-f303817995c0","Type":"ContainerStarted","Data":"08154936db8604dab7babc647e5df1e9e8311458041ba939cd9d3df84cf4849a"} Mar 12 18:43:49.361128 master-0 kubenswrapper[29097]: I0312 18:43:49.361098 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-hszdc" event={"ID":"f23f2008-a1bd-41d2-90e0-8885562051bb","Type":"ContainerStarted","Data":"36399c4cc1803e2d0625b30358bddd60337ed4a99c493b31b0291a301a8bbbd2"} Mar 12 18:43:49.363708 master-0 kubenswrapper[29097]: I0312 18:43:49.363682 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-l6v2h" event={"ID":"22702d59-1bb1-494c-a0a8-f9f0fd34b196","Type":"ContainerStarted","Data":"fde31e6c4f1341b98bc3e95bc9bd9d82a84e95bb0a40e0a3b26ba4419de7390f"} Mar 12 18:43:49.367502 master-0 kubenswrapper[29097]: I0312 18:43:49.367450 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-c4xmt" event={"ID":"fa6bbd68-101b-47ce-a9d6-9ad81777a1d3","Type":"ContainerStarted","Data":"ec477e8338bdb65eed541213d235de5564e7067996989e22d66eacb7f02e8b2f"} Mar 12 18:43:49.368928 master-0 kubenswrapper[29097]: I0312 18:43:49.368879 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-svkjf" event={"ID":"27cf3106-f02c-4b8b-b35e-aa261037f930","Type":"ContainerStarted","Data":"8c724bab6d2a296531b4dc6b36bf7d87c0b2c051f6ba001a63ea8bd885a23396"} Mar 12 18:43:50.277165 master-0 kubenswrapper[29097]: I0312 18:43:50.277112 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a341a0dd-9612-4ebc-a88f-c0afe26c6859-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-z8trl\" (UID: \"a341a0dd-9612-4ebc-a88f-c0afe26c6859\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-z8trl" Mar 12 18:43:50.277686 master-0 kubenswrapper[29097]: E0312 18:43:50.277376 29097 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 18:43:50.277686 
master-0 kubenswrapper[29097]: E0312 18:43:50.277468 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a341a0dd-9612-4ebc-a88f-c0afe26c6859-cert podName:a341a0dd-9612-4ebc-a88f-c0afe26c6859 nodeName:}" failed. No retries permitted until 2026-03-12 18:43:54.277448531 +0000 UTC m=+873.831428618 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a341a0dd-9612-4ebc-a88f-c0afe26c6859-cert") pod "infra-operator-controller-manager-b8c8d7cc8-z8trl" (UID: "a341a0dd-9612-4ebc-a88f-c0afe26c6859") : secret "infra-operator-webhook-server-cert" not found Mar 12 18:43:50.379794 master-0 kubenswrapper[29097]: I0312 18:43:50.379733 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-tjm8v" event={"ID":"2712f389-625e-4afa-8e4c-cc844ba4c169","Type":"ContainerStarted","Data":"ac42db69267de370360d6a62b995a4faa659bf2d681c99c7e6df45edfbd803b3"} Mar 12 18:43:50.381900 master-0 kubenswrapper[29097]: I0312 18:43:50.381847 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sc2sm" event={"ID":"a909c01e-f800-45ee-94d1-bb4880efd178","Type":"ContainerStarted","Data":"b5981a4da638726c65559a51a091ba70c8b0425cc432c7d51d8ad62054c07182"} Mar 12 18:43:50.787609 master-0 kubenswrapper[29097]: I0312 18:43:50.787123 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh\" (UID: \"f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh" Mar 12 18:43:50.787609 master-0 kubenswrapper[29097]: E0312 18:43:50.787273 29097 secret.go:189] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 18:43:50.787609 master-0 kubenswrapper[29097]: E0312 18:43:50.787319 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46-cert podName:f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46 nodeName:}" failed. No retries permitted until 2026-03-12 18:43:54.787306492 +0000 UTC m=+874.341286589 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh" (UID: "f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 18:43:51.581432 master-0 kubenswrapper[29097]: I0312 18:43:51.579935 29097 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-d5tcw" podUID="d3e5b8c8-a100-4880-a0b9-9c3989d4e739" containerName="registry-server" probeResult="failure" output=< Mar 12 18:43:51.581432 master-0 kubenswrapper[29097]: timeout: failed to connect service ":50051" within 1s Mar 12 18:43:51.581432 master-0 kubenswrapper[29097]: > Mar 12 18:43:51.584038 master-0 kubenswrapper[29097]: I0312 18:43:51.583553 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-rn6wp\" (UID: \"8d400cc8-2921-4006-9ac0-2d6e5e2140d7\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp" Mar 12 18:43:51.584460 master-0 kubenswrapper[29097]: I0312 18:43:51.584407 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-rn6wp\" (UID: \"8d400cc8-2921-4006-9ac0-2d6e5e2140d7\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp" Mar 12 18:43:51.585011 master-0 kubenswrapper[29097]: E0312 18:43:51.584970 29097 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 18:43:51.585091 master-0 kubenswrapper[29097]: E0312 18:43:51.585060 29097 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 18:43:51.585211 master-0 kubenswrapper[29097]: E0312 18:43:51.585134 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-metrics-certs podName:8d400cc8-2921-4006-9ac0-2d6e5e2140d7 nodeName:}" failed. No retries permitted until 2026-03-12 18:43:55.585114406 +0000 UTC m=+875.139094573 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-rn6wp" (UID: "8d400cc8-2921-4006-9ac0-2d6e5e2140d7") : secret "metrics-server-cert" not found Mar 12 18:43:51.585211 master-0 kubenswrapper[29097]: E0312 18:43:51.585153 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-webhook-certs podName:8d400cc8-2921-4006-9ac0-2d6e5e2140d7 nodeName:}" failed. No retries permitted until 2026-03-12 18:43:55.585143887 +0000 UTC m=+875.139124084 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-rn6wp" (UID: "8d400cc8-2921-4006-9ac0-2d6e5e2140d7") : secret "webhook-server-cert" not found Mar 12 18:43:53.231887 master-0 kubenswrapper[29097]: I0312 18:43:53.226727 29097 patch_prober.go:28] interesting pod/catalog-operator-7d9c49f57b-pslh7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 18:43:53.231887 master-0 kubenswrapper[29097]: I0312 18:43:53.226865 29097 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-pslh7" podUID="47850839-bb4b-41e9-ac31-f1cabbb4926d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 18:43:54.338201 master-0 kubenswrapper[29097]: I0312 18:43:54.337305 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a341a0dd-9612-4ebc-a88f-c0afe26c6859-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-z8trl\" (UID: \"a341a0dd-9612-4ebc-a88f-c0afe26c6859\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-z8trl" Mar 12 18:43:54.338201 master-0 kubenswrapper[29097]: E0312 18:43:54.337495 29097 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 18:43:54.338201 master-0 kubenswrapper[29097]: E0312 18:43:54.337591 29097 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a341a0dd-9612-4ebc-a88f-c0afe26c6859-cert podName:a341a0dd-9612-4ebc-a88f-c0afe26c6859 nodeName:}" failed. No retries permitted until 2026-03-12 18:44:02.33757111 +0000 UTC m=+881.891551207 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a341a0dd-9612-4ebc-a88f-c0afe26c6859-cert") pod "infra-operator-controller-manager-b8c8d7cc8-z8trl" (UID: "a341a0dd-9612-4ebc-a88f-c0afe26c6859") : secret "infra-operator-webhook-server-cert" not found Mar 12 18:43:54.845414 master-0 kubenswrapper[29097]: I0312 18:43:54.845302 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh\" (UID: \"f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh" Mar 12 18:43:54.846529 master-0 kubenswrapper[29097]: E0312 18:43:54.845597 29097 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 18:43:54.846529 master-0 kubenswrapper[29097]: E0312 18:43:54.845800 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46-cert podName:f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46 nodeName:}" failed. No retries permitted until 2026-03-12 18:44:02.845774809 +0000 UTC m=+882.399754926 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh" (UID: "f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 18:43:55.657781 master-0 kubenswrapper[29097]: I0312 18:43:55.657719 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-rn6wp\" (UID: \"8d400cc8-2921-4006-9ac0-2d6e5e2140d7\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp" Mar 12 18:43:55.658375 master-0 kubenswrapper[29097]: I0312 18:43:55.657950 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-rn6wp\" (UID: \"8d400cc8-2921-4006-9ac0-2d6e5e2140d7\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp" Mar 12 18:43:55.658375 master-0 kubenswrapper[29097]: E0312 18:43:55.657953 29097 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 18:43:55.658375 master-0 kubenswrapper[29097]: E0312 18:43:55.658065 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-metrics-certs podName:8d400cc8-2921-4006-9ac0-2d6e5e2140d7 nodeName:}" failed. No retries permitted until 2026-03-12 18:44:03.658042995 +0000 UTC m=+883.212023162 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-rn6wp" (UID: "8d400cc8-2921-4006-9ac0-2d6e5e2140d7") : secret "metrics-server-cert" not found Mar 12 18:43:55.658375 master-0 kubenswrapper[29097]: E0312 18:43:55.658073 29097 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 18:43:55.658375 master-0 kubenswrapper[29097]: E0312 18:43:55.658122 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-webhook-certs podName:8d400cc8-2921-4006-9ac0-2d6e5e2140d7 nodeName:}" failed. No retries permitted until 2026-03-12 18:44:03.658106387 +0000 UTC m=+883.212086564 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-rn6wp" (UID: "8d400cc8-2921-4006-9ac0-2d6e5e2140d7") : secret "webhook-server-cert" not found Mar 12 18:44:02.425964 master-0 kubenswrapper[29097]: I0312 18:44:02.425732 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a341a0dd-9612-4ebc-a88f-c0afe26c6859-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-z8trl\" (UID: \"a341a0dd-9612-4ebc-a88f-c0afe26c6859\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-z8trl" Mar 12 18:44:02.429916 master-0 kubenswrapper[29097]: I0312 18:44:02.429880 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a341a0dd-9612-4ebc-a88f-c0afe26c6859-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-z8trl\" (UID: \"a341a0dd-9612-4ebc-a88f-c0afe26c6859\") " 
pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-z8trl"
Mar 12 18:44:02.706342 master-0 kubenswrapper[29097]: I0312 18:44:02.706222 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-z8trl"
Mar 12 18:44:02.934238 master-0 kubenswrapper[29097]: I0312 18:44:02.934139 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh\" (UID: \"f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh"
Mar 12 18:44:02.940350 master-0 kubenswrapper[29097]: I0312 18:44:02.940281 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh\" (UID: \"f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh"
Mar 12 18:44:03.182399 master-0 kubenswrapper[29097]: I0312 18:44:03.182322 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh"
Mar 12 18:44:03.821239 master-0 kubenswrapper[29097]: I0312 18:44:03.748426 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-rn6wp\" (UID: \"8d400cc8-2921-4006-9ac0-2d6e5e2140d7\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp"
Mar 12 18:44:03.821239 master-0 kubenswrapper[29097]: E0312 18:44:03.748674 29097 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Mar 12 18:44:03.821239 master-0 kubenswrapper[29097]: E0312 18:44:03.748769 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-metrics-certs podName:8d400cc8-2921-4006-9ac0-2d6e5e2140d7 nodeName:}" failed. No retries permitted until 2026-03-12 18:44:19.748748749 +0000 UTC m=+899.302728846 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-rn6wp" (UID: "8d400cc8-2921-4006-9ac0-2d6e5e2140d7") : secret "metrics-server-cert" not found
Mar 12 18:44:03.821239 master-0 kubenswrapper[29097]: E0312 18:44:03.748772 29097 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Mar 12 18:44:03.821239 master-0 kubenswrapper[29097]: I0312 18:44:03.748681 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-rn6wp\" (UID: \"8d400cc8-2921-4006-9ac0-2d6e5e2140d7\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp"
Mar 12 18:44:03.821239 master-0 kubenswrapper[29097]: E0312 18:44:03.748861 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-webhook-certs podName:8d400cc8-2921-4006-9ac0-2d6e5e2140d7 nodeName:}" failed. No retries permitted until 2026-03-12 18:44:19.748842741 +0000 UTC m=+899.302822878 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-rn6wp" (UID: "8d400cc8-2921-4006-9ac0-2d6e5e2140d7") : secret "webhook-server-cert" not found
Mar 12 18:44:19.758641 master-0 kubenswrapper[29097]: I0312 18:44:19.758479 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-rn6wp\" (UID: \"8d400cc8-2921-4006-9ac0-2d6e5e2140d7\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp"
Mar 12 18:44:19.759961 master-0 kubenswrapper[29097]: I0312 18:44:19.759922 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-rn6wp\" (UID: \"8d400cc8-2921-4006-9ac0-2d6e5e2140d7\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp"
Mar 12 18:44:19.764972 master-0 kubenswrapper[29097]: I0312 18:44:19.764889 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-rn6wp\" (UID: \"8d400cc8-2921-4006-9ac0-2d6e5e2140d7\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp"
Mar 12 18:44:19.765996 master-0 kubenswrapper[29097]: I0312 18:44:19.765942 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d400cc8-2921-4006-9ac0-2d6e5e2140d7-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-rn6wp\" (UID: \"8d400cc8-2921-4006-9ac0-2d6e5e2140d7\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp"
Mar 12 18:44:19.835546 master-0 kubenswrapper[29097]: I0312 18:44:19.835377 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp"
Mar 12 18:44:21.233887 master-0 kubenswrapper[29097]: E0312 18:44:21.233818 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547
Mar 12 18:44:30.596359 master-0 kubenswrapper[29097]: I0312 18:44:30.579357 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-b8c8d7cc8-z8trl"]
Mar 12 18:44:30.615881 master-0 kubenswrapper[29097]: W0312 18:44:30.613069 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda341a0dd_9612_4ebc_a88f_c0afe26c6859.slice/crio-6d72259f442385789b9c51f97bf2146ffe9d680e2272b1c179f1e1370dee3c86 WatchSource:0}: Error finding container 6d72259f442385789b9c51f97bf2146ffe9d680e2272b1c179f1e1370dee3c86: Status 404 returned error can't find the container with id 6d72259f442385789b9c51f97bf2146ffe9d680e2272b1c179f1e1370dee3c86
Mar 12 18:44:30.808587 master-0 kubenswrapper[29097]: I0312 18:44:30.808435 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-z8trl" event={"ID":"a341a0dd-9612-4ebc-a88f-c0afe26c6859","Type":"ContainerStarted","Data":"6d72259f442385789b9c51f97bf2146ffe9d680e2272b1c179f1e1370dee3c86"}
Mar 12 18:44:31.857834 master-0 kubenswrapper[29097]: I0312 18:44:31.857771 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-7pdcm" event={"ID":"180f0122-4a3b-40bd-b8e4-c81368e5be7f","Type":"ContainerStarted","Data":"2718108e0001e297f9b2f81971c73b5749aa8be2df3a168a9d29aeebf387cb2e"}
Mar 12 18:44:31.858955 master-0 kubenswrapper[29097]: I0312 18:44:31.858918 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-7pdcm"
Mar 12 18:44:31.876359 master-0 kubenswrapper[29097]: I0312 18:44:31.876291 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-7f7bk" event={"ID":"bd3cb300-0aee-4771-bd27-162e219b892a","Type":"ContainerStarted","Data":"3bea1fc287ed70a7218c9fd0a1c539cdfd01b9bbb13262b7c08a736cee383160"}
Mar 12 18:44:31.892900 master-0 kubenswrapper[29097]: I0312 18:44:31.892855 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-2j959" event={"ID":"a9aded67-7bb9-4df6-8334-ddffef99aa7f","Type":"ContainerStarted","Data":"33e19606f7c1c58bd8621fbe6087922dcc46b09ccaf2e8d1b24db45cb9684c28"}
Mar 12 18:44:31.893422 master-0 kubenswrapper[29097]: I0312 18:44:31.893391 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-2j959"
Mar 12 18:44:31.898268 master-0 kubenswrapper[29097]: I0312 18:44:31.898217 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-l6v2h" event={"ID":"22702d59-1bb1-494c-a0a8-f9f0fd34b196","Type":"ContainerStarted","Data":"e4ffd4987fb3a24302ea7dda1f01977692368b4dfa62265eaa2cf64403dc514d"}
Mar 12 18:44:31.898990 master-0 kubenswrapper[29097]: I0312 18:44:31.898955 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-l6v2h"
Mar 12 18:44:31.914534 master-0 kubenswrapper[29097]: I0312 18:44:31.912800 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5npkd" event={"ID":"80304a52-fbd4-40c6-9467-92570ac930a3","Type":"ContainerStarted","Data":"d7371f64b545cec0899e9951f6763251b0d1aa81b90f954642cd653252d1fa3f"}
Mar 12 18:44:31.939534 master-0 kubenswrapper[29097]: I0312 18:44:31.935822 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2l7g8" event={"ID":"d5559f37-caa9-4f7c-913f-cea72d79eb03","Type":"ContainerStarted","Data":"a66a1df9192d1b6acd2f987ced7cc9a2a126b3321f08b6a6b156a47029a9f532"}
Mar 12 18:44:31.939534 master-0 kubenswrapper[29097]: I0312 18:44:31.935902 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2l7g8"
Mar 12 18:44:31.953526 master-0 kubenswrapper[29097]: I0312 18:44:31.951653 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-46wm7" event={"ID":"e9ded411-b09e-46f8-aee1-f6c168f2edd8","Type":"ContainerStarted","Data":"db8c456451d241ea9797701639476dc79c6e1f499207102be66953a23d3a4b52"}
Mar 12 18:44:31.953526 master-0 kubenswrapper[29097]: I0312 18:44:31.952579 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-46wm7"
Mar 12 18:44:31.973525 master-0 kubenswrapper[29097]: I0312 18:44:31.967674 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-44fbq" event={"ID":"13c9cdbc-00fa-4ca0-b136-46f9df179f76","Type":"ContainerStarted","Data":"7619732b18cbd03dd3c03d38ff21d4f4d3706387fd0398194d27c639ef318e06"}
Mar 12 18:44:31.996591 master-0 kubenswrapper[29097]: I0312 18:44:31.990722 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-qkhtq" event={"ID":"f7180d7f-df3c-48a8-b7f2-88c910fb2917","Type":"ContainerStarted","Data":"53577c99cc5ebae3b0c861207baaa6024fa8c23f5a60c1e60ecaaeded112629f"}
Mar 12 18:44:31.996591 master-0 kubenswrapper[29097]: I0312 18:44:31.991563 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-qkhtq"
Mar 12 18:44:32.010534 master-0 kubenswrapper[29097]: I0312 18:44:32.004915 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-hszdc" event={"ID":"f23f2008-a1bd-41d2-90e0-8885562051bb","Type":"ContainerStarted","Data":"a1f654ff3679fcf0c0ad7091be333c4a01e0ad006ca3baf6226975e105685ec5"}
Mar 12 18:44:32.010534 master-0 kubenswrapper[29097]: I0312 18:44:32.005922 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-hszdc"
Mar 12 18:44:32.010534 master-0 kubenswrapper[29097]: I0312 18:44:32.008844 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-tjm8v" event={"ID":"2712f389-625e-4afa-8e4c-cc844ba4c169","Type":"ContainerStarted","Data":"067c1b8333a0ecbffa7568994de3bf6323536ea0318ec786de316da949be6ad8"}
Mar 12 18:44:32.010534 master-0 kubenswrapper[29097]: I0312 18:44:32.010009 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-75nl8" event={"ID":"b9e311e9-c85b-4072-972d-3dc6f8ff5f64","Type":"ContainerStarted","Data":"c00aae79c28da5fbf26b519e3ff9497124a014f68b63dd46e4c488f0c3a5c407"}
Mar 12 18:44:32.015526 master-0 kubenswrapper[29097]: I0312 18:44:32.011175 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-snrcc" event={"ID":"0a6c3e65-e541-4970-bf55-f303817995c0","Type":"ContainerStarted","Data":"6c8ea2c0046b3fb7e8d3a638eecd96df8351066bd12d44d8ef32e4d85747a37a"}
Mar 12 18:44:32.015526 master-0 kubenswrapper[29097]: I0312 18:44:32.011670 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-snrcc"
Mar 12 18:44:32.015526 master-0 kubenswrapper[29097]: I0312 18:44:32.012549 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-v8xwb" event={"ID":"b7201b74-50e0-42cb-a896-0870ab9b41ce","Type":"ContainerStarted","Data":"1e8c699ff37e86ee7b35edc2b6a86a6cf2e287d6f859a6e3c2f81ebea2e672c2"}
Mar 12 18:44:32.019529 master-0 kubenswrapper[29097]: I0312 18:44:32.019335 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-svkjf" event={"ID":"27cf3106-f02c-4b8b-b35e-aa261037f930","Type":"ContainerStarted","Data":"915cbaf7d6f17551dab24fc35b42a316262a2047bb8cf98f05bb76715bd1d8d7"}
Mar 12 18:44:32.025537 master-0 kubenswrapper[29097]: I0312 18:44:32.019862 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-677c674df7-svkjf"
Mar 12 18:44:32.041534 master-0 kubenswrapper[29097]: I0312 18:44:32.040715 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sc2sm" event={"ID":"a909c01e-f800-45ee-94d1-bb4880efd178","Type":"ContainerStarted","Data":"d60ffd8131137b7d1c11f7e2d8a38eee1b5de3b1b954b09feb4c9dc656cf0b5b"}
Mar 12 18:44:32.070537 master-0 kubenswrapper[29097]: I0312 18:44:32.063823 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-c4xmt" event={"ID":"fa6bbd68-101b-47ce-a9d6-9ad81777a1d3","Type":"ContainerStarted","Data":"58dbe89a082673e56059f3f41b0c7b93a48a115b1b2e07e612e115c5acd90ea6"}
Mar 12 18:44:32.084351 master-0 kubenswrapper[29097]: I0312 18:44:32.084099 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-p5v49" event={"ID":"b4a40675-9bcf-43e8-b9a3-2016eb61e4da","Type":"ContainerStarted","Data":"21ffac36bd0187466195e2625ceaf69f22d8deafdc76edb07d173e8407de8cef"}
Mar 12 18:44:32.084351 master-0 kubenswrapper[29097]: I0312 18:44:32.084138 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-p5v49"
Mar 12 18:44:32.098849 master-0 kubenswrapper[29097]: I0312 18:44:32.095923 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-b92h7" event={"ID":"827cb92d-3fbc-48fc-b74d-5a8a9046cadb","Type":"ContainerStarted","Data":"3c1e068f3ec658a13da88a3fc1437c1c6d8651c7fe3587236fa3f26b771e5731"}
Mar 12 18:44:32.098849 master-0 kubenswrapper[29097]: I0312 18:44:32.096659 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-b92h7"
Mar 12 18:44:32.103129 master-0 kubenswrapper[29097]: I0312 18:44:32.101237 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-d29kk" event={"ID":"76af05d5-a6fb-4aea-949a-fa0329c4739c","Type":"ContainerStarted","Data":"79759e4e1157e3ce9d9072eab8251e2252a464d0f093b5c5ae9a81f04e7b4449"}
Mar 12 18:44:32.103129 master-0 kubenswrapper[29097]: I0312 18:44:32.102059 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-d29kk"
Mar 12 18:44:32.143933 master-0 kubenswrapper[29097]: I0312 18:44:32.138239 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp"]
Mar 12 18:44:32.162154 master-0 kubenswrapper[29097]: I0312 18:44:32.154660 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh"]
Mar 12 18:44:32.189000 master-0 kubenswrapper[29097]: I0312 18:44:32.188894 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-7pdcm" podStartSLOduration=6.483262465 podStartE2EDuration="46.1888698s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="2026-03-12 18:43:47.283047051 +0000 UTC m=+866.837027138" lastFinishedPulling="2026-03-12 18:44:26.988654366 +0000 UTC m=+906.542634473" observedRunningTime="2026-03-12 18:44:32.178846769 +0000 UTC m=+911.732826856" watchObservedRunningTime="2026-03-12 18:44:32.1888698 +0000 UTC m=+911.742849887"
Mar 12 18:44:32.652539 master-0 kubenswrapper[29097]: I0312 18:44:32.650196 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-2j959" podStartSLOduration=6.985874025 podStartE2EDuration="46.650166859s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="2026-03-12 18:43:47.326865344 +0000 UTC m=+866.880845451" lastFinishedPulling="2026-03-12 18:44:26.991158178 +0000 UTC m=+906.545138285" observedRunningTime="2026-03-12 18:44:32.649777889 +0000 UTC m=+912.203757986" watchObservedRunningTime="2026-03-12 18:44:32.650166859 +0000 UTC m=+912.204146956"
Mar 12 18:44:32.809577 master-0 kubenswrapper[29097]: I0312 18:44:32.807931 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-b92h7" podStartSLOduration=4.121666274 podStartE2EDuration="46.807909995s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="2026-03-12 18:43:47.97762388 +0000 UTC m=+867.531603987" lastFinishedPulling="2026-03-12 18:44:30.663867601 +0000 UTC m=+910.217847708" observedRunningTime="2026-03-12 18:44:32.789079775 +0000 UTC m=+912.343059872" watchObservedRunningTime="2026-03-12 18:44:32.807909995 +0000 UTC m=+912.361890092"
Mar 12 18:44:32.885748 master-0 kubenswrapper[29097]: I0312 18:44:32.885599 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-p5v49" podStartSLOduration=7.851912052 podStartE2EDuration="46.885579683s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="2026-03-12 18:43:47.956153684 +0000 UTC m=+867.510133781" lastFinishedPulling="2026-03-12 18:44:26.989821305 +0000 UTC m=+906.543801412" observedRunningTime="2026-03-12 18:44:32.843762419 +0000 UTC m=+912.397742536" watchObservedRunningTime="2026-03-12 18:44:32.885579683 +0000 UTC m=+912.439559780"
Mar 12 18:44:32.957079 master-0 kubenswrapper[29097]: I0312 18:44:32.956986 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-677c674df7-svkjf" podStartSLOduration=7.562349689 podStartE2EDuration="46.956963034s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="2026-03-12 18:43:49.114896415 +0000 UTC m=+868.668876512" lastFinishedPulling="2026-03-12 18:44:28.50950974 +0000 UTC m=+908.063489857" observedRunningTime="2026-03-12 18:44:32.86665451 +0000 UTC m=+912.420634607" watchObservedRunningTime="2026-03-12 18:44:32.956963034 +0000 UTC m=+912.510943131"
Mar 12 18:44:32.983190 master-0 kubenswrapper[29097]: I0312 18:44:32.983105 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-snrcc" podStartSLOduration=6.582638944 podStartE2EDuration="46.983084355s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="2026-03-12 18:43:49.126534285 +0000 UTC m=+868.680514382" lastFinishedPulling="2026-03-12 18:44:29.526979666 +0000 UTC m=+909.080959793" observedRunningTime="2026-03-12 18:44:32.93118931 +0000 UTC m=+912.485169407" watchObservedRunningTime="2026-03-12 18:44:32.983084355 +0000 UTC m=+912.537064452"
Mar 12 18:44:32.989910 master-0 kubenswrapper[29097]: I0312 18:44:32.989818 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-hszdc" podStartSLOduration=8.134034752 podStartE2EDuration="46.989796843s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="2026-03-12 18:43:48.683482951 +0000 UTC m=+868.237463038" lastFinishedPulling="2026-03-12 18:44:27.539245042 +0000 UTC m=+907.093225129" observedRunningTime="2026-03-12 18:44:32.984017419 +0000 UTC m=+912.537997516" watchObservedRunningTime="2026-03-12 18:44:32.989796843 +0000 UTC m=+912.543776940"
Mar 12 18:44:33.059136 master-0 kubenswrapper[29097]: I0312 18:44:33.059065 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-qkhtq" podStartSLOduration=7.114251159 podStartE2EDuration="47.059046611s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="2026-03-12 18:43:47.000398639 +0000 UTC m=+866.554378766" lastFinishedPulling="2026-03-12 18:44:26.945194111 +0000 UTC m=+906.499174218" observedRunningTime="2026-03-12 18:44:33.029853772 +0000 UTC m=+912.583833869" watchObservedRunningTime="2026-03-12 18:44:33.059046611 +0000 UTC m=+912.613026708"
Mar 12 18:44:33.096155 master-0 kubenswrapper[29097]: I0312 18:44:33.096066 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-l6v2h" podStartSLOduration=7.269776769 podStartE2EDuration="47.096045864s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="2026-03-12 18:43:48.68263393 +0000 UTC m=+868.236614017" lastFinishedPulling="2026-03-12 18:44:28.508903025 +0000 UTC m=+908.062883112" observedRunningTime="2026-03-12 18:44:33.083768757 +0000 UTC m=+912.637748854" watchObservedRunningTime="2026-03-12 18:44:33.096045864 +0000 UTC m=+912.650025951"
Mar 12 18:44:33.132562 master-0 kubenswrapper[29097]: I0312 18:44:33.131764 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-d29kk" podStartSLOduration=8.159360842 podStartE2EDuration="47.131745374s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="2026-03-12 18:43:48.018168321 +0000 UTC m=+867.572148418" lastFinishedPulling="2026-03-12 18:44:26.990552843 +0000 UTC m=+906.544532950" observedRunningTime="2026-03-12 18:44:33.130974065 +0000 UTC m=+912.684954162" watchObservedRunningTime="2026-03-12 18:44:33.131745374 +0000 UTC m=+912.685725481"
Mar 12 18:44:33.138923 master-0 kubenswrapper[29097]: I0312 18:44:33.138808 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh" event={"ID":"f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46","Type":"ContainerStarted","Data":"85032b146d41bc2717af030e95c1f97668392489d81b21cdd2e859979c158601"}
Mar 12 18:44:33.154029 master-0 kubenswrapper[29097]: I0312 18:44:33.153978 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp" event={"ID":"8d400cc8-2921-4006-9ac0-2d6e5e2140d7","Type":"ContainerStarted","Data":"42c9c40eb9e02c676a6b587b0d2fc6d56c834651561d17ab90c5b7f8274735ea"}
Mar 12 18:44:33.154029 master-0 kubenswrapper[29097]: I0312 18:44:33.154026 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp"
Mar 12 18:44:33.154029 master-0 kubenswrapper[29097]: I0312 18:44:33.154036 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp" event={"ID":"8d400cc8-2921-4006-9ac0-2d6e5e2140d7","Type":"ContainerStarted","Data":"d67c8f81eb49e2de7ba6d6ba52fa98d1290df0938f9e3da39d73fa03e213aa55"}
Mar 12 18:44:33.184024 master-0 kubenswrapper[29097]: I0312 18:44:33.182813 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-46wm7" podStartSLOduration=7.939501127 podStartE2EDuration="47.182792298s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="2026-03-12 18:43:47.747122199 +0000 UTC m=+867.301102306" lastFinishedPulling="2026-03-12 18:44:26.99041337 +0000 UTC m=+906.544393477" observedRunningTime="2026-03-12 18:44:33.180187593 +0000 UTC m=+912.734167690" watchObservedRunningTime="2026-03-12 18:44:33.182792298 +0000 UTC m=+912.736772395"
Mar 12 18:44:33.253536 master-0 kubenswrapper[29097]: I0312 18:44:33.249455 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2l7g8" podStartSLOduration=9.534683628 podStartE2EDuration="47.24941896s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="2026-03-12 18:43:49.294752992 +0000 UTC m=+868.848733089" lastFinishedPulling="2026-03-12 18:44:27.009488324 +0000 UTC m=+906.563468421" observedRunningTime="2026-03-12 18:44:33.221169236 +0000 UTC m=+912.775149333" watchObservedRunningTime="2026-03-12 18:44:33.24941896 +0000 UTC m=+912.803399057"
Mar 12 18:44:33.309128 master-0 kubenswrapper[29097]: I0312 18:44:33.309029 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-c4xmt" podStartSLOduration=5.460553698 podStartE2EDuration="47.309006607s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="2026-03-12 18:43:48.723363296 +0000 UTC m=+868.277343403" lastFinishedPulling="2026-03-12 18:44:30.571816155 +0000 UTC m=+910.125796312" observedRunningTime="2026-03-12 18:44:33.275220264 +0000 UTC m=+912.829200361" watchObservedRunningTime="2026-03-12 18:44:33.309006607 +0000 UTC m=+912.862986704"
Mar 12 18:44:33.327750 master-0 kubenswrapper[29097]: I0312 18:44:33.327642 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-v8xwb" podStartSLOduration=8.470803244 podStartE2EDuration="47.327615271s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="2026-03-12 18:43:48.682493456 +0000 UTC m=+868.236473543" lastFinishedPulling="2026-03-12 18:44:27.539305463 +0000 UTC m=+907.093285570" observedRunningTime="2026-03-12 18:44:33.320766751 +0000 UTC m=+912.874746858" watchObservedRunningTime="2026-03-12 18:44:33.327615271 +0000 UTC m=+912.881595378"
Mar 12 18:44:33.448078 master-0 kubenswrapper[29097]: I0312 18:44:33.445704 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp" podStartSLOduration=47.445690147 podStartE2EDuration="47.445690147s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:44:33.443316928 +0000 UTC m=+912.997297025" watchObservedRunningTime="2026-03-12 18:44:33.445690147 +0000 UTC m=+912.999670244"
Mar 12 18:44:33.456025 master-0 kubenswrapper[29097]: I0312 18:44:33.455945 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-7f7bk" podStartSLOduration=6.575908717 podStartE2EDuration="47.455925183s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="2026-03-12 18:43:48.647729249 +0000 UTC m=+868.201709346" lastFinishedPulling="2026-03-12 18:44:29.527745715 +0000 UTC m=+909.081725812" observedRunningTime="2026-03-12 18:44:33.364795509 +0000 UTC m=+912.918775606" watchObservedRunningTime="2026-03-12 18:44:33.455925183 +0000 UTC m=+913.009905280"
Mar 12 18:44:33.496257 master-0 kubenswrapper[29097]: I0312 18:44:33.496175 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-75nl8" podStartSLOduration=6.698426194 podStartE2EDuration="47.496155277s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="2026-03-12 18:43:48.729222852 +0000 UTC m=+868.283202959" lastFinishedPulling="2026-03-12 18:44:29.526951905 +0000 UTC m=+909.080932042" observedRunningTime="2026-03-12 18:44:33.488109936 +0000 UTC m=+913.042090033" watchObservedRunningTime="2026-03-12 18:44:33.496155277 +0000 UTC m=+913.050135374"
Mar 12 18:44:33.532559 master-0 kubenswrapper[29097]: I0312 18:44:33.529354 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sc2sm" podStartSLOduration=5.99766213 podStartE2EDuration="47.529334294s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="2026-03-12 18:43:49.360684537 +0000 UTC m=+868.914664634" lastFinishedPulling="2026-03-12 18:44:30.892356681 +0000 UTC m=+910.446336798" observedRunningTime="2026-03-12 18:44:33.516980926 +0000 UTC m=+913.070961023" watchObservedRunningTime="2026-03-12 18:44:33.529334294 +0000 UTC m=+913.083314391"
Mar 12 18:44:33.575596 master-0 kubenswrapper[29097]: I0312 18:44:33.573938 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-tjm8v" podStartSLOduration=7.403886615 podStartE2EDuration="47.573923197s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="2026-03-12 18:43:49.357006316 +0000 UTC m=+868.910986413" lastFinishedPulling="2026-03-12 18:44:29.527042868 +0000 UTC m=+909.081022995" observedRunningTime="2026-03-12 18:44:33.573026094 +0000 UTC m=+913.127006201" watchObservedRunningTime="2026-03-12 18:44:33.573923197 +0000 UTC m=+913.127903294"
Mar 12 18:44:33.614002 master-0 kubenswrapper[29097]: I0312 18:44:33.613910 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-44fbq" podStartSLOduration=5.834247852 podStartE2EDuration="47.613887344s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="2026-03-12 18:43:47.746937184 +0000 UTC m=+867.300917281" lastFinishedPulling="2026-03-12 18:44:29.526576646 +0000 UTC m=+909.080556773" observedRunningTime="2026-03-12 18:44:33.604733116 +0000 UTC m=+913.158713213" watchObservedRunningTime="2026-03-12 18:44:33.613887344 +0000 UTC m=+913.167867441"
Mar 12 18:44:33.650848 master-0 kubenswrapper[29097]: I0312 18:44:33.641646 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5npkd" podStartSLOduration=7.153697442 podStartE2EDuration="47.641628726s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="2026-03-12 18:43:49.039666478 +0000 UTC m=+868.593646575" lastFinishedPulling="2026-03-12 18:44:29.527597722 +0000 UTC m=+909.081577859" observedRunningTime="2026-03-12 18:44:33.641013791 +0000 UTC m=+913.194993898" watchObservedRunningTime="2026-03-12 18:44:33.641628726 +0000 UTC m=+913.195608823"
Mar 12 18:44:36.179404 master-0 kubenswrapper[29097]: I0312 18:44:36.179306 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh" event={"ID":"f74d6a71-4eae-4ad6-bf37-ea4c1c1acf46","Type":"ContainerStarted","Data":"2d14a7003c251d8c097d26d5fdc044fb6db0e411d984bdf75bc35c0ec615f8f6"}
Mar 12 18:44:36.180709 master-0 kubenswrapper[29097]: I0312 18:44:36.180633 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh"
Mar 12 18:44:36.182092 master-0 kubenswrapper[29097]: I0312 18:44:36.182041 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-z8trl" event={"ID":"a341a0dd-9612-4ebc-a88f-c0afe26c6859","Type":"ContainerStarted","Data":"628acefda486497da881b085628e10de31d0a5da2fac8165d34df27a9f4b5765"}
Mar 12 18:44:36.182260 master-0 kubenswrapper[29097]: I0312 18:44:36.182233 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-z8trl"
Mar 12 18:44:36.226898 master-0 kubenswrapper[29097]: I0312 18:44:36.226802 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh" podStartSLOduration=46.580347627 podStartE2EDuration="50.226753144s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="2026-03-12 18:44:32.124604066 +0000 UTC m=+911.678584163" lastFinishedPulling="2026-03-12 18:44:35.771009593 +0000 UTC m=+915.324989680" observedRunningTime="2026-03-12 18:44:36.221187295 +0000 UTC m=+915.775167392" watchObservedRunningTime="2026-03-12 18:44:36.226753144 +0000 UTC m=+915.780733251"
Mar 12 18:44:36.269698 master-0 kubenswrapper[29097]: I0312 18:44:36.269583 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-z8trl" podStartSLOduration=45.135693372 podStartE2EDuration="50.269525871s" podCreationTimestamp="2026-03-12 18:43:46 +0000 UTC" firstStartedPulling="2026-03-12 18:44:30.632066587 +0000 UTC m=+910.186046724" lastFinishedPulling="2026-03-12 18:44:35.765899126 +0000 UTC m=+915.319879223" observedRunningTime="2026-03-12 18:44:36.262010504 +0000 UTC m=+915.815990591" watchObservedRunningTime="2026-03-12 18:44:36.269525871 +0000 UTC m=+915.823505968"
Mar 12 18:44:36.406008 master-0 kubenswrapper[29097]: I0312 18:44:36.405915 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-2j959"
Mar 12 18:44:36.439961 master-0 kubenswrapper[29097]: I0312 18:44:36.439775 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-qkhtq"
Mar 12 18:44:36.516936 master-0 kubenswrapper[29097]: I0312 18:44:36.512786 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-7pdcm"
Mar 12 18:44:36.564508 master-0 kubenswrapper[29097]: I0312 18:44:36.564453 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-44fbq"
Mar 12 18:44:36.566126 master-0 kubenswrapper[29097]: I0312 18:44:36.565877 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-44fbq"
Mar 12 18:44:36.606817
master-0 kubenswrapper[29097]: I0312 18:44:36.605860 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-46wm7" Mar 12 18:44:36.743256 master-0 kubenswrapper[29097]: I0312 18:44:36.743050 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-p5v49" Mar 12 18:44:37.008426 master-0 kubenswrapper[29097]: I0312 18:44:37.008257 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-d29kk" Mar 12 18:44:37.040435 master-0 kubenswrapper[29097]: I0312 18:44:37.040369 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-b92h7" Mar 12 18:44:37.140934 master-0 kubenswrapper[29097]: I0312 18:44:37.140826 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-7f7bk" Mar 12 18:44:37.143832 master-0 kubenswrapper[29097]: I0312 18:44:37.143801 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-7f7bk" Mar 12 18:44:37.147501 master-0 kubenswrapper[29097]: I0312 18:44:37.147466 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-l6v2h" Mar 12 18:44:37.174114 master-0 kubenswrapper[29097]: I0312 18:44:37.174053 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-hszdc" Mar 12 18:44:37.199666 master-0 kubenswrapper[29097]: I0312 18:44:37.199594 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-569cc54c5-c4xmt" Mar 12 18:44:37.202989 master-0 kubenswrapper[29097]: I0312 18:44:37.201136 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-c4xmt" Mar 12 18:44:37.253318 master-0 kubenswrapper[29097]: I0312 18:44:37.253253 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-75nl8" Mar 12 18:44:37.258373 master-0 kubenswrapper[29097]: I0312 18:44:37.257097 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-75nl8" Mar 12 18:44:37.307614 master-0 kubenswrapper[29097]: I0312 18:44:37.307503 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-v8xwb" Mar 12 18:44:37.333561 master-0 kubenswrapper[29097]: I0312 18:44:37.320653 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-v8xwb" Mar 12 18:44:37.445563 master-0 kubenswrapper[29097]: I0312 18:44:37.445453 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5npkd" Mar 12 18:44:37.451765 master-0 kubenswrapper[29097]: I0312 18:44:37.451659 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-5npkd" Mar 12 18:44:37.637806 master-0 kubenswrapper[29097]: I0312 18:44:37.637653 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-snrcc" Mar 12 18:44:37.682458 master-0 kubenswrapper[29097]: I0312 18:44:37.682344 29097 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-677c674df7-svkjf" Mar 12 18:44:37.702228 master-0 kubenswrapper[29097]: I0312 18:44:37.702176 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-tjm8v" Mar 12 18:44:37.715561 master-0 kubenswrapper[29097]: I0312 18:44:37.715144 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-tjm8v" Mar 12 18:44:37.729169 master-0 kubenswrapper[29097]: I0312 18:44:37.729122 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2l7g8" Mar 12 18:44:39.846597 master-0 kubenswrapper[29097]: I0312 18:44:39.846480 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-rn6wp" Mar 12 18:44:42.712247 master-0 kubenswrapper[29097]: I0312 18:44:42.712162 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-z8trl" Mar 12 18:44:43.188613 master-0 kubenswrapper[29097]: I0312 18:44:43.188561 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-tt2jh" Mar 12 18:45:21.224404 master-0 kubenswrapper[29097]: E0312 18:45:21.224340 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 
26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547 Mar 12 18:45:25.806302 master-0 kubenswrapper[29097]: I0312 18:45:25.806239 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-tfc89"] Mar 12 18:45:25.819097 master-0 kubenswrapper[29097]: I0312 18:45:25.808358 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-tfc89" Mar 12 18:45:25.820629 master-0 kubenswrapper[29097]: I0312 18:45:25.820592 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 12 18:45:25.821536 master-0 kubenswrapper[29097]: I0312 18:45:25.820821 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 12 18:45:25.821536 master-0 kubenswrapper[29097]: I0312 18:45:25.821009 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 12 18:45:25.854544 master-0 kubenswrapper[29097]: I0312 18:45:25.848712 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-tfc89"] Mar 12 18:45:25.903550 master-0 kubenswrapper[29097]: I0312 18:45:25.890487 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-8g7pv"] Mar 12 18:45:25.903550 master-0 kubenswrapper[29097]: I0312 18:45:25.897929 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-8g7pv" Mar 12 18:45:25.915654 master-0 kubenswrapper[29097]: I0312 18:45:25.912192 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 12 18:45:25.921102 master-0 kubenswrapper[29097]: I0312 18:45:25.921046 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4da79c5-a6b0-4e60-8816-a9e69f3a7e96-config\") pod \"dnsmasq-dns-685c76cf85-tfc89\" (UID: \"b4da79c5-a6b0-4e60-8816-a9e69f3a7e96\") " pod="openstack/dnsmasq-dns-685c76cf85-tfc89" Mar 12 18:45:25.921473 master-0 kubenswrapper[29097]: I0312 18:45:25.921451 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dn24\" (UniqueName: \"kubernetes.io/projected/b4da79c5-a6b0-4e60-8816-a9e69f3a7e96-kube-api-access-7dn24\") pod \"dnsmasq-dns-685c76cf85-tfc89\" (UID: \"b4da79c5-a6b0-4e60-8816-a9e69f3a7e96\") " pod="openstack/dnsmasq-dns-685c76cf85-tfc89" Mar 12 18:45:25.953549 master-0 kubenswrapper[29097]: I0312 18:45:25.941858 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-8g7pv"] Mar 12 18:45:26.023537 master-0 kubenswrapper[29097]: I0312 18:45:26.022919 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krfsq\" (UniqueName: \"kubernetes.io/projected/aa942752-02ac-4a81-9822-6b5adf5c5b91-kube-api-access-krfsq\") pod \"dnsmasq-dns-8476fd89bc-8g7pv\" (UID: \"aa942752-02ac-4a81-9822-6b5adf5c5b91\") " pod="openstack/dnsmasq-dns-8476fd89bc-8g7pv" Mar 12 18:45:26.023537 master-0 kubenswrapper[29097]: I0312 18:45:26.022993 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4da79c5-a6b0-4e60-8816-a9e69f3a7e96-config\") pod \"dnsmasq-dns-685c76cf85-tfc89\" (UID: 
\"b4da79c5-a6b0-4e60-8816-a9e69f3a7e96\") " pod="openstack/dnsmasq-dns-685c76cf85-tfc89" Mar 12 18:45:26.023537 master-0 kubenswrapper[29097]: I0312 18:45:26.023018 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa942752-02ac-4a81-9822-6b5adf5c5b91-config\") pod \"dnsmasq-dns-8476fd89bc-8g7pv\" (UID: \"aa942752-02ac-4a81-9822-6b5adf5c5b91\") " pod="openstack/dnsmasq-dns-8476fd89bc-8g7pv" Mar 12 18:45:26.023537 master-0 kubenswrapper[29097]: I0312 18:45:26.023073 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dn24\" (UniqueName: \"kubernetes.io/projected/b4da79c5-a6b0-4e60-8816-a9e69f3a7e96-kube-api-access-7dn24\") pod \"dnsmasq-dns-685c76cf85-tfc89\" (UID: \"b4da79c5-a6b0-4e60-8816-a9e69f3a7e96\") " pod="openstack/dnsmasq-dns-685c76cf85-tfc89" Mar 12 18:45:26.023537 master-0 kubenswrapper[29097]: I0312 18:45:26.023154 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa942752-02ac-4a81-9822-6b5adf5c5b91-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-8g7pv\" (UID: \"aa942752-02ac-4a81-9822-6b5adf5c5b91\") " pod="openstack/dnsmasq-dns-8476fd89bc-8g7pv" Mar 12 18:45:26.057717 master-0 kubenswrapper[29097]: I0312 18:45:26.057599 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4da79c5-a6b0-4e60-8816-a9e69f3a7e96-config\") pod \"dnsmasq-dns-685c76cf85-tfc89\" (UID: \"b4da79c5-a6b0-4e60-8816-a9e69f3a7e96\") " pod="openstack/dnsmasq-dns-685c76cf85-tfc89" Mar 12 18:45:26.102537 master-0 kubenswrapper[29097]: I0312 18:45:26.102214 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dn24\" (UniqueName: \"kubernetes.io/projected/b4da79c5-a6b0-4e60-8816-a9e69f3a7e96-kube-api-access-7dn24\") pod 
\"dnsmasq-dns-685c76cf85-tfc89\" (UID: \"b4da79c5-a6b0-4e60-8816-a9e69f3a7e96\") " pod="openstack/dnsmasq-dns-685c76cf85-tfc89" Mar 12 18:45:26.126456 master-0 kubenswrapper[29097]: I0312 18:45:26.126377 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa942752-02ac-4a81-9822-6b5adf5c5b91-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-8g7pv\" (UID: \"aa942752-02ac-4a81-9822-6b5adf5c5b91\") " pod="openstack/dnsmasq-dns-8476fd89bc-8g7pv" Mar 12 18:45:26.126733 master-0 kubenswrapper[29097]: I0312 18:45:26.126492 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krfsq\" (UniqueName: \"kubernetes.io/projected/aa942752-02ac-4a81-9822-6b5adf5c5b91-kube-api-access-krfsq\") pod \"dnsmasq-dns-8476fd89bc-8g7pv\" (UID: \"aa942752-02ac-4a81-9822-6b5adf5c5b91\") " pod="openstack/dnsmasq-dns-8476fd89bc-8g7pv" Mar 12 18:45:26.126858 master-0 kubenswrapper[29097]: I0312 18:45:26.126829 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa942752-02ac-4a81-9822-6b5adf5c5b91-config\") pod \"dnsmasq-dns-8476fd89bc-8g7pv\" (UID: \"aa942752-02ac-4a81-9822-6b5adf5c5b91\") " pod="openstack/dnsmasq-dns-8476fd89bc-8g7pv" Mar 12 18:45:26.127567 master-0 kubenswrapper[29097]: I0312 18:45:26.127501 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa942752-02ac-4a81-9822-6b5adf5c5b91-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-8g7pv\" (UID: \"aa942752-02ac-4a81-9822-6b5adf5c5b91\") " pod="openstack/dnsmasq-dns-8476fd89bc-8g7pv" Mar 12 18:45:26.127679 master-0 kubenswrapper[29097]: I0312 18:45:26.127647 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa942752-02ac-4a81-9822-6b5adf5c5b91-config\") pod \"dnsmasq-dns-8476fd89bc-8g7pv\" (UID: 
\"aa942752-02ac-4a81-9822-6b5adf5c5b91\") " pod="openstack/dnsmasq-dns-8476fd89bc-8g7pv" Mar 12 18:45:26.161391 master-0 kubenswrapper[29097]: I0312 18:45:26.161326 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krfsq\" (UniqueName: \"kubernetes.io/projected/aa942752-02ac-4a81-9822-6b5adf5c5b91-kube-api-access-krfsq\") pod \"dnsmasq-dns-8476fd89bc-8g7pv\" (UID: \"aa942752-02ac-4a81-9822-6b5adf5c5b91\") " pod="openstack/dnsmasq-dns-8476fd89bc-8g7pv" Mar 12 18:45:26.180210 master-0 kubenswrapper[29097]: I0312 18:45:26.179879 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-tfc89" Mar 12 18:45:26.283016 master-0 kubenswrapper[29097]: I0312 18:45:26.282965 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-8g7pv" Mar 12 18:45:26.666481 master-0 kubenswrapper[29097]: I0312 18:45:26.666410 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-tfc89"] Mar 12 18:45:26.735858 master-0 kubenswrapper[29097]: I0312 18:45:26.735595 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-tfc89" event={"ID":"b4da79c5-a6b0-4e60-8816-a9e69f3a7e96","Type":"ContainerStarted","Data":"441479740aba602f2e7fe9e120970881f0f1d29f957e6cd13bfc222119cc7908"} Mar 12 18:45:26.766578 master-0 kubenswrapper[29097]: I0312 18:45:26.766492 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-8g7pv"] Mar 12 18:45:27.749812 master-0 kubenswrapper[29097]: I0312 18:45:27.747646 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-8g7pv" event={"ID":"aa942752-02ac-4a81-9822-6b5adf5c5b91","Type":"ContainerStarted","Data":"ebcd55caad03464df9c4c1db1f5d7bb9223334d379d746334c367b427aba1c7a"} Mar 12 18:45:28.221029 master-0 kubenswrapper[29097]: I0312 18:45:28.219810 29097 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-tfc89"] Mar 12 18:45:28.265672 master-0 kubenswrapper[29097]: I0312 18:45:28.265203 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-7gbkn"] Mar 12 18:45:28.267576 master-0 kubenswrapper[29097]: I0312 18:45:28.266813 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" Mar 12 18:45:28.278011 master-0 kubenswrapper[29097]: I0312 18:45:28.277959 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-7gbkn"] Mar 12 18:45:28.393923 master-0 kubenswrapper[29097]: I0312 18:45:28.392641 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v67qw\" (UniqueName: \"kubernetes.io/projected/3c79941a-6841-4657-ade5-ec4e627743bc-kube-api-access-v67qw\") pod \"dnsmasq-dns-586dbdbb8c-7gbkn\" (UID: \"3c79941a-6841-4657-ade5-ec4e627743bc\") " pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" Mar 12 18:45:28.394285 master-0 kubenswrapper[29097]: I0312 18:45:28.394216 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c79941a-6841-4657-ade5-ec4e627743bc-config\") pod \"dnsmasq-dns-586dbdbb8c-7gbkn\" (UID: \"3c79941a-6841-4657-ade5-ec4e627743bc\") " pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" Mar 12 18:45:28.394409 master-0 kubenswrapper[29097]: I0312 18:45:28.394387 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c79941a-6841-4657-ade5-ec4e627743bc-dns-svc\") pod \"dnsmasq-dns-586dbdbb8c-7gbkn\" (UID: \"3c79941a-6841-4657-ade5-ec4e627743bc\") " pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" Mar 12 18:45:28.500583 master-0 kubenswrapper[29097]: I0312 18:45:28.500158 29097 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c79941a-6841-4657-ade5-ec4e627743bc-config\") pod \"dnsmasq-dns-586dbdbb8c-7gbkn\" (UID: \"3c79941a-6841-4657-ade5-ec4e627743bc\") " pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" Mar 12 18:45:28.500583 master-0 kubenswrapper[29097]: I0312 18:45:28.500261 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c79941a-6841-4657-ade5-ec4e627743bc-dns-svc\") pod \"dnsmasq-dns-586dbdbb8c-7gbkn\" (UID: \"3c79941a-6841-4657-ade5-ec4e627743bc\") " pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" Mar 12 18:45:28.500583 master-0 kubenswrapper[29097]: I0312 18:45:28.500563 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v67qw\" (UniqueName: \"kubernetes.io/projected/3c79941a-6841-4657-ade5-ec4e627743bc-kube-api-access-v67qw\") pod \"dnsmasq-dns-586dbdbb8c-7gbkn\" (UID: \"3c79941a-6841-4657-ade5-ec4e627743bc\") " pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" Mar 12 18:45:28.502149 master-0 kubenswrapper[29097]: I0312 18:45:28.501995 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c79941a-6841-4657-ade5-ec4e627743bc-dns-svc\") pod \"dnsmasq-dns-586dbdbb8c-7gbkn\" (UID: \"3c79941a-6841-4657-ade5-ec4e627743bc\") " pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" Mar 12 18:45:28.502893 master-0 kubenswrapper[29097]: I0312 18:45:28.502833 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c79941a-6841-4657-ade5-ec4e627743bc-config\") pod \"dnsmasq-dns-586dbdbb8c-7gbkn\" (UID: \"3c79941a-6841-4657-ade5-ec4e627743bc\") " pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" Mar 12 18:45:28.669630 master-0 kubenswrapper[29097]: I0312 18:45:28.668875 29097 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-v67qw\" (UniqueName: \"kubernetes.io/projected/3c79941a-6841-4657-ade5-ec4e627743bc-kube-api-access-v67qw\") pod \"dnsmasq-dns-586dbdbb8c-7gbkn\" (UID: \"3c79941a-6841-4657-ade5-ec4e627743bc\") " pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" Mar 12 18:45:29.203754 master-0 kubenswrapper[29097]: I0312 18:45:28.907782 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" Mar 12 18:45:29.878979 master-0 kubenswrapper[29097]: I0312 18:45:29.878764 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-7gbkn"] Mar 12 18:45:30.588094 master-0 kubenswrapper[29097]: I0312 18:45:30.587985 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-8g7pv"] Mar 12 18:45:30.626956 master-0 kubenswrapper[29097]: I0312 18:45:30.626885 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-f289q"] Mar 12 18:45:30.628719 master-0 kubenswrapper[29097]: I0312 18:45:30.628689 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-f289q" Mar 12 18:45:30.657246 master-0 kubenswrapper[29097]: I0312 18:45:30.657196 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-f289q"] Mar 12 18:45:30.775060 master-0 kubenswrapper[29097]: I0312 18:45:30.772249 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ceeba9c-e67f-49da-9b94-4359cfcd448e-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-f289q\" (UID: \"6ceeba9c-e67f-49da-9b94-4359cfcd448e\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-f289q" Mar 12 18:45:30.796146 master-0 kubenswrapper[29097]: I0312 18:45:30.793117 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ceeba9c-e67f-49da-9b94-4359cfcd448e-config\") pod \"dnsmasq-dns-6ff8fd9d5c-f289q\" (UID: \"6ceeba9c-e67f-49da-9b94-4359cfcd448e\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-f289q" Mar 12 18:45:30.796146 master-0 kubenswrapper[29097]: I0312 18:45:30.793372 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfj86\" (UniqueName: \"kubernetes.io/projected/6ceeba9c-e67f-49da-9b94-4359cfcd448e-kube-api-access-cfj86\") pod \"dnsmasq-dns-6ff8fd9d5c-f289q\" (UID: \"6ceeba9c-e67f-49da-9b94-4359cfcd448e\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-f289q" Mar 12 18:45:30.822625 master-0 kubenswrapper[29097]: I0312 18:45:30.818177 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" event={"ID":"3c79941a-6841-4657-ade5-ec4e627743bc","Type":"ContainerStarted","Data":"0131ba4da1985331e70751da29910d2fa67376c20904a5850f7b238bfcb3675d"} Mar 12 18:45:30.895507 master-0 kubenswrapper[29097]: I0312 18:45:30.895447 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/6ceeba9c-e67f-49da-9b94-4359cfcd448e-config\") pod \"dnsmasq-dns-6ff8fd9d5c-f289q\" (UID: \"6ceeba9c-e67f-49da-9b94-4359cfcd448e\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-f289q" Mar 12 18:45:30.895745 master-0 kubenswrapper[29097]: I0312 18:45:30.895568 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfj86\" (UniqueName: \"kubernetes.io/projected/6ceeba9c-e67f-49da-9b94-4359cfcd448e-kube-api-access-cfj86\") pod \"dnsmasq-dns-6ff8fd9d5c-f289q\" (UID: \"6ceeba9c-e67f-49da-9b94-4359cfcd448e\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-f289q" Mar 12 18:45:30.895745 master-0 kubenswrapper[29097]: I0312 18:45:30.895630 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ceeba9c-e67f-49da-9b94-4359cfcd448e-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-f289q\" (UID: \"6ceeba9c-e67f-49da-9b94-4359cfcd448e\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-f289q" Mar 12 18:45:30.897253 master-0 kubenswrapper[29097]: I0312 18:45:30.897191 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ceeba9c-e67f-49da-9b94-4359cfcd448e-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-f289q\" (UID: \"6ceeba9c-e67f-49da-9b94-4359cfcd448e\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-f289q" Mar 12 18:45:30.897419 master-0 kubenswrapper[29097]: I0312 18:45:30.897392 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ceeba9c-e67f-49da-9b94-4359cfcd448e-config\") pod \"dnsmasq-dns-6ff8fd9d5c-f289q\" (UID: \"6ceeba9c-e67f-49da-9b94-4359cfcd448e\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-f289q" Mar 12 18:45:31.186438 master-0 kubenswrapper[29097]: I0312 18:45:31.186275 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfj86\" (UniqueName: 
\"kubernetes.io/projected/6ceeba9c-e67f-49da-9b94-4359cfcd448e-kube-api-access-cfj86\") pod \"dnsmasq-dns-6ff8fd9d5c-f289q\" (UID: \"6ceeba9c-e67f-49da-9b94-4359cfcd448e\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-f289q" Mar 12 18:45:31.346951 master-0 kubenswrapper[29097]: I0312 18:45:31.337287 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-f289q" Mar 12 18:45:32.105918 master-0 kubenswrapper[29097]: I0312 18:45:32.105823 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-f289q"] Mar 12 18:45:32.435809 master-0 kubenswrapper[29097]: I0312 18:45:32.435006 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 18:45:32.437922 master-0 kubenswrapper[29097]: I0312 18:45:32.437011 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.446319 master-0 kubenswrapper[29097]: I0312 18:45:32.446247 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 12 18:45:32.446891 master-0 kubenswrapper[29097]: I0312 18:45:32.446879 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 12 18:45:32.447164 master-0 kubenswrapper[29097]: I0312 18:45:32.447143 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 12 18:45:32.466407 master-0 kubenswrapper[29097]: I0312 18:45:32.450849 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 12 18:45:32.472315 master-0 kubenswrapper[29097]: I0312 18:45:32.450886 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 12 18:45:32.472653 master-0 kubenswrapper[29097]: I0312 18:45:32.450924 29097 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"rabbitmq-server-conf" Mar 12 18:45:32.487937 master-0 kubenswrapper[29097]: I0312 18:45:32.487894 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 18:45:32.597105 master-0 kubenswrapper[29097]: I0312 18:45:32.597028 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49290c2f-177f-4a5e-8e1e-cf105e962c5b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.597105 master-0 kubenswrapper[29097]: I0312 18:45:32.597083 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49290c2f-177f-4a5e-8e1e-cf105e962c5b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.597105 master-0 kubenswrapper[29097]: I0312 18:45:32.597101 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49290c2f-177f-4a5e-8e1e-cf105e962c5b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.597372 master-0 kubenswrapper[29097]: I0312 18:45:32.597123 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49290c2f-177f-4a5e-8e1e-cf105e962c5b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.597372 master-0 kubenswrapper[29097]: I0312 18:45:32.597141 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/49290c2f-177f-4a5e-8e1e-cf105e962c5b-config-data\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.597372 master-0 kubenswrapper[29097]: I0312 18:45:32.597182 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49290c2f-177f-4a5e-8e1e-cf105e962c5b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.597372 master-0 kubenswrapper[29097]: I0312 18:45:32.597205 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5ab541a7-f32a-488e-a587-f5622550e5fe\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a36b4049-373d-42e8-9c4b-ad0d9d80ebbf\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.597372 master-0 kubenswrapper[29097]: I0312 18:45:32.597221 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49290c2f-177f-4a5e-8e1e-cf105e962c5b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.597372 master-0 kubenswrapper[29097]: I0312 18:45:32.597261 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49290c2f-177f-4a5e-8e1e-cf105e962c5b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.597372 master-0 kubenswrapper[29097]: I0312 18:45:32.597319 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49290c2f-177f-4a5e-8e1e-cf105e962c5b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.597372 master-0 kubenswrapper[29097]: I0312 18:45:32.597338 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hs5t\" (UniqueName: \"kubernetes.io/projected/49290c2f-177f-4a5e-8e1e-cf105e962c5b-kube-api-access-2hs5t\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.698421 master-0 kubenswrapper[29097]: I0312 18:45:32.698308 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5ab541a7-f32a-488e-a587-f5622550e5fe\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a36b4049-373d-42e8-9c4b-ad0d9d80ebbf\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.698421 master-0 kubenswrapper[29097]: I0312 18:45:32.698365 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49290c2f-177f-4a5e-8e1e-cf105e962c5b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.698421 master-0 kubenswrapper[29097]: I0312 18:45:32.698417 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49290c2f-177f-4a5e-8e1e-cf105e962c5b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.699281 master-0 kubenswrapper[29097]: I0312 18:45:32.698486 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/49290c2f-177f-4a5e-8e1e-cf105e962c5b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.699281 master-0 kubenswrapper[29097]: I0312 18:45:32.698505 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hs5t\" (UniqueName: \"kubernetes.io/projected/49290c2f-177f-4a5e-8e1e-cf105e962c5b-kube-api-access-2hs5t\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.699281 master-0 kubenswrapper[29097]: I0312 18:45:32.698933 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49290c2f-177f-4a5e-8e1e-cf105e962c5b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.699281 master-0 kubenswrapper[29097]: I0312 18:45:32.698964 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49290c2f-177f-4a5e-8e1e-cf105e962c5b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.699281 master-0 kubenswrapper[29097]: I0312 18:45:32.699225 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49290c2f-177f-4a5e-8e1e-cf105e962c5b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.699281 master-0 kubenswrapper[29097]: I0312 18:45:32.699259 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/49290c2f-177f-4a5e-8e1e-cf105e962c5b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.699281 master-0 kubenswrapper[29097]: I0312 18:45:32.699274 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49290c2f-177f-4a5e-8e1e-cf105e962c5b-config-data\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.700154 master-0 kubenswrapper[29097]: I0312 18:45:32.699313 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49290c2f-177f-4a5e-8e1e-cf105e962c5b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.700154 master-0 kubenswrapper[29097]: I0312 18:45:32.699738 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49290c2f-177f-4a5e-8e1e-cf105e962c5b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.702345 master-0 kubenswrapper[29097]: I0312 18:45:32.700376 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49290c2f-177f-4a5e-8e1e-cf105e962c5b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.702345 master-0 kubenswrapper[29097]: I0312 18:45:32.700933 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49290c2f-177f-4a5e-8e1e-cf105e962c5b-plugins-conf\") pod 
\"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.702345 master-0 kubenswrapper[29097]: I0312 18:45:32.702081 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49290c2f-177f-4a5e-8e1e-cf105e962c5b-config-data\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.702345 master-0 kubenswrapper[29097]: I0312 18:45:32.702176 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49290c2f-177f-4a5e-8e1e-cf105e962c5b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.706159 master-0 kubenswrapper[29097]: I0312 18:45:32.706070 29097 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 18:45:32.706159 master-0 kubenswrapper[29097]: I0312 18:45:32.706111 29097 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5ab541a7-f32a-488e-a587-f5622550e5fe\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a36b4049-373d-42e8-9c4b-ad0d9d80ebbf\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/6df21e95eda6b451ffe66173271904b1f16af3e6454ab30853aef784f0a69d56/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.712328 master-0 kubenswrapper[29097]: I0312 18:45:32.712016 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49290c2f-177f-4a5e-8e1e-cf105e962c5b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.716133 master-0 kubenswrapper[29097]: I0312 18:45:32.716000 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49290c2f-177f-4a5e-8e1e-cf105e962c5b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.723743 master-0 kubenswrapper[29097]: I0312 18:45:32.723703 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49290c2f-177f-4a5e-8e1e-cf105e962c5b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.727193 master-0 kubenswrapper[29097]: I0312 18:45:32.727136 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hs5t\" (UniqueName: \"kubernetes.io/projected/49290c2f-177f-4a5e-8e1e-cf105e962c5b-kube-api-access-2hs5t\") pod \"rabbitmq-server-0\" (UID: 
\"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.734505 master-0 kubenswrapper[29097]: I0312 18:45:32.733860 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49290c2f-177f-4a5e-8e1e-cf105e962c5b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0" Mar 12 18:45:32.868752 master-0 kubenswrapper[29097]: I0312 18:45:32.868391 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-f289q" event={"ID":"6ceeba9c-e67f-49da-9b94-4359cfcd448e","Type":"ContainerStarted","Data":"de6dbfa664d77581d275eec398cf25477503e269b7302c717d3a0741d0b4a1d8"} Mar 12 18:45:33.181843 master-0 kubenswrapper[29097]: I0312 18:45:33.181687 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 12 18:45:33.183475 master-0 kubenswrapper[29097]: I0312 18:45:33.183411 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Mar 12 18:45:33.193509 master-0 kubenswrapper[29097]: I0312 18:45:33.193235 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 12 18:45:33.196050 master-0 kubenswrapper[29097]: I0312 18:45:33.195960 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 12 18:45:33.196137 master-0 kubenswrapper[29097]: I0312 18:45:33.196120 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 12 18:45:33.270146 master-0 kubenswrapper[29097]: I0312 18:45:33.269778 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 12 18:45:33.307491 master-0 kubenswrapper[29097]: I0312 18:45:33.307407 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 18:45:33.309339 master-0 kubenswrapper[29097]: I0312 18:45:33.309312 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.313344 master-0 kubenswrapper[29097]: I0312 18:45:33.313287 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3c377976-30da-4335-b35e-e2e65789e21d-kolla-config\") pod \"memcached-0\" (UID: \"3c377976-30da-4335-b35e-e2e65789e21d\") " pod="openstack/memcached-0" Mar 12 18:45:33.313606 master-0 kubenswrapper[29097]: I0312 18:45:33.313559 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c377976-30da-4335-b35e-e2e65789e21d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3c377976-30da-4335-b35e-e2e65789e21d\") " pod="openstack/memcached-0" Mar 12 18:45:33.313660 master-0 kubenswrapper[29097]: I0312 18:45:33.313646 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c377976-30da-4335-b35e-e2e65789e21d-config-data\") pod \"memcached-0\" (UID: \"3c377976-30da-4335-b35e-e2e65789e21d\") " pod="openstack/memcached-0" Mar 12 18:45:33.313918 master-0 kubenswrapper[29097]: I0312 18:45:33.313893 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c377976-30da-4335-b35e-e2e65789e21d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3c377976-30da-4335-b35e-e2e65789e21d\") " pod="openstack/memcached-0" Mar 12 18:45:33.313966 master-0 kubenswrapper[29097]: I0312 18:45:33.313939 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzzsm\" (UniqueName: \"kubernetes.io/projected/3c377976-30da-4335-b35e-e2e65789e21d-kube-api-access-pzzsm\") pod \"memcached-0\" (UID: \"3c377976-30da-4335-b35e-e2e65789e21d\") " 
pod="openstack/memcached-0" Mar 12 18:45:33.315175 master-0 kubenswrapper[29097]: I0312 18:45:33.315128 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 18:45:33.315909 master-0 kubenswrapper[29097]: I0312 18:45:33.315873 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 12 18:45:33.319067 master-0 kubenswrapper[29097]: I0312 18:45:33.319043 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 12 18:45:33.320374 master-0 kubenswrapper[29097]: I0312 18:45:33.319382 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 12 18:45:33.320374 master-0 kubenswrapper[29097]: I0312 18:45:33.319587 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 12 18:45:33.320374 master-0 kubenswrapper[29097]: I0312 18:45:33.319726 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 12 18:45:33.320374 master-0 kubenswrapper[29097]: I0312 18:45:33.320013 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 12 18:45:33.415477 master-0 kubenswrapper[29097]: I0312 18:45:33.415319 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3c377976-30da-4335-b35e-e2e65789e21d-kolla-config\") pod \"memcached-0\" (UID: \"3c377976-30da-4335-b35e-e2e65789e21d\") " pod="openstack/memcached-0" Mar 12 18:45:33.415477 master-0 kubenswrapper[29097]: I0312 18:45:33.415384 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-pod-info\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.415477 master-0 kubenswrapper[29097]: I0312 18:45:33.415405 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.415477 master-0 kubenswrapper[29097]: I0312 18:45:33.415456 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.415477 master-0 kubenswrapper[29097]: I0312 18:45:33.415486 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c377976-30da-4335-b35e-e2e65789e21d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3c377976-30da-4335-b35e-e2e65789e21d\") " pod="openstack/memcached-0" Mar 12 18:45:33.416091 master-0 kubenswrapper[29097]: I0312 18:45:33.416034 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3c377976-30da-4335-b35e-e2e65789e21d-kolla-config\") pod \"memcached-0\" (UID: \"3c377976-30da-4335-b35e-e2e65789e21d\") " pod="openstack/memcached-0" Mar 12 18:45:33.416145 master-0 kubenswrapper[29097]: I0312 18:45:33.416042 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c377976-30da-4335-b35e-e2e65789e21d-config-data\") pod \"memcached-0\" (UID: 
\"3c377976-30da-4335-b35e-e2e65789e21d\") " pod="openstack/memcached-0" Mar 12 18:45:33.419178 master-0 kubenswrapper[29097]: I0312 18:45:33.419131 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.419267 master-0 kubenswrapper[29097]: I0312 18:45:33.419188 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.419267 master-0 kubenswrapper[29097]: I0312 18:45:33.419237 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.419267 master-0 kubenswrapper[29097]: I0312 18:45:33.419260 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-87d55060-cb6a-4b88-b0c5-318bd71db07a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b6dbd025-9b64-4c96-a217-75fad2671135\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.419369 master-0 kubenswrapper[29097]: I0312 18:45:33.419326 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.419529 master-0 kubenswrapper[29097]: I0312 18:45:33.419479 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45s5g\" (UniqueName: \"kubernetes.io/projected/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-kube-api-access-45s5g\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.419578 master-0 kubenswrapper[29097]: I0312 18:45:33.419537 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.419578 master-0 kubenswrapper[29097]: I0312 18:45:33.419570 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c377976-30da-4335-b35e-e2e65789e21d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3c377976-30da-4335-b35e-e2e65789e21d\") " pod="openstack/memcached-0" Mar 12 18:45:33.419638 master-0 kubenswrapper[29097]: I0312 18:45:33.419592 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzzsm\" (UniqueName: \"kubernetes.io/projected/3c377976-30da-4335-b35e-e2e65789e21d-kube-api-access-pzzsm\") pod \"memcached-0\" (UID: \"3c377976-30da-4335-b35e-e2e65789e21d\") " pod="openstack/memcached-0" Mar 12 18:45:33.419680 master-0 kubenswrapper[29097]: I0312 18:45:33.419664 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.420735 master-0 kubenswrapper[29097]: I0312 18:45:33.420669 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c377976-30da-4335-b35e-e2e65789e21d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3c377976-30da-4335-b35e-e2e65789e21d\") " pod="openstack/memcached-0" Mar 12 18:45:33.421674 master-0 kubenswrapper[29097]: I0312 18:45:33.421649 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c377976-30da-4335-b35e-e2e65789e21d-config-data\") pod \"memcached-0\" (UID: \"3c377976-30da-4335-b35e-e2e65789e21d\") " pod="openstack/memcached-0" Mar 12 18:45:33.426967 master-0 kubenswrapper[29097]: I0312 18:45:33.426503 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c377976-30da-4335-b35e-e2e65789e21d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3c377976-30da-4335-b35e-e2e65789e21d\") " pod="openstack/memcached-0" Mar 12 18:45:33.442043 master-0 kubenswrapper[29097]: I0312 18:45:33.440335 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzzsm\" (UniqueName: \"kubernetes.io/projected/3c377976-30da-4335-b35e-e2e65789e21d-kube-api-access-pzzsm\") pod \"memcached-0\" (UID: \"3c377976-30da-4335-b35e-e2e65789e21d\") " pod="openstack/memcached-0" Mar 12 18:45:33.524401 master-0 kubenswrapper[29097]: I0312 18:45:33.524316 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.524401 master-0 kubenswrapper[29097]: I0312 18:45:33.524398 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.524652 master-0 kubenswrapper[29097]: I0312 18:45:33.524467 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.524652 master-0 kubenswrapper[29097]: I0312 18:45:33.524508 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.524652 master-0 kubenswrapper[29097]: I0312 18:45:33.524562 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.524652 master-0 kubenswrapper[29097]: I0312 18:45:33.524598 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.524828 master-0 kubenswrapper[29097]: I0312 18:45:33.524629 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-87d55060-cb6a-4b88-b0c5-318bd71db07a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b6dbd025-9b64-4c96-a217-75fad2671135\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.525065 master-0 kubenswrapper[29097]: I0312 18:45:33.525023 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.525230 master-0 kubenswrapper[29097]: I0312 18:45:33.525192 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45s5g\" (UniqueName: \"kubernetes.io/projected/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-kube-api-access-45s5g\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.525352 master-0 kubenswrapper[29097]: I0312 18:45:33.525299 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.525665 master-0 kubenswrapper[29097]: I0312 18:45:33.525563 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.527547 master-0 kubenswrapper[29097]: I0312 18:45:33.527471 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.528007 master-0 kubenswrapper[29097]: I0312 18:45:33.527976 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.528312 master-0 kubenswrapper[29097]: I0312 18:45:33.528270 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.528419 master-0 kubenswrapper[29097]: I0312 18:45:33.528387 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.528866 master-0 kubenswrapper[29097]: I0312 18:45:33.528834 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.530172 master-0 kubenswrapper[29097]: I0312 18:45:33.530148 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.533451 master-0 kubenswrapper[29097]: I0312 18:45:33.533411 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:45:33.534254 master-0 kubenswrapper[29097]: I0312 18:45:33.534203 29097 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 18:45:33.534312 master-0 kubenswrapper[29097]: I0312 18:45:33.534251 29097 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-87d55060-cb6a-4b88-b0c5-318bd71db07a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b6dbd025-9b64-4c96-a217-75fad2671135\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/32be74258c3f6792249a55d7e355c619975afcf6c7b7a8a057d5c13198196205/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Mar 12 18:45:33.535710 master-0 kubenswrapper[29097]: I0312 18:45:33.535662 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 18:45:33.580396 master-0 kubenswrapper[29097]: I0312 18:45:33.580337 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 18:45:33.580679 master-0 kubenswrapper[29097]: I0312 18:45:33.580398 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45s5g\" (UniqueName: \"kubernetes.io/projected/cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b-kube-api-access-45s5g\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 18:45:33.634001 master-0 kubenswrapper[29097]: I0312 18:45:33.633725 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Mar 12 18:45:34.344226 master-0 kubenswrapper[29097]: I0312 18:45:34.344183 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5ab541a7-f32a-488e-a587-f5622550e5fe\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a36b4049-373d-42e8-9c4b-ad0d9d80ebbf\") pod \"rabbitmq-server-0\" (UID: \"49290c2f-177f-4a5e-8e1e-cf105e962c5b\") " pod="openstack/rabbitmq-server-0"
Mar 12 18:45:34.370290 master-0 kubenswrapper[29097]: I0312 18:45:34.369834 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Mar 12 18:45:34.382567 master-0 kubenswrapper[29097]: I0312 18:45:34.381746 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Mar 12 18:45:34.413110 master-0 kubenswrapper[29097]: I0312 18:45:34.412604 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Mar 12 18:45:34.413110 master-0 kubenswrapper[29097]: I0312 18:45:34.412809 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Mar 12 18:45:34.413110 master-0 kubenswrapper[29097]: I0312 18:45:34.412857 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Mar 12 18:45:34.462644 master-0 kubenswrapper[29097]: I0312 18:45:34.451042 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dd774b8c-0550-44b3-bbf6-07da19cafb59\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7e769c9b-abc1-4435-894c-2300c54f2005\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.462644 master-0 kubenswrapper[29097]: I0312 18:45:34.451113 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89da455f-45f0-4844-9a54-1ad46fe41d43-operator-scripts\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.462644 master-0 kubenswrapper[29097]: I0312 18:45:34.451135 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/89da455f-45f0-4844-9a54-1ad46fe41d43-kolla-config\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.462644 master-0 kubenswrapper[29097]: I0312 18:45:34.451174 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/89da455f-45f0-4844-9a54-1ad46fe41d43-config-data-generated\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.462644 master-0 kubenswrapper[29097]: I0312 18:45:34.451193 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftjft\" (UniqueName: \"kubernetes.io/projected/89da455f-45f0-4844-9a54-1ad46fe41d43-kube-api-access-ftjft\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.462644 master-0 kubenswrapper[29097]: I0312 18:45:34.451219 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89da455f-45f0-4844-9a54-1ad46fe41d43-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.462644 master-0 kubenswrapper[29097]: I0312 18:45:34.451637 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/89da455f-45f0-4844-9a54-1ad46fe41d43-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.462644 master-0 kubenswrapper[29097]: I0312 18:45:34.451727 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/89da455f-45f0-4844-9a54-1ad46fe41d43-config-data-default\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.462644 master-0 kubenswrapper[29097]: I0312 18:45:34.461331 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Mar 12 18:45:34.563400 master-0 kubenswrapper[29097]: I0312 18:45:34.563081 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dd774b8c-0550-44b3-bbf6-07da19cafb59\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7e769c9b-abc1-4435-894c-2300c54f2005\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.563400 master-0 kubenswrapper[29097]: I0312 18:45:34.563195 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89da455f-45f0-4844-9a54-1ad46fe41d43-operator-scripts\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.563400 master-0 kubenswrapper[29097]: I0312 18:45:34.563223 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/89da455f-45f0-4844-9a54-1ad46fe41d43-kolla-config\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.563400 master-0 kubenswrapper[29097]: I0312 18:45:34.563256 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/89da455f-45f0-4844-9a54-1ad46fe41d43-config-data-generated\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.563400 master-0 kubenswrapper[29097]: I0312 18:45:34.563282 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftjft\" (UniqueName: \"kubernetes.io/projected/89da455f-45f0-4844-9a54-1ad46fe41d43-kube-api-access-ftjft\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.565072 master-0 kubenswrapper[29097]: I0312 18:45:34.565013 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/89da455f-45f0-4844-9a54-1ad46fe41d43-config-data-generated\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.565245 master-0 kubenswrapper[29097]: I0312 18:45:34.565185 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89da455f-45f0-4844-9a54-1ad46fe41d43-operator-scripts\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.565351 master-0 kubenswrapper[29097]: I0312 18:45:34.565311 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89da455f-45f0-4844-9a54-1ad46fe41d43-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.565888 master-0 kubenswrapper[29097]: I0312 18:45:34.565844 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/89da455f-45f0-4844-9a54-1ad46fe41d43-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.565947 master-0 kubenswrapper[29097]: I0312 18:45:34.565900 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/89da455f-45f0-4844-9a54-1ad46fe41d43-config-data-default\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.566827 master-0 kubenswrapper[29097]: I0312 18:45:34.566799 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/89da455f-45f0-4844-9a54-1ad46fe41d43-config-data-default\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.568019 master-0 kubenswrapper[29097]: I0312 18:45:34.567976 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/89da455f-45f0-4844-9a54-1ad46fe41d43-kolla-config\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.572032 master-0 kubenswrapper[29097]: I0312 18:45:34.571031 29097 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 12 18:45:34.572032 master-0 kubenswrapper[29097]: I0312 18:45:34.571073 29097 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dd774b8c-0550-44b3-bbf6-07da19cafb59\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7e769c9b-abc1-4435-894c-2300c54f2005\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/5a3f423c381b6fcb86f31b23a968426e707348b39886aad151bc5df4534af2a3/globalmount\"" pod="openstack/openstack-galera-0"
Mar 12 18:45:34.572811 master-0 kubenswrapper[29097]: I0312 18:45:34.572731 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89da455f-45f0-4844-9a54-1ad46fe41d43-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.579628 master-0 kubenswrapper[29097]: I0312 18:45:34.578069 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/89da455f-45f0-4844-9a54-1ad46fe41d43-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.590211 master-0 kubenswrapper[29097]: I0312 18:45:34.588077 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftjft\" (UniqueName: \"kubernetes.io/projected/89da455f-45f0-4844-9a54-1ad46fe41d43-kube-api-access-ftjft\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:34.591975 master-0 kubenswrapper[29097]: I0312 18:45:34.591943 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 12 18:45:35.154335 master-0 kubenswrapper[29097]: I0312 18:45:35.153508 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 12 18:45:35.163583 master-0 kubenswrapper[29097]: I0312 18:45:35.157391 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.167571 master-0 kubenswrapper[29097]: I0312 18:45:35.164371 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Mar 12 18:45:35.167571 master-0 kubenswrapper[29097]: I0312 18:45:35.164573 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Mar 12 18:45:35.167571 master-0 kubenswrapper[29097]: I0312 18:45:35.164956 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Mar 12 18:45:35.206867 master-0 kubenswrapper[29097]: I0312 18:45:35.205162 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 12 18:45:35.291512 master-0 kubenswrapper[29097]: I0312 18:45:35.291367 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7ec3f557-015f-4fc3-b6cd-9d7f0f976e32-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.291512 master-0 kubenswrapper[29097]: I0312 18:45:35.291473 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ec3f557-015f-4fc3-b6cd-9d7f0f976e32-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.291512 master-0 kubenswrapper[29097]: I0312 18:45:35.291508 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ec3f557-015f-4fc3-b6cd-9d7f0f976e32-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.291512 master-0 kubenswrapper[29097]: I0312 18:45:35.291546 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec3f557-015f-4fc3-b6cd-9d7f0f976e32-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.291858 master-0 kubenswrapper[29097]: I0312 18:45:35.291569 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7ec3f557-015f-4fc3-b6cd-9d7f0f976e32-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.291858 master-0 kubenswrapper[29097]: I0312 18:45:35.291613 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dce831a8-84c8-4d61-8944-ba3f33fb8972\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7848d901-8c89-407d-bd2d-1d2ed13f65ca\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.291858 master-0 kubenswrapper[29097]: I0312 18:45:35.291647 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fqv5\" (UniqueName: \"kubernetes.io/projected/7ec3f557-015f-4fc3-b6cd-9d7f0f976e32-kube-api-access-7fqv5\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.291858 master-0 kubenswrapper[29097]: I0312 18:45:35.291698 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7ec3f557-015f-4fc3-b6cd-9d7f0f976e32-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.394668 master-0 kubenswrapper[29097]: I0312 18:45:35.394559 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dce831a8-84c8-4d61-8944-ba3f33fb8972\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7848d901-8c89-407d-bd2d-1d2ed13f65ca\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.394668 master-0 kubenswrapper[29097]: I0312 18:45:35.394615 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fqv5\" (UniqueName: \"kubernetes.io/projected/7ec3f557-015f-4fc3-b6cd-9d7f0f976e32-kube-api-access-7fqv5\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.395145 master-0 kubenswrapper[29097]: I0312 18:45:35.394668 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7ec3f557-015f-4fc3-b6cd-9d7f0f976e32-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.395145 master-0 kubenswrapper[29097]: I0312 18:45:35.394732 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7ec3f557-015f-4fc3-b6cd-9d7f0f976e32-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.395145 master-0 kubenswrapper[29097]: I0312 18:45:35.394776 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ec3f557-015f-4fc3-b6cd-9d7f0f976e32-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.395145 master-0 kubenswrapper[29097]: I0312 18:45:35.394798 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ec3f557-015f-4fc3-b6cd-9d7f0f976e32-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.395145 master-0 kubenswrapper[29097]: I0312 18:45:35.394817 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7ec3f557-015f-4fc3-b6cd-9d7f0f976e32-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.395145 master-0 kubenswrapper[29097]: I0312 18:45:35.394832 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec3f557-015f-4fc3-b6cd-9d7f0f976e32-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.396441 master-0 kubenswrapper[29097]: I0312 18:45:35.396136 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7ec3f557-015f-4fc3-b6cd-9d7f0f976e32-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.397253 master-0 kubenswrapper[29097]: I0312 18:45:35.397189 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7ec3f557-015f-4fc3-b6cd-9d7f0f976e32-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.398618 master-0 kubenswrapper[29097]: I0312 18:45:35.398483 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ec3f557-015f-4fc3-b6cd-9d7f0f976e32-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.398969 master-0 kubenswrapper[29097]: I0312 18:45:35.398938 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec3f557-015f-4fc3-b6cd-9d7f0f976e32-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.399146 master-0 kubenswrapper[29097]: I0312 18:45:35.399115 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7ec3f557-015f-4fc3-b6cd-9d7f0f976e32-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.403528 master-0 kubenswrapper[29097]: I0312 18:45:35.403284 29097 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 12 18:45:35.404647 master-0 kubenswrapper[29097]: I0312 18:45:35.403505 29097 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dce831a8-84c8-4d61-8944-ba3f33fb8972\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7848d901-8c89-407d-bd2d-1d2ed13f65ca\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/7a91756e69bae22fb2769bb4e690ce5af443d2c967e2e795952f512c6f8b98ac/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.426624 master-0 kubenswrapper[29097]: I0312 18:45:35.426561 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fqv5\" (UniqueName: \"kubernetes.io/projected/7ec3f557-015f-4fc3-b6cd-9d7f0f976e32-kube-api-access-7fqv5\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.430920 master-0 kubenswrapper[29097]: I0312 18:45:35.430851 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ec3f557-015f-4fc3-b6cd-9d7f0f976e32-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:35.745784 master-0 kubenswrapper[29097]: I0312 18:45:35.745636 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-87d55060-cb6a-4b88-b0c5-318bd71db07a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b6dbd025-9b64-4c96-a217-75fad2671135\") pod \"rabbitmq-cell1-server-0\" (UID: \"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 12 18:45:35.765313 master-0 kubenswrapper[29097]: I0312 18:45:35.765238 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 12 18:45:36.772637 master-0 kubenswrapper[29097]: I0312 18:45:36.770866 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dd774b8c-0550-44b3-bbf6-07da19cafb59\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7e769c9b-abc1-4435-894c-2300c54f2005\") pod \"openstack-galera-0\" (UID: \"89da455f-45f0-4844-9a54-1ad46fe41d43\") " pod="openstack/openstack-galera-0"
Mar 12 18:45:36.882640 master-0 kubenswrapper[29097]: I0312 18:45:36.880987 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Mar 12 18:45:37.800910 master-0 kubenswrapper[29097]: I0312 18:45:37.800851 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dce831a8-84c8-4d61-8944-ba3f33fb8972\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7848d901-8c89-407d-bd2d-1d2ed13f65ca\") pod \"openstack-cell1-galera-0\" (UID: \"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:37.912207 master-0 kubenswrapper[29097]: I0312 18:45:37.909905 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Mar 12 18:45:38.887914 master-0 kubenswrapper[29097]: I0312 18:45:38.887861 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zrpnx"]
Mar 12 18:45:38.889985 master-0 kubenswrapper[29097]: I0312 18:45:38.889162 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zrpnx"
Mar 12 18:45:38.894340 master-0 kubenswrapper[29097]: I0312 18:45:38.894292 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Mar 12 18:45:38.894657 master-0 kubenswrapper[29097]: I0312 18:45:38.894633 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Mar 12 18:45:38.914240 master-0 kubenswrapper[29097]: I0312 18:45:38.914192 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zrpnx"]
Mar 12 18:45:38.924858 master-0 kubenswrapper[29097]: I0312 18:45:38.924805 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-8vq5w"]
Mar 12 18:45:38.926960 master-0 kubenswrapper[29097]: I0312 18:45:38.926909 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-8vq5w"
Mar 12 18:45:38.967536 master-0 kubenswrapper[29097]: I0312 18:45:38.966919 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-8vq5w"]
Mar 12 18:45:38.998439 master-0 kubenswrapper[29097]: I0312 18:45:38.998366 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2-var-run\") pod \"ovn-controller-ovs-8vq5w\" (UID: \"4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2\") " pod="openstack/ovn-controller-ovs-8vq5w"
Mar 12 18:45:38.998439 master-0 kubenswrapper[29097]: I0312 18:45:38.998438 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76877\" (UniqueName: \"kubernetes.io/projected/4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2-kube-api-access-76877\") pod \"ovn-controller-ovs-8vq5w\" (UID: \"4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2\") " pod="openstack/ovn-controller-ovs-8vq5w"
Mar 12 18:45:38.998727 master-0 kubenswrapper[29097]: I0312 18:45:38.998477 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnl6d\" (UniqueName: \"kubernetes.io/projected/a5e5d447-ad0f-45a1-9613-8be6ff16ce62-kube-api-access-lnl6d\") pod \"ovn-controller-zrpnx\" (UID: \"a5e5d447-ad0f-45a1-9613-8be6ff16ce62\") " pod="openstack/ovn-controller-zrpnx"
Mar 12 18:45:39.002215 master-0 kubenswrapper[29097]: I0312 18:45:39.002166 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2-var-lib\") pod \"ovn-controller-ovs-8vq5w\" (UID: \"4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2\") " pod="openstack/ovn-controller-ovs-8vq5w"
Mar 12 18:45:39.002296 master-0 kubenswrapper[29097]: I0312 18:45:39.002261 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5e5d447-ad0f-45a1-9613-8be6ff16ce62-ovn-controller-tls-certs\") pod \"ovn-controller-zrpnx\" (UID: \"a5e5d447-ad0f-45a1-9613-8be6ff16ce62\") " pod="openstack/ovn-controller-zrpnx"
Mar 12 18:45:39.002340 master-0 kubenswrapper[29097]: I0312 18:45:39.002300 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a5e5d447-ad0f-45a1-9613-8be6ff16ce62-var-run\") pod \"ovn-controller-zrpnx\" (UID: \"a5e5d447-ad0f-45a1-9613-8be6ff16ce62\") " pod="openstack/ovn-controller-zrpnx"
Mar 12 18:45:39.002369 master-0 kubenswrapper[29097]: I0312 18:45:39.002345 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5e5d447-ad0f-45a1-9613-8be6ff16ce62-var-run-ovn\") pod \"ovn-controller-zrpnx\" (UID: \"a5e5d447-ad0f-45a1-9613-8be6ff16ce62\") " pod="openstack/ovn-controller-zrpnx"
Mar 12 18:45:39.002403 master-0 kubenswrapper[29097]: I0312 18:45:39.002382 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2-scripts\") pod \"ovn-controller-ovs-8vq5w\" (UID: \"4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2\") " pod="openstack/ovn-controller-ovs-8vq5w"
Mar 12 18:45:39.002457 master-0 kubenswrapper[29097]: I0312 18:45:39.002437 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a5e5d447-ad0f-45a1-9613-8be6ff16ce62-var-log-ovn\") pod \"ovn-controller-zrpnx\" (UID: \"a5e5d447-ad0f-45a1-9613-8be6ff16ce62\") " pod="openstack/ovn-controller-zrpnx"
Mar 12 18:45:39.002925 master-0 kubenswrapper[29097]: I0312 18:45:39.002879 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5e5d447-ad0f-45a1-9613-8be6ff16ce62-scripts\") pod \"ovn-controller-zrpnx\" (UID: \"a5e5d447-ad0f-45a1-9613-8be6ff16ce62\") " pod="openstack/ovn-controller-zrpnx"
Mar 12 18:45:39.003089 master-0 kubenswrapper[29097]: I0312 18:45:39.003061 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5e5d447-ad0f-45a1-9613-8be6ff16ce62-combined-ca-bundle\") pod \"ovn-controller-zrpnx\" (UID: \"a5e5d447-ad0f-45a1-9613-8be6ff16ce62\") " pod="openstack/ovn-controller-zrpnx"
Mar 12 18:45:39.004026 master-0 kubenswrapper[29097]: I0312 18:45:39.003392 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2-etc-ovs\") pod \"ovn-controller-ovs-8vq5w\" (UID: \"4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2\") " pod="openstack/ovn-controller-ovs-8vq5w"
Mar 12 18:45:39.004159 master-0 kubenswrapper[29097]: I0312 18:45:39.004134 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2-var-log\") pod \"ovn-controller-ovs-8vq5w\" (UID: \"4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2\") " pod="openstack/ovn-controller-ovs-8vq5w"
Mar 12 18:45:39.106864 master-0 kubenswrapper[29097]: I0312 18:45:39.106793 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2-etc-ovs\") pod \"ovn-controller-ovs-8vq5w\" (UID: \"4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2\") " pod="openstack/ovn-controller-ovs-8vq5w"
Mar 12 18:45:39.107083 master-0 kubenswrapper[29097]: I0312 18:45:39.106877 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2-var-log\") pod \"ovn-controller-ovs-8vq5w\" (UID: \"4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2\") " pod="openstack/ovn-controller-ovs-8vq5w"
Mar 12 18:45:39.107083 master-0 kubenswrapper[29097]: I0312 18:45:39.107034 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2-var-run\") pod \"ovn-controller-ovs-8vq5w\" (UID: \"4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2\") " pod="openstack/ovn-controller-ovs-8vq5w"
Mar 12 18:45:39.107083 master-0 kubenswrapper[29097]: I0312 18:45:39.107063 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76877\" (UniqueName: \"kubernetes.io/projected/4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2-kube-api-access-76877\") pod \"ovn-controller-ovs-8vq5w\" (UID: \"4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2\") " pod="openstack/ovn-controller-ovs-8vq5w"
Mar 12 18:45:39.107254 master-0 kubenswrapper[29097]: I0312 18:45:39.107230 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnl6d\" (UniqueName: \"kubernetes.io/projected/a5e5d447-ad0f-45a1-9613-8be6ff16ce62-kube-api-access-lnl6d\") pod \"ovn-controller-zrpnx\" (UID: \"a5e5d447-ad0f-45a1-9613-8be6ff16ce62\") " pod="openstack/ovn-controller-zrpnx"
Mar 12 18:45:39.107300 master-0 kubenswrapper[29097]: I0312 18:45:39.107267 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2-var-lib\") pod \"ovn-controller-ovs-8vq5w\" (UID: \"4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2\") " pod="openstack/ovn-controller-ovs-8vq5w"
Mar 12 18:45:39.107300 master-0 kubenswrapper[29097]: I0312 18:45:39.107294 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5e5d447-ad0f-45a1-9613-8be6ff16ce62-ovn-controller-tls-certs\") pod \"ovn-controller-zrpnx\" (UID: \"a5e5d447-ad0f-45a1-9613-8be6ff16ce62\") " pod="openstack/ovn-controller-zrpnx"
Mar 12 18:45:39.107366 master-0 kubenswrapper[29097]: I0312 18:45:39.107315 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a5e5d447-ad0f-45a1-9613-8be6ff16ce62-var-run\") pod \"ovn-controller-zrpnx\" (UID: \"a5e5d447-ad0f-45a1-9613-8be6ff16ce62\") " pod="openstack/ovn-controller-zrpnx"
Mar 12 18:45:39.107366 master-0 kubenswrapper[29097]: I0312 18:45:39.107338 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5e5d447-ad0f-45a1-9613-8be6ff16ce62-var-run-ovn\") pod \"ovn-controller-zrpnx\" (UID: \"a5e5d447-ad0f-45a1-9613-8be6ff16ce62\") " pod="openstack/ovn-controller-zrpnx"
Mar 12 18:45:39.107366 master-0 kubenswrapper[29097]: I0312 18:45:39.107358 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2-scripts\") pod \"ovn-controller-ovs-8vq5w\" (UID: \"4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2\") " pod="openstack/ovn-controller-ovs-8vq5w"
Mar 12 18:45:39.107467 master-0 kubenswrapper[29097]: I0312 18:45:39.107382 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a5e5d447-ad0f-45a1-9613-8be6ff16ce62-var-log-ovn\") pod \"ovn-controller-zrpnx\" (UID: \"a5e5d447-ad0f-45a1-9613-8be6ff16ce62\") " pod="openstack/ovn-controller-zrpnx"
Mar 12 18:45:39.107467 master-0 kubenswrapper[29097]: I0312 18:45:39.107396 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2-etc-ovs\") pod \"ovn-controller-ovs-8vq5w\" (UID: \"4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2\") " pod="openstack/ovn-controller-ovs-8vq5w"
Mar 12 18:45:39.107467 master-0 kubenswrapper[29097]: I0312 18:45:39.107436 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5e5d447-ad0f-45a1-9613-8be6ff16ce62-scripts\") pod \"ovn-controller-zrpnx\" (UID: \"a5e5d447-ad0f-45a1-9613-8be6ff16ce62\") " pod="openstack/ovn-controller-zrpnx"
Mar 12 18:45:39.107467 master-0 kubenswrapper[29097]: I0312 18:45:39.107467 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5e5d447-ad0f-45a1-9613-8be6ff16ce62-combined-ca-bundle\") pod \"ovn-controller-zrpnx\" (UID: \"a5e5d447-ad0f-45a1-9613-8be6ff16ce62\") " pod="openstack/ovn-controller-zrpnx"
Mar 12 18:45:39.107620 master-0 kubenswrapper[29097]: I0312 18:45:39.107533 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2-var-log\") pod \"ovn-controller-ovs-8vq5w\" (UID: \"4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2\") " pod="openstack/ovn-controller-ovs-8vq5w"
Mar 12 18:45:39.108394 master-0 kubenswrapper[29097]: I0312 18:45:39.108345 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a5e5d447-ad0f-45a1-9613-8be6ff16ce62-var-run\") pod \"ovn-controller-zrpnx\" (UID: \"a5e5d447-ad0f-45a1-9613-8be6ff16ce62\") " pod="openstack/ovn-controller-zrpnx"
Mar 12 18:45:39.108508 master-0 kubenswrapper[29097]: I0312 18:45:39.108483 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a5e5d447-ad0f-45a1-9613-8be6ff16ce62-var-log-ovn\") pod \"ovn-controller-zrpnx\" (UID: \"a5e5d447-ad0f-45a1-9613-8be6ff16ce62\") " pod="openstack/ovn-controller-zrpnx"
Mar 12 18:45:39.111599 master-0 kubenswrapper[29097]: I0312 18:45:39.110158 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5e5d447-ad0f-45a1-9613-8be6ff16ce62-var-run-ovn\") pod \"ovn-controller-zrpnx\" (UID: \"a5e5d447-ad0f-45a1-9613-8be6ff16ce62\") " pod="openstack/ovn-controller-zrpnx"
Mar 12 18:45:39.111599 master-0 kubenswrapper[29097]: I0312 18:45:39.110627 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5e5d447-ad0f-45a1-9613-8be6ff16ce62-combined-ca-bundle\") pod \"ovn-controller-zrpnx\" (UID: \"a5e5d447-ad0f-45a1-9613-8be6ff16ce62\") " pod="openstack/ovn-controller-zrpnx"
Mar 12 18:45:39.111599 master-0 kubenswrapper[29097]: I0312 18:45:39.110775 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2-scripts\") pod \"ovn-controller-ovs-8vq5w\" 
(UID: \"4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2\") " pod="openstack/ovn-controller-ovs-8vq5w" Mar 12 18:45:39.111599 master-0 kubenswrapper[29097]: I0312 18:45:39.110808 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5e5d447-ad0f-45a1-9613-8be6ff16ce62-scripts\") pod \"ovn-controller-zrpnx\" (UID: \"a5e5d447-ad0f-45a1-9613-8be6ff16ce62\") " pod="openstack/ovn-controller-zrpnx" Mar 12 18:45:39.111599 master-0 kubenswrapper[29097]: I0312 18:45:39.110899 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2-var-run\") pod \"ovn-controller-ovs-8vq5w\" (UID: \"4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2\") " pod="openstack/ovn-controller-ovs-8vq5w" Mar 12 18:45:39.111599 master-0 kubenswrapper[29097]: I0312 18:45:39.110933 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2-var-lib\") pod \"ovn-controller-ovs-8vq5w\" (UID: \"4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2\") " pod="openstack/ovn-controller-ovs-8vq5w" Mar 12 18:45:39.117120 master-0 kubenswrapper[29097]: I0312 18:45:39.117071 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5e5d447-ad0f-45a1-9613-8be6ff16ce62-ovn-controller-tls-certs\") pod \"ovn-controller-zrpnx\" (UID: \"a5e5d447-ad0f-45a1-9613-8be6ff16ce62\") " pod="openstack/ovn-controller-zrpnx" Mar 12 18:45:39.126913 master-0 kubenswrapper[29097]: I0312 18:45:39.126883 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76877\" (UniqueName: \"kubernetes.io/projected/4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2-kube-api-access-76877\") pod \"ovn-controller-ovs-8vq5w\" (UID: \"4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2\") " pod="openstack/ovn-controller-ovs-8vq5w" 
Mar 12 18:45:39.128714 master-0 kubenswrapper[29097]: I0312 18:45:39.128651 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnl6d\" (UniqueName: \"kubernetes.io/projected/a5e5d447-ad0f-45a1-9613-8be6ff16ce62-kube-api-access-lnl6d\") pod \"ovn-controller-zrpnx\" (UID: \"a5e5d447-ad0f-45a1-9613-8be6ff16ce62\") " pod="openstack/ovn-controller-zrpnx" Mar 12 18:45:39.215133 master-0 kubenswrapper[29097]: I0312 18:45:39.215000 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zrpnx" Mar 12 18:45:39.252600 master-0 kubenswrapper[29097]: I0312 18:45:39.252403 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-8vq5w" Mar 12 18:45:40.151743 master-0 kubenswrapper[29097]: I0312 18:45:40.149956 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 12 18:45:40.159037 master-0 kubenswrapper[29097]: I0312 18:45:40.158968 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.164168 master-0 kubenswrapper[29097]: I0312 18:45:40.164114 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 12 18:45:40.164463 master-0 kubenswrapper[29097]: I0312 18:45:40.164430 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 12 18:45:40.165396 master-0 kubenswrapper[29097]: I0312 18:45:40.165203 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 12 18:45:40.165705 master-0 kubenswrapper[29097]: I0312 18:45:40.165635 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 12 18:45:40.167691 master-0 kubenswrapper[29097]: I0312 18:45:40.167635 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 12 18:45:40.238242 master-0 kubenswrapper[29097]: I0312 18:45:40.238185 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e9bd650-99fd-4a45-9742-0b23b242d8b6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.238622 master-0 kubenswrapper[29097]: I0312 18:45:40.238601 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1b7d85ec-1ad9-42d4-a5e2-3ea20c7e8a97\" (UniqueName: \"kubernetes.io/csi/topolvm.io^aa66b661-3afd-4ce1-a338-0545d12bcf3e\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.238962 master-0 kubenswrapper[29097]: I0312 18:45:40.238942 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kpnt\" (UniqueName: 
\"kubernetes.io/projected/6e9bd650-99fd-4a45-9742-0b23b242d8b6-kube-api-access-6kpnt\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.239020 master-0 kubenswrapper[29097]: I0312 18:45:40.239010 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e9bd650-99fd-4a45-9742-0b23b242d8b6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.240087 master-0 kubenswrapper[29097]: I0312 18:45:40.240051 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6e9bd650-99fd-4a45-9742-0b23b242d8b6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.240139 master-0 kubenswrapper[29097]: I0312 18:45:40.240097 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e9bd650-99fd-4a45-9742-0b23b242d8b6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.240139 master-0 kubenswrapper[29097]: I0312 18:45:40.240114 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9bd650-99fd-4a45-9742-0b23b242d8b6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.240221 master-0 kubenswrapper[29097]: I0312 18:45:40.240147 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6e9bd650-99fd-4a45-9742-0b23b242d8b6-config\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.341527 master-0 kubenswrapper[29097]: I0312 18:45:40.341455 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e9bd650-99fd-4a45-9742-0b23b242d8b6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.341527 master-0 kubenswrapper[29097]: I0312 18:45:40.341537 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1b7d85ec-1ad9-42d4-a5e2-3ea20c7e8a97\" (UniqueName: \"kubernetes.io/csi/topolvm.io^aa66b661-3afd-4ce1-a338-0545d12bcf3e\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.341787 master-0 kubenswrapper[29097]: I0312 18:45:40.341569 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kpnt\" (UniqueName: \"kubernetes.io/projected/6e9bd650-99fd-4a45-9742-0b23b242d8b6-kube-api-access-6kpnt\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.341787 master-0 kubenswrapper[29097]: I0312 18:45:40.341593 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e9bd650-99fd-4a45-9742-0b23b242d8b6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.341787 master-0 kubenswrapper[29097]: I0312 18:45:40.341667 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6e9bd650-99fd-4a45-9742-0b23b242d8b6-ovsdb-rundir\") pod 
\"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.341787 master-0 kubenswrapper[29097]: I0312 18:45:40.341691 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e9bd650-99fd-4a45-9742-0b23b242d8b6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.341787 master-0 kubenswrapper[29097]: I0312 18:45:40.341707 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9bd650-99fd-4a45-9742-0b23b242d8b6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.341787 master-0 kubenswrapper[29097]: I0312 18:45:40.341735 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9bd650-99fd-4a45-9742-0b23b242d8b6-config\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.343997 master-0 kubenswrapper[29097]: I0312 18:45:40.342484 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9bd650-99fd-4a45-9742-0b23b242d8b6-config\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.343997 master-0 kubenswrapper[29097]: I0312 18:45:40.343945 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6e9bd650-99fd-4a45-9742-0b23b242d8b6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.344748 
master-0 kubenswrapper[29097]: I0312 18:45:40.344725 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e9bd650-99fd-4a45-9742-0b23b242d8b6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.346232 master-0 kubenswrapper[29097]: I0312 18:45:40.345380 29097 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 18:45:40.346232 master-0 kubenswrapper[29097]: I0312 18:45:40.345410 29097 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1b7d85ec-1ad9-42d4-a5e2-3ea20c7e8a97\" (UniqueName: \"kubernetes.io/csi/topolvm.io^aa66b661-3afd-4ce1-a338-0545d12bcf3e\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/be9dffb0fd95add59bd02f8e7b1221d025b31d5ed78a03345afce4d858a86731/globalmount\"" pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.346232 master-0 kubenswrapper[29097]: I0312 18:45:40.346191 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e9bd650-99fd-4a45-9742-0b23b242d8b6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.347430 master-0 kubenswrapper[29097]: I0312 18:45:40.347400 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e9bd650-99fd-4a45-9742-0b23b242d8b6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.348900 master-0 kubenswrapper[29097]: I0312 18:45:40.348864 29097 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9bd650-99fd-4a45-9742-0b23b242d8b6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:40.360592 master-0 kubenswrapper[29097]: I0312 18:45:40.359641 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kpnt\" (UniqueName: \"kubernetes.io/projected/6e9bd650-99fd-4a45-9742-0b23b242d8b6-kube-api-access-6kpnt\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:41.817006 master-0 kubenswrapper[29097]: I0312 18:45:41.816970 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1b7d85ec-1ad9-42d4-a5e2-3ea20c7e8a97\" (UniqueName: \"kubernetes.io/csi/topolvm.io^aa66b661-3afd-4ce1-a338-0545d12bcf3e\") pod \"ovsdbserver-sb-0\" (UID: \"6e9bd650-99fd-4a45-9742-0b23b242d8b6\") " pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:41.994683 master-0 kubenswrapper[29097]: I0312 18:45:41.994037 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 12 18:45:42.473951 master-0 kubenswrapper[29097]: I0312 18:45:42.473877 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 18:45:42.531803 master-0 kubenswrapper[29097]: I0312 18:45:42.531464 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 12 18:45:42.533427 master-0 kubenswrapper[29097]: I0312 18:45:42.533381 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:42.538634 master-0 kubenswrapper[29097]: I0312 18:45:42.538592 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 12 18:45:42.539684 master-0 kubenswrapper[29097]: I0312 18:45:42.539657 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 12 18:45:42.545791 master-0 kubenswrapper[29097]: I0312 18:45:42.540222 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 12 18:45:42.545791 master-0 kubenswrapper[29097]: I0312 18:45:42.543213 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 12 18:45:42.664534 master-0 kubenswrapper[29097]: I0312 18:45:42.659961 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/636bd264-9a47-4480-8beb-f45a4b8c45fe-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:42.664534 master-0 kubenswrapper[29097]: I0312 18:45:42.660026 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/636bd264-9a47-4480-8beb-f45a4b8c45fe-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:42.664534 master-0 kubenswrapper[29097]: I0312 18:45:42.660074 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6ae0f8ef-1fe5-4239-8290-9ad281a8559d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9f1198ce-0747-4057-a7a7-8bb7c346f1cb\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 
18:45:42.664534 master-0 kubenswrapper[29097]: I0312 18:45:42.660117 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/636bd264-9a47-4480-8beb-f45a4b8c45fe-config\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:42.664534 master-0 kubenswrapper[29097]: I0312 18:45:42.660135 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/636bd264-9a47-4480-8beb-f45a4b8c45fe-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:42.664534 master-0 kubenswrapper[29097]: I0312 18:45:42.660177 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/636bd264-9a47-4480-8beb-f45a4b8c45fe-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:42.664534 master-0 kubenswrapper[29097]: I0312 18:45:42.660220 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwpdg\" (UniqueName: \"kubernetes.io/projected/636bd264-9a47-4480-8beb-f45a4b8c45fe-kube-api-access-rwpdg\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:42.664534 master-0 kubenswrapper[29097]: I0312 18:45:42.660237 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/636bd264-9a47-4480-8beb-f45a4b8c45fe-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" 
Mar 12 18:45:42.763602 master-0 kubenswrapper[29097]: I0312 18:45:42.763046 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/636bd264-9a47-4480-8beb-f45a4b8c45fe-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:42.763602 master-0 kubenswrapper[29097]: I0312 18:45:42.763110 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/636bd264-9a47-4480-8beb-f45a4b8c45fe-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:42.763602 master-0 kubenswrapper[29097]: I0312 18:45:42.763162 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6ae0f8ef-1fe5-4239-8290-9ad281a8559d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9f1198ce-0747-4057-a7a7-8bb7c346f1cb\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:42.763602 master-0 kubenswrapper[29097]: I0312 18:45:42.763209 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/636bd264-9a47-4480-8beb-f45a4b8c45fe-config\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:42.763602 master-0 kubenswrapper[29097]: I0312 18:45:42.763228 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/636bd264-9a47-4480-8beb-f45a4b8c45fe-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:42.763602 master-0 kubenswrapper[29097]: I0312 18:45:42.763271 29097 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/636bd264-9a47-4480-8beb-f45a4b8c45fe-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:42.763602 master-0 kubenswrapper[29097]: I0312 18:45:42.763323 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwpdg\" (UniqueName: \"kubernetes.io/projected/636bd264-9a47-4480-8beb-f45a4b8c45fe-kube-api-access-rwpdg\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:42.763602 master-0 kubenswrapper[29097]: I0312 18:45:42.763345 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/636bd264-9a47-4480-8beb-f45a4b8c45fe-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:42.763986 master-0 kubenswrapper[29097]: I0312 18:45:42.763896 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/636bd264-9a47-4480-8beb-f45a4b8c45fe-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:42.774528 master-0 kubenswrapper[29097]: I0312 18:45:42.764839 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/636bd264-9a47-4480-8beb-f45a4b8c45fe-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:42.775648 master-0 kubenswrapper[29097]: I0312 18:45:42.775622 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/636bd264-9a47-4480-8beb-f45a4b8c45fe-config\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:42.776128 master-0 kubenswrapper[29097]: I0312 18:45:42.776114 29097 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 18:45:42.776223 master-0 kubenswrapper[29097]: I0312 18:45:42.776207 29097 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6ae0f8ef-1fe5-4239-8290-9ad281a8559d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9f1198ce-0747-4057-a7a7-8bb7c346f1cb\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/6e59082f5e98502351f654eb6ad0e1fb65d94ff9b23fe978b4330f7a2d1c02fb/globalmount\"" pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:42.798534 master-0 kubenswrapper[29097]: I0312 18:45:42.780740 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/636bd264-9a47-4480-8beb-f45a4b8c45fe-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:42.798534 master-0 kubenswrapper[29097]: I0312 18:45:42.781480 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/636bd264-9a47-4480-8beb-f45a4b8c45fe-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:42.798534 master-0 kubenswrapper[29097]: I0312 18:45:42.787910 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/636bd264-9a47-4480-8beb-f45a4b8c45fe-combined-ca-bundle\") pod 
\"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:42.812529 master-0 kubenswrapper[29097]: I0312 18:45:42.806999 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwpdg\" (UniqueName: \"kubernetes.io/projected/636bd264-9a47-4480-8beb-f45a4b8c45fe-kube-api-access-rwpdg\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:44.226669 master-0 kubenswrapper[29097]: I0312 18:45:44.226617 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6ae0f8ef-1fe5-4239-8290-9ad281a8559d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9f1198ce-0747-4057-a7a7-8bb7c346f1cb\") pod \"ovsdbserver-nb-0\" (UID: \"636bd264-9a47-4480-8beb-f45a4b8c45fe\") " pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:44.357540 master-0 kubenswrapper[29097]: I0312 18:45:44.357097 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 12 18:45:46.677572 master-0 kubenswrapper[29097]: W0312 18:45:46.677482 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49290c2f_177f_4a5e_8e1e_cf105e962c5b.slice/crio-0493de6df1090b5b7a0446f1b865c545681c60de8a7928e5b6a40a748bf199da WatchSource:0}: Error finding container 0493de6df1090b5b7a0446f1b865c545681c60de8a7928e5b6a40a748bf199da: Status 404 returned error can't find the container with id 0493de6df1090b5b7a0446f1b865c545681c60de8a7928e5b6a40a748bf199da Mar 12 18:45:47.104219 master-0 kubenswrapper[29097]: I0312 18:45:47.104150 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49290c2f-177f-4a5e-8e1e-cf105e962c5b","Type":"ContainerStarted","Data":"0493de6df1090b5b7a0446f1b865c545681c60de8a7928e5b6a40a748bf199da"} Mar 12 18:45:47.858620 master-0 kubenswrapper[29097]: I0312 18:45:47.858505 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 12 18:45:47.868829 master-0 kubenswrapper[29097]: W0312 18:45:47.868785 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c377976_30da_4335_b35e_e2e65789e21d.slice/crio-b41e7fdaa84022b6c04bd4e515ae2408fdfb5a3849efe85e48de9c656cfb1751 WatchSource:0}: Error finding container b41e7fdaa84022b6c04bd4e515ae2408fdfb5a3849efe85e48de9c656cfb1751: Status 404 returned error can't find the container with id b41e7fdaa84022b6c04bd4e515ae2408fdfb5a3849efe85e48de9c656cfb1751 Mar 12 18:45:48.004142 master-0 kubenswrapper[29097]: I0312 18:45:48.004091 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 18:45:48.118872 master-0 kubenswrapper[29097]: I0312 18:45:48.118816 29097 generic.go:334] "Generic (PLEG): container finished" podID="aa942752-02ac-4a81-9822-6b5adf5c5b91" 
containerID="abbaf029f5fc31f58e9847b8ff1d91066c93da12a28b98d234e7167ca8941e0e" exitCode=0 Mar 12 18:45:48.119090 master-0 kubenswrapper[29097]: I0312 18:45:48.118896 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-8g7pv" event={"ID":"aa942752-02ac-4a81-9822-6b5adf5c5b91","Type":"ContainerDied","Data":"abbaf029f5fc31f58e9847b8ff1d91066c93da12a28b98d234e7167ca8941e0e"} Mar 12 18:45:48.121171 master-0 kubenswrapper[29097]: I0312 18:45:48.121129 29097 generic.go:334] "Generic (PLEG): container finished" podID="b4da79c5-a6b0-4e60-8816-a9e69f3a7e96" containerID="523281f644753ac37841a92be7ec76433a1f8e2f4dc684fc3796fef25948bbdf" exitCode=0 Mar 12 18:45:48.121248 master-0 kubenswrapper[29097]: I0312 18:45:48.121198 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-tfc89" event={"ID":"b4da79c5-a6b0-4e60-8816-a9e69f3a7e96","Type":"ContainerDied","Data":"523281f644753ac37841a92be7ec76433a1f8e2f4dc684fc3796fef25948bbdf"} Mar 12 18:45:48.123390 master-0 kubenswrapper[29097]: I0312 18:45:48.123360 29097 generic.go:334] "Generic (PLEG): container finished" podID="3c79941a-6841-4657-ade5-ec4e627743bc" containerID="6ba80a6432ef18bd1a8bfab359d471df8e909e147c55d6f189b52926364fbc3d" exitCode=0 Mar 12 18:45:48.123503 master-0 kubenswrapper[29097]: I0312 18:45:48.123479 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" event={"ID":"3c79941a-6841-4657-ade5-ec4e627743bc","Type":"ContainerDied","Data":"6ba80a6432ef18bd1a8bfab359d471df8e909e147c55d6f189b52926364fbc3d"} Mar 12 18:45:48.126393 master-0 kubenswrapper[29097]: I0312 18:45:48.125300 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3c377976-30da-4335-b35e-e2e65789e21d","Type":"ContainerStarted","Data":"b41e7fdaa84022b6c04bd4e515ae2408fdfb5a3849efe85e48de9c656cfb1751"} Mar 12 18:45:48.126393 master-0 kubenswrapper[29097]: I0312 18:45:48.126322 
29097 generic.go:334] "Generic (PLEG): container finished" podID="6ceeba9c-e67f-49da-9b94-4359cfcd448e" containerID="d45d2155ac3be9b74c1b97a8b6033ee144b105dba23f89c78647023b3a18f72b" exitCode=0 Mar 12 18:45:48.126393 master-0 kubenswrapper[29097]: I0312 18:45:48.126341 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-f289q" event={"ID":"6ceeba9c-e67f-49da-9b94-4359cfcd448e","Type":"ContainerDied","Data":"d45d2155ac3be9b74c1b97a8b6033ee144b105dba23f89c78647023b3a18f72b"} Mar 12 18:45:48.342311 master-0 kubenswrapper[29097]: I0312 18:45:48.342264 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zrpnx"] Mar 12 18:45:48.357677 master-0 kubenswrapper[29097]: I0312 18:45:48.357631 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 12 18:45:48.374053 master-0 kubenswrapper[29097]: W0312 18:45:48.373837 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89da455f_45f0_4844_9a54_1ad46fe41d43.slice/crio-37b3350d1a7d812d99d09745f3593e34996ca02992e53e38484fa0af3d494671 WatchSource:0}: Error finding container 37b3350d1a7d812d99d09745f3593e34996ca02992e53e38484fa0af3d494671: Status 404 returned error can't find the container with id 37b3350d1a7d812d99d09745f3593e34996ca02992e53e38484fa0af3d494671 Mar 12 18:45:48.374207 master-0 kubenswrapper[29097]: I0312 18:45:48.374043 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 12 18:45:48.374207 master-0 kubenswrapper[29097]: W0312 18:45:48.374168 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5e5d447_ad0f_45a1_9613_8be6ff16ce62.slice/crio-d4de187441605d97556cf2ac68c45eeb508aaeb8378aede1038e552105fd21ce WatchSource:0}: Error finding container 
d4de187441605d97556cf2ac68c45eeb508aaeb8378aede1038e552105fd21ce: Status 404 returned error can't find the container with id d4de187441605d97556cf2ac68c45eeb508aaeb8378aede1038e552105fd21ce Mar 12 18:45:48.447989 master-0 kubenswrapper[29097]: E0312 18:45:48.446284 29097 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Mar 12 18:45:48.447989 master-0 kubenswrapper[29097]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/3c79941a-6841-4657-ade5-ec4e627743bc/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Mar 12 18:45:48.447989 master-0 kubenswrapper[29097]: > podSandboxID="0131ba4da1985331e70751da29910d2fa67376c20904a5850f7b238bfcb3675d" Mar 12 18:45:48.447989 master-0 kubenswrapper[29097]: E0312 18:45:48.446572 29097 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 18:45:48.447989 master-0 kubenswrapper[29097]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbchf8h696h5ffh5cdh585hc5hbfh597h58dhfh554h67bh9bh5c9hfch7dh5fbhbbh567h78h669hf8h65dh55dh588h5ddh88h694h669h95h8q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v67qw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000800000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-586dbdbb8c-7gbkn_openstack(3c79941a-6841-4657-ade5-ec4e627743bc): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/3c79941a-6841-4657-ade5-ec4e627743bc/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Mar 12 18:45:48.447989 master-0 kubenswrapper[29097]: > logger="UnhandledError" Mar 12 18:45:48.448368 master-0 kubenswrapper[29097]: E0312 18:45:48.448329 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/3c79941a-6841-4657-ade5-ec4e627743bc/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" podUID="3c79941a-6841-4657-ade5-ec4e627743bc" Mar 12 18:45:48.518503 master-0 kubenswrapper[29097]: W0312 18:45:48.518444 29097 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod636bd264_9a47_4480_8beb_f45a4b8c45fe.slice/crio-5da39e8976519ed7b2bb530a92aa45c17d3a2f363688b314e2e3e4b543a24f03 WatchSource:0}: Error finding container 5da39e8976519ed7b2bb530a92aa45c17d3a2f363688b314e2e3e4b543a24f03: Status 404 returned error can't find the container with id 5da39e8976519ed7b2bb530a92aa45c17d3a2f363688b314e2e3e4b543a24f03 Mar 12 18:45:48.519739 master-0 kubenswrapper[29097]: I0312 18:45:48.519680 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 12 18:45:48.868559 master-0 kubenswrapper[29097]: I0312 18:45:48.868049 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 12 18:45:48.949583 master-0 kubenswrapper[29097]: I0312 18:45:48.949471 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-8g7pv" Mar 12 18:45:48.956211 master-0 kubenswrapper[29097]: I0312 18:45:48.956179 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-tfc89" Mar 12 18:45:49.097038 master-0 kubenswrapper[29097]: I0312 18:45:49.096979 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4da79c5-a6b0-4e60-8816-a9e69f3a7e96-config\") pod \"b4da79c5-a6b0-4e60-8816-a9e69f3a7e96\" (UID: \"b4da79c5-a6b0-4e60-8816-a9e69f3a7e96\") " Mar 12 18:45:49.097280 master-0 kubenswrapper[29097]: I0312 18:45:49.097171 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa942752-02ac-4a81-9822-6b5adf5c5b91-dns-svc\") pod \"aa942752-02ac-4a81-9822-6b5adf5c5b91\" (UID: \"aa942752-02ac-4a81-9822-6b5adf5c5b91\") " Mar 12 18:45:49.097280 master-0 kubenswrapper[29097]: I0312 18:45:49.097199 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krfsq\" (UniqueName: \"kubernetes.io/projected/aa942752-02ac-4a81-9822-6b5adf5c5b91-kube-api-access-krfsq\") pod \"aa942752-02ac-4a81-9822-6b5adf5c5b91\" (UID: \"aa942752-02ac-4a81-9822-6b5adf5c5b91\") " Mar 12 18:45:49.097280 master-0 kubenswrapper[29097]: I0312 18:45:49.097228 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dn24\" (UniqueName: \"kubernetes.io/projected/b4da79c5-a6b0-4e60-8816-a9e69f3a7e96-kube-api-access-7dn24\") pod \"b4da79c5-a6b0-4e60-8816-a9e69f3a7e96\" (UID: \"b4da79c5-a6b0-4e60-8816-a9e69f3a7e96\") " Mar 12 18:45:49.097280 master-0 kubenswrapper[29097]: I0312 18:45:49.097261 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa942752-02ac-4a81-9822-6b5adf5c5b91-config\") pod \"aa942752-02ac-4a81-9822-6b5adf5c5b91\" (UID: \"aa942752-02ac-4a81-9822-6b5adf5c5b91\") " Mar 12 18:45:49.100903 master-0 kubenswrapper[29097]: I0312 18:45:49.100861 29097 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa942752-02ac-4a81-9822-6b5adf5c5b91-kube-api-access-krfsq" (OuterVolumeSpecName: "kube-api-access-krfsq") pod "aa942752-02ac-4a81-9822-6b5adf5c5b91" (UID: "aa942752-02ac-4a81-9822-6b5adf5c5b91"). InnerVolumeSpecName "kube-api-access-krfsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:45:49.101120 master-0 kubenswrapper[29097]: I0312 18:45:49.101050 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4da79c5-a6b0-4e60-8816-a9e69f3a7e96-kube-api-access-7dn24" (OuterVolumeSpecName: "kube-api-access-7dn24") pod "b4da79c5-a6b0-4e60-8816-a9e69f3a7e96" (UID: "b4da79c5-a6b0-4e60-8816-a9e69f3a7e96"). InnerVolumeSpecName "kube-api-access-7dn24". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:45:49.118314 master-0 kubenswrapper[29097]: I0312 18:45:49.118248 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa942752-02ac-4a81-9822-6b5adf5c5b91-config" (OuterVolumeSpecName: "config") pod "aa942752-02ac-4a81-9822-6b5adf5c5b91" (UID: "aa942752-02ac-4a81-9822-6b5adf5c5b91"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:45:49.119821 master-0 kubenswrapper[29097]: I0312 18:45:49.119784 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa942752-02ac-4a81-9822-6b5adf5c5b91-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aa942752-02ac-4a81-9822-6b5adf5c5b91" (UID: "aa942752-02ac-4a81-9822-6b5adf5c5b91"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:45:49.125462 master-0 kubenswrapper[29097]: I0312 18:45:49.125430 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4da79c5-a6b0-4e60-8816-a9e69f3a7e96-config" (OuterVolumeSpecName: "config") pod "b4da79c5-a6b0-4e60-8816-a9e69f3a7e96" (UID: "b4da79c5-a6b0-4e60-8816-a9e69f3a7e96"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:45:49.136883 master-0 kubenswrapper[29097]: I0312 18:45:49.136825 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"89da455f-45f0-4844-9a54-1ad46fe41d43","Type":"ContainerStarted","Data":"37b3350d1a7d812d99d09745f3593e34996ca02992e53e38484fa0af3d494671"} Mar 12 18:45:49.138850 master-0 kubenswrapper[29097]: I0312 18:45:49.138762 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"636bd264-9a47-4480-8beb-f45a4b8c45fe","Type":"ContainerStarted","Data":"5da39e8976519ed7b2bb530a92aa45c17d3a2f363688b314e2e3e4b543a24f03"} Mar 12 18:45:49.139749 master-0 kubenswrapper[29097]: I0312 18:45:49.139725 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6e9bd650-99fd-4a45-9742-0b23b242d8b6","Type":"ContainerStarted","Data":"b0746bfe0ab37ae08b406ccc707c8b71b71fde5d5ad00886283e27aee54064e9"} Mar 12 18:45:49.141030 master-0 kubenswrapper[29097]: I0312 18:45:49.140971 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b","Type":"ContainerStarted","Data":"e9459338edd34e3cf69bd75ebec445b666f53e058e8ebdcef3e1cf2d2f4c9ae2"} Mar 12 18:45:49.142781 master-0 kubenswrapper[29097]: I0312 18:45:49.142746 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-tfc89" 
event={"ID":"b4da79c5-a6b0-4e60-8816-a9e69f3a7e96","Type":"ContainerDied","Data":"441479740aba602f2e7fe9e120970881f0f1d29f957e6cd13bfc222119cc7908"} Mar 12 18:45:49.142874 master-0 kubenswrapper[29097]: I0312 18:45:49.142787 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-tfc89" Mar 12 18:45:49.142874 master-0 kubenswrapper[29097]: I0312 18:45:49.142795 29097 scope.go:117] "RemoveContainer" containerID="523281f644753ac37841a92be7ec76433a1f8e2f4dc684fc3796fef25948bbdf" Mar 12 18:45:49.144632 master-0 kubenswrapper[29097]: I0312 18:45:49.144593 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zrpnx" event={"ID":"a5e5d447-ad0f-45a1-9613-8be6ff16ce62","Type":"ContainerStarted","Data":"d4de187441605d97556cf2ac68c45eeb508aaeb8378aede1038e552105fd21ce"} Mar 12 18:45:49.146390 master-0 kubenswrapper[29097]: I0312 18:45:49.146357 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-8g7pv" Mar 12 18:45:49.146630 master-0 kubenswrapper[29097]: I0312 18:45:49.146398 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-8g7pv" event={"ID":"aa942752-02ac-4a81-9822-6b5adf5c5b91","Type":"ContainerDied","Data":"ebcd55caad03464df9c4c1db1f5d7bb9223334d379d746334c367b427aba1c7a"} Mar 12 18:45:49.148274 master-0 kubenswrapper[29097]: I0312 18:45:49.148241 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32","Type":"ContainerStarted","Data":"4a888107eae62fd524512b9527501ac82a7aadf9074c7208515755db06af25c2"} Mar 12 18:45:49.151353 master-0 kubenswrapper[29097]: I0312 18:45:49.151222 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-f289q" 
event={"ID":"6ceeba9c-e67f-49da-9b94-4359cfcd448e","Type":"ContainerStarted","Data":"b19211f40b464894e15be63a8bba3892892832c1f2a504e3bec649306b2d21f5"} Mar 12 18:45:49.199935 master-0 kubenswrapper[29097]: I0312 18:45:49.199896 29097 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa942752-02ac-4a81-9822-6b5adf5c5b91-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 18:45:49.199935 master-0 kubenswrapper[29097]: I0312 18:45:49.199933 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krfsq\" (UniqueName: \"kubernetes.io/projected/aa942752-02ac-4a81-9822-6b5adf5c5b91-kube-api-access-krfsq\") on node \"master-0\" DevicePath \"\"" Mar 12 18:45:49.200104 master-0 kubenswrapper[29097]: I0312 18:45:49.199944 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dn24\" (UniqueName: \"kubernetes.io/projected/b4da79c5-a6b0-4e60-8816-a9e69f3a7e96-kube-api-access-7dn24\") on node \"master-0\" DevicePath \"\"" Mar 12 18:45:49.200104 master-0 kubenswrapper[29097]: I0312 18:45:49.199954 29097 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa942752-02ac-4a81-9822-6b5adf5c5b91-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:45:49.200104 master-0 kubenswrapper[29097]: I0312 18:45:49.199964 29097 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4da79c5-a6b0-4e60-8816-a9e69f3a7e96-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:45:49.205579 master-0 kubenswrapper[29097]: I0312 18:45:49.205552 29097 scope.go:117] "RemoveContainer" containerID="abbaf029f5fc31f58e9847b8ff1d91066c93da12a28b98d234e7167ca8941e0e" Mar 12 18:45:49.246751 master-0 kubenswrapper[29097]: I0312 18:45:49.245630 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ff8fd9d5c-f289q" podStartSLOduration=3.66070872 
podStartE2EDuration="19.245615281s" podCreationTimestamp="2026-03-12 18:45:30 +0000 UTC" firstStartedPulling="2026-03-12 18:45:32.103666101 +0000 UTC m=+971.657646198" lastFinishedPulling="2026-03-12 18:45:47.688572662 +0000 UTC m=+987.242552759" observedRunningTime="2026-03-12 18:45:49.195579732 +0000 UTC m=+988.749559829" watchObservedRunningTime="2026-03-12 18:45:49.245615281 +0000 UTC m=+988.799595378" Mar 12 18:45:49.289357 master-0 kubenswrapper[29097]: I0312 18:45:49.287451 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-8g7pv"] Mar 12 18:45:49.315150 master-0 kubenswrapper[29097]: I0312 18:45:49.315082 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-8g7pv"] Mar 12 18:45:49.370423 master-0 kubenswrapper[29097]: I0312 18:45:49.370380 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-tfc89"] Mar 12 18:45:49.381307 master-0 kubenswrapper[29097]: I0312 18:45:49.381257 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-tfc89"] Mar 12 18:45:49.750084 master-0 kubenswrapper[29097]: I0312 18:45:49.750018 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-8vq5w"] Mar 12 18:45:50.167766 master-0 kubenswrapper[29097]: I0312 18:45:50.167642 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" event={"ID":"3c79941a-6841-4657-ade5-ec4e627743bc","Type":"ContainerStarted","Data":"354f1d02fc0c409b1bdc3194f475991d7d8ce76f3d22d1725f745bf09017f06f"} Mar 12 18:45:50.169125 master-0 kubenswrapper[29097]: I0312 18:45:50.168444 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" Mar 12 18:45:50.173641 master-0 kubenswrapper[29097]: I0312 18:45:50.173577 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-6ff8fd9d5c-f289q" Mar 12 18:45:50.197895 master-0 kubenswrapper[29097]: I0312 18:45:50.197821 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" podStartSLOduration=4.298615724 podStartE2EDuration="22.197805087s" podCreationTimestamp="2026-03-12 18:45:28 +0000 UTC" firstStartedPulling="2026-03-12 18:45:29.886710958 +0000 UTC m=+969.440691055" lastFinishedPulling="2026-03-12 18:45:47.78590032 +0000 UTC m=+987.339880418" observedRunningTime="2026-03-12 18:45:50.197418317 +0000 UTC m=+989.751398424" watchObservedRunningTime="2026-03-12 18:45:50.197805087 +0000 UTC m=+989.751785184" Mar 12 18:45:50.732844 master-0 kubenswrapper[29097]: I0312 18:45:50.732790 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa942752-02ac-4a81-9822-6b5adf5c5b91" path="/var/lib/kubelet/pods/aa942752-02ac-4a81-9822-6b5adf5c5b91/volumes" Mar 12 18:45:50.733341 master-0 kubenswrapper[29097]: I0312 18:45:50.733312 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4da79c5-a6b0-4e60-8816-a9e69f3a7e96" path="/var/lib/kubelet/pods/b4da79c5-a6b0-4e60-8816-a9e69f3a7e96/volumes" Mar 12 18:45:51.184186 master-0 kubenswrapper[29097]: I0312 18:45:51.184127 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-8vq5w" event={"ID":"4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2","Type":"ContainerStarted","Data":"db09d7cc6c4f9afdfa030f7f47abdf0010fc95e67649d4b410fb1fe770a1511d"} Mar 12 18:45:56.339787 master-0 kubenswrapper[29097]: I0312 18:45:56.339727 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ff8fd9d5c-f289q" Mar 12 18:45:56.696955 master-0 kubenswrapper[29097]: I0312 18:45:56.696866 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-7gbkn"] Mar 12 18:45:56.697345 master-0 kubenswrapper[29097]: I0312 18:45:56.697259 29097 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" podUID="3c79941a-6841-4657-ade5-ec4e627743bc" containerName="dnsmasq-dns" containerID="cri-o://354f1d02fc0c409b1bdc3194f475991d7d8ce76f3d22d1725f745bf09017f06f" gracePeriod=10 Mar 12 18:45:56.701728 master-0 kubenswrapper[29097]: I0312 18:45:56.701678 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" Mar 12 18:45:58.271529 master-0 kubenswrapper[29097]: I0312 18:45:58.271453 29097 generic.go:334] "Generic (PLEG): container finished" podID="3c79941a-6841-4657-ade5-ec4e627743bc" containerID="354f1d02fc0c409b1bdc3194f475991d7d8ce76f3d22d1725f745bf09017f06f" exitCode=0 Mar 12 18:45:58.271529 master-0 kubenswrapper[29097]: I0312 18:45:58.271531 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" event={"ID":"3c79941a-6841-4657-ade5-ec4e627743bc","Type":"ContainerDied","Data":"354f1d02fc0c409b1bdc3194f475991d7d8ce76f3d22d1725f745bf09017f06f"} Mar 12 18:46:00.470892 master-0 kubenswrapper[29097]: I0312 18:46:00.470829 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" Mar 12 18:46:00.581354 master-0 kubenswrapper[29097]: I0312 18:46:00.581263 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c79941a-6841-4657-ade5-ec4e627743bc-config\") pod \"3c79941a-6841-4657-ade5-ec4e627743bc\" (UID: \"3c79941a-6841-4657-ade5-ec4e627743bc\") " Mar 12 18:46:00.581354 master-0 kubenswrapper[29097]: I0312 18:46:00.581341 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c79941a-6841-4657-ade5-ec4e627743bc-dns-svc\") pod \"3c79941a-6841-4657-ade5-ec4e627743bc\" (UID: \"3c79941a-6841-4657-ade5-ec4e627743bc\") " Mar 12 18:46:00.581855 master-0 kubenswrapper[29097]: I0312 18:46:00.581451 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v67qw\" (UniqueName: \"kubernetes.io/projected/3c79941a-6841-4657-ade5-ec4e627743bc-kube-api-access-v67qw\") pod \"3c79941a-6841-4657-ade5-ec4e627743bc\" (UID: \"3c79941a-6841-4657-ade5-ec4e627743bc\") " Mar 12 18:46:00.589888 master-0 kubenswrapper[29097]: I0312 18:46:00.589805 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c79941a-6841-4657-ade5-ec4e627743bc-kube-api-access-v67qw" (OuterVolumeSpecName: "kube-api-access-v67qw") pod "3c79941a-6841-4657-ade5-ec4e627743bc" (UID: "3c79941a-6841-4657-ade5-ec4e627743bc"). InnerVolumeSpecName "kube-api-access-v67qw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:46:00.625117 master-0 kubenswrapper[29097]: I0312 18:46:00.625057 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c79941a-6841-4657-ade5-ec4e627743bc-config" (OuterVolumeSpecName: "config") pod "3c79941a-6841-4657-ade5-ec4e627743bc" (UID: "3c79941a-6841-4657-ade5-ec4e627743bc"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:46:00.642248 master-0 kubenswrapper[29097]: I0312 18:46:00.642197 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c79941a-6841-4657-ade5-ec4e627743bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c79941a-6841-4657-ade5-ec4e627743bc" (UID: "3c79941a-6841-4657-ade5-ec4e627743bc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:46:00.684065 master-0 kubenswrapper[29097]: I0312 18:46:00.684011 29097 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c79941a-6841-4657-ade5-ec4e627743bc-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:00.684065 master-0 kubenswrapper[29097]: I0312 18:46:00.684058 29097 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c79941a-6841-4657-ade5-ec4e627743bc-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:00.684294 master-0 kubenswrapper[29097]: I0312 18:46:00.684074 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v67qw\" (UniqueName: \"kubernetes.io/projected/3c79941a-6841-4657-ade5-ec4e627743bc-kube-api-access-v67qw\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:01.308684 master-0 kubenswrapper[29097]: I0312 18:46:01.308600 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" event={"ID":"3c79941a-6841-4657-ade5-ec4e627743bc","Type":"ContainerDied","Data":"0131ba4da1985331e70751da29910d2fa67376c20904a5850f7b238bfcb3675d"} Mar 12 18:46:01.308920 master-0 kubenswrapper[29097]: I0312 18:46:01.308698 29097 scope.go:117] "RemoveContainer" containerID="354f1d02fc0c409b1bdc3194f475991d7d8ce76f3d22d1725f745bf09017f06f" Mar 12 18:46:01.308920 master-0 kubenswrapper[29097]: I0312 18:46:01.308637 29097 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" Mar 12 18:46:01.537232 master-0 kubenswrapper[29097]: I0312 18:46:01.537147 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-7gbkn"] Mar 12 18:46:01.632553 master-0 kubenswrapper[29097]: I0312 18:46:01.632400 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-7gbkn"] Mar 12 18:46:02.187716 master-0 kubenswrapper[29097]: I0312 18:46:02.187687 29097 scope.go:117] "RemoveContainer" containerID="6ba80a6432ef18bd1a8bfab359d471df8e909e147c55d6f189b52926364fbc3d" Mar 12 18:46:02.237473 master-0 kubenswrapper[29097]: I0312 18:46:02.236679 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-jjwrg"] Mar 12 18:46:02.237473 master-0 kubenswrapper[29097]: E0312 18:46:02.237098 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c79941a-6841-4657-ade5-ec4e627743bc" containerName="init" Mar 12 18:46:02.237473 master-0 kubenswrapper[29097]: I0312 18:46:02.237110 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c79941a-6841-4657-ade5-ec4e627743bc" containerName="init" Mar 12 18:46:02.237473 master-0 kubenswrapper[29097]: E0312 18:46:02.237130 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa942752-02ac-4a81-9822-6b5adf5c5b91" containerName="init" Mar 12 18:46:02.237473 master-0 kubenswrapper[29097]: I0312 18:46:02.237136 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa942752-02ac-4a81-9822-6b5adf5c5b91" containerName="init" Mar 12 18:46:02.237473 master-0 kubenswrapper[29097]: E0312 18:46:02.237144 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c79941a-6841-4657-ade5-ec4e627743bc" containerName="dnsmasq-dns" Mar 12 18:46:02.237473 master-0 kubenswrapper[29097]: I0312 18:46:02.237151 29097 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3c79941a-6841-4657-ade5-ec4e627743bc" containerName="dnsmasq-dns" Mar 12 18:46:02.237473 master-0 kubenswrapper[29097]: E0312 18:46:02.237170 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4da79c5-a6b0-4e60-8816-a9e69f3a7e96" containerName="init" Mar 12 18:46:02.237473 master-0 kubenswrapper[29097]: I0312 18:46:02.237176 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4da79c5-a6b0-4e60-8816-a9e69f3a7e96" containerName="init" Mar 12 18:46:02.237473 master-0 kubenswrapper[29097]: I0312 18:46:02.237348 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa942752-02ac-4a81-9822-6b5adf5c5b91" containerName="init" Mar 12 18:46:02.237473 master-0 kubenswrapper[29097]: I0312 18:46:02.237379 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c79941a-6841-4657-ade5-ec4e627743bc" containerName="dnsmasq-dns" Mar 12 18:46:02.237473 master-0 kubenswrapper[29097]: I0312 18:46:02.237408 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4da79c5-a6b0-4e60-8816-a9e69f3a7e96" containerName="init" Mar 12 18:46:02.238110 master-0 kubenswrapper[29097]: I0312 18:46:02.238054 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-jjwrg" Mar 12 18:46:02.254009 master-0 kubenswrapper[29097]: I0312 18:46:02.242212 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jjwrg"] Mar 12 18:46:02.254009 master-0 kubenswrapper[29097]: I0312 18:46:02.251453 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 12 18:46:02.339626 master-0 kubenswrapper[29097]: I0312 18:46:02.338666 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h7xq\" (UniqueName: \"kubernetes.io/projected/a28d1a0b-dbf6-4b4f-b2d2-c774917032e4-kube-api-access-6h7xq\") pod \"ovn-controller-metrics-jjwrg\" (UID: \"a28d1a0b-dbf6-4b4f-b2d2-c774917032e4\") " pod="openstack/ovn-controller-metrics-jjwrg" Mar 12 18:46:02.339626 master-0 kubenswrapper[29097]: I0312 18:46:02.338724 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a28d1a0b-dbf6-4b4f-b2d2-c774917032e4-ovs-rundir\") pod \"ovn-controller-metrics-jjwrg\" (UID: \"a28d1a0b-dbf6-4b4f-b2d2-c774917032e4\") " pod="openstack/ovn-controller-metrics-jjwrg" Mar 12 18:46:02.339626 master-0 kubenswrapper[29097]: I0312 18:46:02.338774 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a28d1a0b-dbf6-4b4f-b2d2-c774917032e4-ovn-rundir\") pod \"ovn-controller-metrics-jjwrg\" (UID: \"a28d1a0b-dbf6-4b4f-b2d2-c774917032e4\") " pod="openstack/ovn-controller-metrics-jjwrg" Mar 12 18:46:02.339626 master-0 kubenswrapper[29097]: I0312 18:46:02.338811 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28d1a0b-dbf6-4b4f-b2d2-c774917032e4-config\") pod 
\"ovn-controller-metrics-jjwrg\" (UID: \"a28d1a0b-dbf6-4b4f-b2d2-c774917032e4\") " pod="openstack/ovn-controller-metrics-jjwrg" Mar 12 18:46:02.339626 master-0 kubenswrapper[29097]: I0312 18:46:02.338830 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a28d1a0b-dbf6-4b4f-b2d2-c774917032e4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jjwrg\" (UID: \"a28d1a0b-dbf6-4b4f-b2d2-c774917032e4\") " pod="openstack/ovn-controller-metrics-jjwrg" Mar 12 18:46:02.339626 master-0 kubenswrapper[29097]: I0312 18:46:02.338855 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a28d1a0b-dbf6-4b4f-b2d2-c774917032e4-combined-ca-bundle\") pod \"ovn-controller-metrics-jjwrg\" (UID: \"a28d1a0b-dbf6-4b4f-b2d2-c774917032e4\") " pod="openstack/ovn-controller-metrics-jjwrg" Mar 12 18:46:02.440262 master-0 kubenswrapper[29097]: I0312 18:46:02.440146 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a28d1a0b-dbf6-4b4f-b2d2-c774917032e4-ovn-rundir\") pod \"ovn-controller-metrics-jjwrg\" (UID: \"a28d1a0b-dbf6-4b4f-b2d2-c774917032e4\") " pod="openstack/ovn-controller-metrics-jjwrg" Mar 12 18:46:02.440262 master-0 kubenswrapper[29097]: I0312 18:46:02.440239 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28d1a0b-dbf6-4b4f-b2d2-c774917032e4-config\") pod \"ovn-controller-metrics-jjwrg\" (UID: \"a28d1a0b-dbf6-4b4f-b2d2-c774917032e4\") " pod="openstack/ovn-controller-metrics-jjwrg" Mar 12 18:46:02.440262 master-0 kubenswrapper[29097]: I0312 18:46:02.440260 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a28d1a0b-dbf6-4b4f-b2d2-c774917032e4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jjwrg\" (UID: \"a28d1a0b-dbf6-4b4f-b2d2-c774917032e4\") " pod="openstack/ovn-controller-metrics-jjwrg" Mar 12 18:46:02.440544 master-0 kubenswrapper[29097]: I0312 18:46:02.440292 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a28d1a0b-dbf6-4b4f-b2d2-c774917032e4-combined-ca-bundle\") pod \"ovn-controller-metrics-jjwrg\" (UID: \"a28d1a0b-dbf6-4b4f-b2d2-c774917032e4\") " pod="openstack/ovn-controller-metrics-jjwrg" Mar 12 18:46:02.440544 master-0 kubenswrapper[29097]: I0312 18:46:02.440373 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h7xq\" (UniqueName: \"kubernetes.io/projected/a28d1a0b-dbf6-4b4f-b2d2-c774917032e4-kube-api-access-6h7xq\") pod \"ovn-controller-metrics-jjwrg\" (UID: \"a28d1a0b-dbf6-4b4f-b2d2-c774917032e4\") " pod="openstack/ovn-controller-metrics-jjwrg" Mar 12 18:46:02.440544 master-0 kubenswrapper[29097]: I0312 18:46:02.440405 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a28d1a0b-dbf6-4b4f-b2d2-c774917032e4-ovs-rundir\") pod \"ovn-controller-metrics-jjwrg\" (UID: \"a28d1a0b-dbf6-4b4f-b2d2-c774917032e4\") " pod="openstack/ovn-controller-metrics-jjwrg" Mar 12 18:46:02.440544 master-0 kubenswrapper[29097]: I0312 18:46:02.440510 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a28d1a0b-dbf6-4b4f-b2d2-c774917032e4-ovs-rundir\") pod \"ovn-controller-metrics-jjwrg\" (UID: \"a28d1a0b-dbf6-4b4f-b2d2-c774917032e4\") " pod="openstack/ovn-controller-metrics-jjwrg" Mar 12 18:46:02.440675 master-0 kubenswrapper[29097]: I0312 18:46:02.440585 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" 
(UniqueName: \"kubernetes.io/host-path/a28d1a0b-dbf6-4b4f-b2d2-c774917032e4-ovn-rundir\") pod \"ovn-controller-metrics-jjwrg\" (UID: \"a28d1a0b-dbf6-4b4f-b2d2-c774917032e4\") " pod="openstack/ovn-controller-metrics-jjwrg" Mar 12 18:46:02.441454 master-0 kubenswrapper[29097]: I0312 18:46:02.441429 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a28d1a0b-dbf6-4b4f-b2d2-c774917032e4-config\") pod \"ovn-controller-metrics-jjwrg\" (UID: \"a28d1a0b-dbf6-4b4f-b2d2-c774917032e4\") " pod="openstack/ovn-controller-metrics-jjwrg" Mar 12 18:46:02.444023 master-0 kubenswrapper[29097]: I0312 18:46:02.443936 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a28d1a0b-dbf6-4b4f-b2d2-c774917032e4-combined-ca-bundle\") pod \"ovn-controller-metrics-jjwrg\" (UID: \"a28d1a0b-dbf6-4b4f-b2d2-c774917032e4\") " pod="openstack/ovn-controller-metrics-jjwrg" Mar 12 18:46:02.460624 master-0 kubenswrapper[29097]: I0312 18:46:02.458958 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a28d1a0b-dbf6-4b4f-b2d2-c774917032e4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jjwrg\" (UID: \"a28d1a0b-dbf6-4b4f-b2d2-c774917032e4\") " pod="openstack/ovn-controller-metrics-jjwrg" Mar 12 18:46:02.485028 master-0 kubenswrapper[29097]: I0312 18:46:02.484953 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h7xq\" (UniqueName: \"kubernetes.io/projected/a28d1a0b-dbf6-4b4f-b2d2-c774917032e4-kube-api-access-6h7xq\") pod \"ovn-controller-metrics-jjwrg\" (UID: \"a28d1a0b-dbf6-4b4f-b2d2-c774917032e4\") " pod="openstack/ovn-controller-metrics-jjwrg" Mar 12 18:46:02.512808 master-0 kubenswrapper[29097]: I0312 18:46:02.511648 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65db7fd8ff-mz7n4"] Mar 
12 18:46:02.514363 master-0 kubenswrapper[29097]: I0312 18:46:02.513331 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65db7fd8ff-mz7n4" Mar 12 18:46:02.527364 master-0 kubenswrapper[29097]: I0312 18:46:02.517028 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 12 18:46:02.527364 master-0 kubenswrapper[29097]: I0312 18:46:02.522346 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65db7fd8ff-mz7n4"] Mar 12 18:46:02.545240 master-0 kubenswrapper[29097]: I0312 18:46:02.544634 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-config\") pod \"dnsmasq-dns-65db7fd8ff-mz7n4\" (UID: \"a4a5a90f-6d0a-4042-b3c6-b211bef2e42b\") " pod="openstack/dnsmasq-dns-65db7fd8ff-mz7n4" Mar 12 18:46:02.545240 master-0 kubenswrapper[29097]: I0312 18:46:02.544852 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-dns-svc\") pod \"dnsmasq-dns-65db7fd8ff-mz7n4\" (UID: \"a4a5a90f-6d0a-4042-b3c6-b211bef2e42b\") " pod="openstack/dnsmasq-dns-65db7fd8ff-mz7n4" Mar 12 18:46:02.545240 master-0 kubenswrapper[29097]: I0312 18:46:02.544887 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-ovsdbserver-sb\") pod \"dnsmasq-dns-65db7fd8ff-mz7n4\" (UID: \"a4a5a90f-6d0a-4042-b3c6-b211bef2e42b\") " pod="openstack/dnsmasq-dns-65db7fd8ff-mz7n4" Mar 12 18:46:02.545240 master-0 kubenswrapper[29097]: I0312 18:46:02.544954 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb9dv\" (UniqueName: 
\"kubernetes.io/projected/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-kube-api-access-kb9dv\") pod \"dnsmasq-dns-65db7fd8ff-mz7n4\" (UID: \"a4a5a90f-6d0a-4042-b3c6-b211bef2e42b\") " pod="openstack/dnsmasq-dns-65db7fd8ff-mz7n4" Mar 12 18:46:02.640195 master-0 kubenswrapper[29097]: I0312 18:46:02.640138 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jjwrg" Mar 12 18:46:02.646171 master-0 kubenswrapper[29097]: I0312 18:46:02.646126 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-config\") pod \"dnsmasq-dns-65db7fd8ff-mz7n4\" (UID: \"a4a5a90f-6d0a-4042-b3c6-b211bef2e42b\") " pod="openstack/dnsmasq-dns-65db7fd8ff-mz7n4" Mar 12 18:46:02.646919 master-0 kubenswrapper[29097]: I0312 18:46:02.646832 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-dns-svc\") pod \"dnsmasq-dns-65db7fd8ff-mz7n4\" (UID: \"a4a5a90f-6d0a-4042-b3c6-b211bef2e42b\") " pod="openstack/dnsmasq-dns-65db7fd8ff-mz7n4" Mar 12 18:46:02.646919 master-0 kubenswrapper[29097]: I0312 18:46:02.646898 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-ovsdbserver-sb\") pod \"dnsmasq-dns-65db7fd8ff-mz7n4\" (UID: \"a4a5a90f-6d0a-4042-b3c6-b211bef2e42b\") " pod="openstack/dnsmasq-dns-65db7fd8ff-mz7n4" Mar 12 18:46:02.647008 master-0 kubenswrapper[29097]: I0312 18:46:02.646948 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb9dv\" (UniqueName: \"kubernetes.io/projected/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-kube-api-access-kb9dv\") pod \"dnsmasq-dns-65db7fd8ff-mz7n4\" (UID: \"a4a5a90f-6d0a-4042-b3c6-b211bef2e42b\") " 
pod="openstack/dnsmasq-dns-65db7fd8ff-mz7n4" Mar 12 18:46:02.647796 master-0 kubenswrapper[29097]: I0312 18:46:02.647749 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-ovsdbserver-sb\") pod \"dnsmasq-dns-65db7fd8ff-mz7n4\" (UID: \"a4a5a90f-6d0a-4042-b3c6-b211bef2e42b\") " pod="openstack/dnsmasq-dns-65db7fd8ff-mz7n4" Mar 12 18:46:02.647865 master-0 kubenswrapper[29097]: I0312 18:46:02.647839 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-dns-svc\") pod \"dnsmasq-dns-65db7fd8ff-mz7n4\" (UID: \"a4a5a90f-6d0a-4042-b3c6-b211bef2e42b\") " pod="openstack/dnsmasq-dns-65db7fd8ff-mz7n4" Mar 12 18:46:02.654681 master-0 kubenswrapper[29097]: I0312 18:46:02.654165 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65db7fd8ff-mz7n4"] Mar 12 18:46:02.655681 master-0 kubenswrapper[29097]: E0312 18:46:02.655623 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config kube-api-access-kb9dv], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-65db7fd8ff-mz7n4" podUID="a4a5a90f-6d0a-4042-b3c6-b211bef2e42b" Mar 12 18:46:02.655759 master-0 kubenswrapper[29097]: I0312 18:46:02.655705 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-config\") pod \"dnsmasq-dns-65db7fd8ff-mz7n4\" (UID: \"a4a5a90f-6d0a-4042-b3c6-b211bef2e42b\") " pod="openstack/dnsmasq-dns-65db7fd8ff-mz7n4" Mar 12 18:46:02.711998 master-0 kubenswrapper[29097]: I0312 18:46:02.711942 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76f498f559-tw8sw"] Mar 12 18:46:02.718359 master-0 kubenswrapper[29097]: I0312 18:46:02.718319 29097 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76f498f559-tw8sw" Mar 12 18:46:02.722532 master-0 kubenswrapper[29097]: I0312 18:46:02.719872 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 12 18:46:02.736925 master-0 kubenswrapper[29097]: I0312 18:46:02.736604 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb9dv\" (UniqueName: \"kubernetes.io/projected/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-kube-api-access-kb9dv\") pod \"dnsmasq-dns-65db7fd8ff-mz7n4\" (UID: \"a4a5a90f-6d0a-4042-b3c6-b211bef2e42b\") " pod="openstack/dnsmasq-dns-65db7fd8ff-mz7n4" Mar 12 18:46:02.752454 master-0 kubenswrapper[29097]: I0312 18:46:02.752207 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c79941a-6841-4657-ade5-ec4e627743bc" path="/var/lib/kubelet/pods/3c79941a-6841-4657-ade5-ec4e627743bc/volumes" Mar 12 18:46:02.752952 master-0 kubenswrapper[29097]: I0312 18:46:02.752923 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76f498f559-tw8sw"] Mar 12 18:46:02.861592 master-0 kubenswrapper[29097]: I0312 18:46:02.851407 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-ovsdbserver-sb\") pod \"dnsmasq-dns-76f498f559-tw8sw\" (UID: \"e1fa9220-74d6-4409-9db2-822727793acf\") " pod="openstack/dnsmasq-dns-76f498f559-tw8sw" Mar 12 18:46:02.861592 master-0 kubenswrapper[29097]: I0312 18:46:02.851478 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-ovsdbserver-nb\") pod \"dnsmasq-dns-76f498f559-tw8sw\" (UID: \"e1fa9220-74d6-4409-9db2-822727793acf\") " pod="openstack/dnsmasq-dns-76f498f559-tw8sw" Mar 12 18:46:02.861592 
master-0 kubenswrapper[29097]: I0312 18:46:02.851676 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-dns-svc\") pod \"dnsmasq-dns-76f498f559-tw8sw\" (UID: \"e1fa9220-74d6-4409-9db2-822727793acf\") " pod="openstack/dnsmasq-dns-76f498f559-tw8sw" Mar 12 18:46:02.861592 master-0 kubenswrapper[29097]: I0312 18:46:02.859627 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-config\") pod \"dnsmasq-dns-76f498f559-tw8sw\" (UID: \"e1fa9220-74d6-4409-9db2-822727793acf\") " pod="openstack/dnsmasq-dns-76f498f559-tw8sw" Mar 12 18:46:02.861592 master-0 kubenswrapper[29097]: I0312 18:46:02.859868 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp6hw\" (UniqueName: \"kubernetes.io/projected/e1fa9220-74d6-4409-9db2-822727793acf-kube-api-access-sp6hw\") pod \"dnsmasq-dns-76f498f559-tw8sw\" (UID: \"e1fa9220-74d6-4409-9db2-822727793acf\") " pod="openstack/dnsmasq-dns-76f498f559-tw8sw" Mar 12 18:46:02.963535 master-0 kubenswrapper[29097]: I0312 18:46:02.962446 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-dns-svc\") pod \"dnsmasq-dns-76f498f559-tw8sw\" (UID: \"e1fa9220-74d6-4409-9db2-822727793acf\") " pod="openstack/dnsmasq-dns-76f498f559-tw8sw" Mar 12 18:46:02.963535 master-0 kubenswrapper[29097]: I0312 18:46:02.963335 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-config\") pod \"dnsmasq-dns-76f498f559-tw8sw\" (UID: \"e1fa9220-74d6-4409-9db2-822727793acf\") " 
pod="openstack/dnsmasq-dns-76f498f559-tw8sw" Mar 12 18:46:02.963535 master-0 kubenswrapper[29097]: I0312 18:46:02.963417 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp6hw\" (UniqueName: \"kubernetes.io/projected/e1fa9220-74d6-4409-9db2-822727793acf-kube-api-access-sp6hw\") pod \"dnsmasq-dns-76f498f559-tw8sw\" (UID: \"e1fa9220-74d6-4409-9db2-822727793acf\") " pod="openstack/dnsmasq-dns-76f498f559-tw8sw" Mar 12 18:46:02.963535 master-0 kubenswrapper[29097]: I0312 18:46:02.963474 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-ovsdbserver-sb\") pod \"dnsmasq-dns-76f498f559-tw8sw\" (UID: \"e1fa9220-74d6-4409-9db2-822727793acf\") " pod="openstack/dnsmasq-dns-76f498f559-tw8sw" Mar 12 18:46:02.963925 master-0 kubenswrapper[29097]: I0312 18:46:02.963888 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-ovsdbserver-nb\") pod \"dnsmasq-dns-76f498f559-tw8sw\" (UID: \"e1fa9220-74d6-4409-9db2-822727793acf\") " pod="openstack/dnsmasq-dns-76f498f559-tw8sw" Mar 12 18:46:02.964133 master-0 kubenswrapper[29097]: I0312 18:46:02.963287 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-dns-svc\") pod \"dnsmasq-dns-76f498f559-tw8sw\" (UID: \"e1fa9220-74d6-4409-9db2-822727793acf\") " pod="openstack/dnsmasq-dns-76f498f559-tw8sw" Mar 12 18:46:02.964433 master-0 kubenswrapper[29097]: I0312 18:46:02.964407 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-config\") pod \"dnsmasq-dns-76f498f559-tw8sw\" (UID: \"e1fa9220-74d6-4409-9db2-822727793acf\") " 
pod="openstack/dnsmasq-dns-76f498f559-tw8sw" Mar 12 18:46:02.965652 master-0 kubenswrapper[29097]: I0312 18:46:02.965636 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-ovsdbserver-sb\") pod \"dnsmasq-dns-76f498f559-tw8sw\" (UID: \"e1fa9220-74d6-4409-9db2-822727793acf\") " pod="openstack/dnsmasq-dns-76f498f559-tw8sw" Mar 12 18:46:02.967006 master-0 kubenswrapper[29097]: I0312 18:46:02.966974 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-ovsdbserver-nb\") pod \"dnsmasq-dns-76f498f559-tw8sw\" (UID: \"e1fa9220-74d6-4409-9db2-822727793acf\") " pod="openstack/dnsmasq-dns-76f498f559-tw8sw" Mar 12 18:46:02.981483 master-0 kubenswrapper[29097]: I0312 18:46:02.981439 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp6hw\" (UniqueName: \"kubernetes.io/projected/e1fa9220-74d6-4409-9db2-822727793acf-kube-api-access-sp6hw\") pod \"dnsmasq-dns-76f498f559-tw8sw\" (UID: \"e1fa9220-74d6-4409-9db2-822727793acf\") " pod="openstack/dnsmasq-dns-76f498f559-tw8sw" Mar 12 18:46:03.120492 master-0 kubenswrapper[29097]: I0312 18:46:03.120418 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76f498f559-tw8sw" Mar 12 18:46:03.202535 master-0 kubenswrapper[29097]: I0312 18:46:03.201529 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jjwrg"] Mar 12 18:46:03.340926 master-0 kubenswrapper[29097]: I0312 18:46:03.340257 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jjwrg" event={"ID":"a28d1a0b-dbf6-4b4f-b2d2-c774917032e4","Type":"ContainerStarted","Data":"1b65a501445adbb0685b9edf1e85068bd59631491c6234c10b4cd6adbbf13150"} Mar 12 18:46:03.355140 master-0 kubenswrapper[29097]: I0312 18:46:03.354215 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32","Type":"ContainerStarted","Data":"7c01eddabe66f27d26390d1fac743fe94aae9ac6795dda80ec19a9f4a6a2e07e"} Mar 12 18:46:03.359124 master-0 kubenswrapper[29097]: I0312 18:46:03.359076 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3c377976-30da-4335-b35e-e2e65789e21d","Type":"ContainerStarted","Data":"4d0626092542d5790fa98430a74953c547cc8465625e0441b91785c632fc14de"} Mar 12 18:46:03.359546 master-0 kubenswrapper[29097]: I0312 18:46:03.359502 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 12 18:46:03.361362 master-0 kubenswrapper[29097]: I0312 18:46:03.361315 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"636bd264-9a47-4480-8beb-f45a4b8c45fe","Type":"ContainerStarted","Data":"c539e96149bee975b5db2c9590ab0499cf0f5b9f1ab16223476358edfb0c0dd6"} Mar 12 18:46:03.369035 master-0 kubenswrapper[29097]: I0312 18:46:03.367114 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"6e9bd650-99fd-4a45-9742-0b23b242d8b6","Type":"ContainerStarted","Data":"67d6bdbc18490193d6a653620c9c1af55807d9d542348f6e7383aeb61bcdecdc"} Mar 12 18:46:03.369974 master-0 kubenswrapper[29097]: I0312 18:46:03.369921 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65db7fd8ff-mz7n4" Mar 12 18:46:03.370195 master-0 kubenswrapper[29097]: I0312 18:46:03.370154 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zrpnx" event={"ID":"a5e5d447-ad0f-45a1-9613-8be6ff16ce62","Type":"ContainerStarted","Data":"0591e7e26b422a6e9c39a975de327cc095e6e7fb72f1d2d86d8cb9f295d6e5ad"} Mar 12 18:46:03.370247 master-0 kubenswrapper[29097]: I0312 18:46:03.370201 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-zrpnx" Mar 12 18:46:03.425747 master-0 kubenswrapper[29097]: I0312 18:46:03.418934 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=16.160765714 podStartE2EDuration="30.418917254s" podCreationTimestamp="2026-03-12 18:45:33 +0000 UTC" firstStartedPulling="2026-03-12 18:45:47.88408864 +0000 UTC m=+987.438068727" lastFinishedPulling="2026-03-12 18:46:02.14224017 +0000 UTC m=+1001.696220267" observedRunningTime="2026-03-12 18:46:03.413763046 +0000 UTC m=+1002.967743163" watchObservedRunningTime="2026-03-12 18:46:03.418917254 +0000 UTC m=+1002.972897351" Mar 12 18:46:03.452971 master-0 kubenswrapper[29097]: I0312 18:46:03.452894 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-zrpnx" podStartSLOduration=11.640430971 podStartE2EDuration="25.452872421s" podCreationTimestamp="2026-03-12 18:45:38 +0000 UTC" firstStartedPulling="2026-03-12 18:45:48.379981353 +0000 UTC m=+987.933961450" lastFinishedPulling="2026-03-12 18:46:02.192422803 +0000 UTC m=+1001.746402900" observedRunningTime="2026-03-12 18:46:03.444212825 
+0000 UTC m=+1002.998192922" watchObservedRunningTime="2026-03-12 18:46:03.452872421 +0000 UTC m=+1003.006852518" Mar 12 18:46:03.559916 master-0 kubenswrapper[29097]: I0312 18:46:03.558956 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65db7fd8ff-mz7n4" Mar 12 18:46:03.596813 master-0 kubenswrapper[29097]: I0312 18:46:03.595937 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76f498f559-tw8sw"] Mar 12 18:46:03.679206 master-0 kubenswrapper[29097]: I0312 18:46:03.678464 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb9dv\" (UniqueName: \"kubernetes.io/projected/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-kube-api-access-kb9dv\") pod \"a4a5a90f-6d0a-4042-b3c6-b211bef2e42b\" (UID: \"a4a5a90f-6d0a-4042-b3c6-b211bef2e42b\") " Mar 12 18:46:03.679206 master-0 kubenswrapper[29097]: I0312 18:46:03.678603 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-dns-svc\") pod \"a4a5a90f-6d0a-4042-b3c6-b211bef2e42b\" (UID: \"a4a5a90f-6d0a-4042-b3c6-b211bef2e42b\") " Mar 12 18:46:03.679206 master-0 kubenswrapper[29097]: I0312 18:46:03.678660 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-config\") pod \"a4a5a90f-6d0a-4042-b3c6-b211bef2e42b\" (UID: \"a4a5a90f-6d0a-4042-b3c6-b211bef2e42b\") " Mar 12 18:46:03.679206 master-0 kubenswrapper[29097]: I0312 18:46:03.678780 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-ovsdbserver-sb\") pod \"a4a5a90f-6d0a-4042-b3c6-b211bef2e42b\" (UID: \"a4a5a90f-6d0a-4042-b3c6-b211bef2e42b\") " Mar 12 18:46:03.679643 master-0 kubenswrapper[29097]: 
I0312 18:46:03.679242 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-config" (OuterVolumeSpecName: "config") pod "a4a5a90f-6d0a-4042-b3c6-b211bef2e42b" (UID: "a4a5a90f-6d0a-4042-b3c6-b211bef2e42b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:46:03.679643 master-0 kubenswrapper[29097]: I0312 18:46:03.679618 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a4a5a90f-6d0a-4042-b3c6-b211bef2e42b" (UID: "a4a5a90f-6d0a-4042-b3c6-b211bef2e42b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:46:03.679782 master-0 kubenswrapper[29097]: I0312 18:46:03.679724 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a4a5a90f-6d0a-4042-b3c6-b211bef2e42b" (UID: "a4a5a90f-6d0a-4042-b3c6-b211bef2e42b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:46:03.686289 master-0 kubenswrapper[29097]: I0312 18:46:03.686033 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-kube-api-access-kb9dv" (OuterVolumeSpecName: "kube-api-access-kb9dv") pod "a4a5a90f-6d0a-4042-b3c6-b211bef2e42b" (UID: "a4a5a90f-6d0a-4042-b3c6-b211bef2e42b"). InnerVolumeSpecName "kube-api-access-kb9dv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:46:03.781413 master-0 kubenswrapper[29097]: I0312 18:46:03.781365 29097 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:03.781528 master-0 kubenswrapper[29097]: I0312 18:46:03.781458 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb9dv\" (UniqueName: \"kubernetes.io/projected/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-kube-api-access-kb9dv\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:03.781528 master-0 kubenswrapper[29097]: I0312 18:46:03.781474 29097 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:03.781528 master-0 kubenswrapper[29097]: I0312 18:46:03.781483 29097 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:03.909741 master-0 kubenswrapper[29097]: I0312 18:46:03.909639 29097 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-586dbdbb8c-7gbkn" podUID="3c79941a-6841-4657-ade5-ec4e627743bc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.169:5353: i/o timeout" Mar 12 18:46:04.599590 master-0 kubenswrapper[29097]: I0312 18:46:04.599495 29097 generic.go:334] "Generic (PLEG): container finished" podID="4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2" containerID="9a0840f9843ec84ffa8dabd061974ecd79c275e35e0d59e358b98c9d6b13f9a1" exitCode=0 Mar 12 18:46:04.600185 master-0 kubenswrapper[29097]: I0312 18:46:04.599680 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-8vq5w" 
event={"ID":"4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2","Type":"ContainerDied","Data":"9a0840f9843ec84ffa8dabd061974ecd79c275e35e0d59e358b98c9d6b13f9a1"} Mar 12 18:46:04.609073 master-0 kubenswrapper[29097]: I0312 18:46:04.605395 29097 generic.go:334] "Generic (PLEG): container finished" podID="e1fa9220-74d6-4409-9db2-822727793acf" containerID="f7782f99d7d6184bb98f78046ccfa508d958a5a48ee4d32535d436ab810210f7" exitCode=0 Mar 12 18:46:04.609073 master-0 kubenswrapper[29097]: I0312 18:46:04.605502 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f498f559-tw8sw" event={"ID":"e1fa9220-74d6-4409-9db2-822727793acf","Type":"ContainerDied","Data":"f7782f99d7d6184bb98f78046ccfa508d958a5a48ee4d32535d436ab810210f7"} Mar 12 18:46:04.609073 master-0 kubenswrapper[29097]: I0312 18:46:04.606109 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f498f559-tw8sw" event={"ID":"e1fa9220-74d6-4409-9db2-822727793acf","Type":"ContainerStarted","Data":"6561cb27937dedf8030bf37ed440cd6246a6b7bab572a7faccb500c6071cfae9"} Mar 12 18:46:04.612355 master-0 kubenswrapper[29097]: I0312 18:46:04.612179 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49290c2f-177f-4a5e-8e1e-cf105e962c5b","Type":"ContainerStarted","Data":"b7c613ef0a8dcf96aa944c9cda7969d6d177e80fd85249a6b2296a24c402b59e"} Mar 12 18:46:04.618530 master-0 kubenswrapper[29097]: I0312 18:46:04.618465 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b","Type":"ContainerStarted","Data":"472e017aef95ca4afce2e32079616d028e66de14f46787ba30b363aa1398ef8c"} Mar 12 18:46:04.627379 master-0 kubenswrapper[29097]: I0312 18:46:04.627144 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65db7fd8ff-mz7n4" Mar 12 18:46:04.628163 master-0 kubenswrapper[29097]: I0312 18:46:04.628093 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"89da455f-45f0-4844-9a54-1ad46fe41d43","Type":"ContainerStarted","Data":"a8f3c3f312d23c561cf4f2b7a0f2ffead26ec83f1a86027dc4849c9d8a8be302"} Mar 12 18:46:04.859247 master-0 kubenswrapper[29097]: I0312 18:46:04.859088 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65db7fd8ff-mz7n4"] Mar 12 18:46:04.865944 master-0 kubenswrapper[29097]: I0312 18:46:04.865880 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65db7fd8ff-mz7n4"] Mar 12 18:46:05.693606 master-0 kubenswrapper[29097]: I0312 18:46:05.690648 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-8vq5w" event={"ID":"4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2","Type":"ContainerStarted","Data":"69b5c87fe80c76c794294aebba8a6c3539a452163673f69e679192cf3e43dbd6"} Mar 12 18:46:05.693606 master-0 kubenswrapper[29097]: I0312 18:46:05.690695 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-8vq5w" event={"ID":"4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2","Type":"ContainerStarted","Data":"9945de9cc3f15e002ae1e316f4d58eec4d70b51e3b448d1c6d79f787612639dd"} Mar 12 18:46:05.693606 master-0 kubenswrapper[29097]: I0312 18:46:05.691562 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-8vq5w" Mar 12 18:46:05.693606 master-0 kubenswrapper[29097]: I0312 18:46:05.691617 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-8vq5w" Mar 12 18:46:05.706247 master-0 kubenswrapper[29097]: I0312 18:46:05.705220 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f498f559-tw8sw" 
event={"ID":"e1fa9220-74d6-4409-9db2-822727793acf","Type":"ContainerStarted","Data":"68ee4f5c05c15e67d2920ce1e81f010a91948b5ef780d1d513fe5bc14d715085"} Mar 12 18:46:05.706247 master-0 kubenswrapper[29097]: I0312 18:46:05.705271 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76f498f559-tw8sw" Mar 12 18:46:05.722646 master-0 kubenswrapper[29097]: I0312 18:46:05.722576 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-8vq5w" podStartSLOduration=15.98352397 podStartE2EDuration="27.722555249s" podCreationTimestamp="2026-03-12 18:45:38 +0000 UTC" firstStartedPulling="2026-03-12 18:45:50.674711856 +0000 UTC m=+990.228691953" lastFinishedPulling="2026-03-12 18:46:02.413743135 +0000 UTC m=+1001.967723232" observedRunningTime="2026-03-12 18:46:05.717866112 +0000 UTC m=+1005.271846239" watchObservedRunningTime="2026-03-12 18:46:05.722555249 +0000 UTC m=+1005.276535346" Mar 12 18:46:05.756166 master-0 kubenswrapper[29097]: I0312 18:46:05.756090 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76f498f559-tw8sw" podStartSLOduration=3.7560665650000002 podStartE2EDuration="3.756066565s" podCreationTimestamp="2026-03-12 18:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:46:05.743560293 +0000 UTC m=+1005.297540390" watchObservedRunningTime="2026-03-12 18:46:05.756066565 +0000 UTC m=+1005.310046662" Mar 12 18:46:06.730470 master-0 kubenswrapper[29097]: I0312 18:46:06.730416 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4a5a90f-6d0a-4042-b3c6-b211bef2e42b" path="/var/lib/kubelet/pods/a4a5a90f-6d0a-4042-b3c6-b211bef2e42b/volumes" Mar 12 18:46:08.634770 master-0 kubenswrapper[29097]: I0312 18:46:08.634715 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/memcached-0" Mar 12 18:46:11.767036 master-0 kubenswrapper[29097]: I0312 18:46:11.766982 29097 generic.go:334] "Generic (PLEG): container finished" podID="7ec3f557-015f-4fc3-b6cd-9d7f0f976e32" containerID="7c01eddabe66f27d26390d1fac743fe94aae9ac6795dda80ec19a9f4a6a2e07e" exitCode=0 Mar 12 18:46:11.767574 master-0 kubenswrapper[29097]: I0312 18:46:11.767057 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32","Type":"ContainerDied","Data":"7c01eddabe66f27d26390d1fac743fe94aae9ac6795dda80ec19a9f4a6a2e07e"} Mar 12 18:46:11.771211 master-0 kubenswrapper[29097]: I0312 18:46:11.771157 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"636bd264-9a47-4480-8beb-f45a4b8c45fe","Type":"ContainerStarted","Data":"9c7262422ce04b335e88cdf209a64cf269922a70c9b5558a1898b2500d0f1f99"} Mar 12 18:46:11.774350 master-0 kubenswrapper[29097]: I0312 18:46:11.774311 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6e9bd650-99fd-4a45-9742-0b23b242d8b6","Type":"ContainerStarted","Data":"8aee376c55fa8c9d5cf906a12c250bac98500eb747006998a9bed7dba709bd99"} Mar 12 18:46:11.776800 master-0 kubenswrapper[29097]: I0312 18:46:11.776748 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jjwrg" event={"ID":"a28d1a0b-dbf6-4b4f-b2d2-c774917032e4","Type":"ContainerStarted","Data":"86c15d5a3674ca48feda939e8e04ef3fc56dbeb5dd689cc271e4c553c5e2786e"} Mar 12 18:46:11.779294 master-0 kubenswrapper[29097]: I0312 18:46:11.779240 29097 generic.go:334] "Generic (PLEG): container finished" podID="89da455f-45f0-4844-9a54-1ad46fe41d43" containerID="a8f3c3f312d23c561cf4f2b7a0f2ffead26ec83f1a86027dc4849c9d8a8be302" exitCode=0 Mar 12 18:46:11.779294 master-0 kubenswrapper[29097]: I0312 18:46:11.779282 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-galera-0" event={"ID":"89da455f-45f0-4844-9a54-1ad46fe41d43","Type":"ContainerDied","Data":"a8f3c3f312d23c561cf4f2b7a0f2ffead26ec83f1a86027dc4849c9d8a8be302"} Mar 12 18:46:11.829995 master-0 kubenswrapper[29097]: I0312 18:46:11.829833 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=11.771382009 podStartE2EDuration="33.829813594s" podCreationTimestamp="2026-03-12 18:45:38 +0000 UTC" firstStartedPulling="2026-03-12 18:45:48.874579453 +0000 UTC m=+988.428559550" lastFinishedPulling="2026-03-12 18:46:10.933011018 +0000 UTC m=+1010.486991135" observedRunningTime="2026-03-12 18:46:11.822482011 +0000 UTC m=+1011.376462108" watchObservedRunningTime="2026-03-12 18:46:11.829813594 +0000 UTC m=+1011.383793681" Mar 12 18:46:11.875744 master-0 kubenswrapper[29097]: I0312 18:46:11.875033 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=9.442478803 podStartE2EDuration="31.875012472s" podCreationTimestamp="2026-03-12 18:45:40 +0000 UTC" firstStartedPulling="2026-03-12 18:45:48.520256723 +0000 UTC m=+988.074236820" lastFinishedPulling="2026-03-12 18:46:10.952790392 +0000 UTC m=+1010.506770489" observedRunningTime="2026-03-12 18:46:11.869636127 +0000 UTC m=+1011.423616234" watchObservedRunningTime="2026-03-12 18:46:11.875012472 +0000 UTC m=+1011.428992569" Mar 12 18:46:11.935449 master-0 kubenswrapper[29097]: I0312 18:46:11.935349 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-jjwrg" podStartSLOduration=2.162312842 podStartE2EDuration="9.935325336s" podCreationTimestamp="2026-03-12 18:46:02 +0000 UTC" firstStartedPulling="2026-03-12 18:46:03.219118799 +0000 UTC m=+1002.773098896" lastFinishedPulling="2026-03-12 18:46:10.992131303 +0000 UTC m=+1010.546111390" observedRunningTime="2026-03-12 18:46:11.930627539 +0000 UTC m=+1011.484607656" 
watchObservedRunningTime="2026-03-12 18:46:11.935325336 +0000 UTC m=+1011.489305433" Mar 12 18:46:12.003690 master-0 kubenswrapper[29097]: I0312 18:46:12.003620 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 12 18:46:12.003690 master-0 kubenswrapper[29097]: I0312 18:46:12.003669 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 12 18:46:12.065047 master-0 kubenswrapper[29097]: I0312 18:46:12.064995 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 12 18:46:12.802116 master-0 kubenswrapper[29097]: I0312 18:46:12.800241 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"89da455f-45f0-4844-9a54-1ad46fe41d43","Type":"ContainerStarted","Data":"2834d2069438e4287d3301f589f365476a3ef8a55986fdde9746af029acff0f3"} Mar 12 18:46:12.803050 master-0 kubenswrapper[29097]: I0312 18:46:12.802967 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7ec3f557-015f-4fc3-b6cd-9d7f0f976e32","Type":"ContainerStarted","Data":"706d7e2f46f839abdf46694860403d3ced83da468d7f8fee843d1391f749cc99"} Mar 12 18:46:12.846967 master-0 kubenswrapper[29097]: I0312 18:46:12.846832 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=28.035716311 podStartE2EDuration="41.846776727s" podCreationTimestamp="2026-03-12 18:45:31 +0000 UTC" firstStartedPulling="2026-03-12 18:45:48.376954317 +0000 UTC m=+987.930934414" lastFinishedPulling="2026-03-12 18:46:02.188014743 +0000 UTC m=+1001.741994830" observedRunningTime="2026-03-12 18:46:12.832031269 +0000 UTC m=+1012.386011446" watchObservedRunningTime="2026-03-12 18:46:12.846776727 +0000 UTC m=+1012.400756864" Mar 12 18:46:12.866281 master-0 kubenswrapper[29097]: I0312 18:46:12.866172 29097 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=26.809746404 podStartE2EDuration="40.866152421s" podCreationTimestamp="2026-03-12 18:45:32 +0000 UTC" firstStartedPulling="2026-03-12 18:45:48.369289566 +0000 UTC m=+987.923269663" lastFinishedPulling="2026-03-12 18:46:02.425695583 +0000 UTC m=+1001.979675680" observedRunningTime="2026-03-12 18:46:12.856231603 +0000 UTC m=+1012.410211690" watchObservedRunningTime="2026-03-12 18:46:12.866152421 +0000 UTC m=+1012.420132518" Mar 12 18:46:12.877082 master-0 kubenswrapper[29097]: I0312 18:46:12.877003 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 12 18:46:13.127685 master-0 kubenswrapper[29097]: I0312 18:46:13.124741 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76f498f559-tw8sw" Mar 12 18:46:13.248578 master-0 kubenswrapper[29097]: I0312 18:46:13.229568 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-f289q"] Mar 12 18:46:13.248578 master-0 kubenswrapper[29097]: I0312 18:46:13.229778 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ff8fd9d5c-f289q" podUID="6ceeba9c-e67f-49da-9b94-4359cfcd448e" containerName="dnsmasq-dns" containerID="cri-o://b19211f40b464894e15be63a8bba3892892832c1f2a504e3bec649306b2d21f5" gracePeriod=10 Mar 12 18:46:13.813712 master-0 kubenswrapper[29097]: I0312 18:46:13.813533 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-f289q" event={"ID":"6ceeba9c-e67f-49da-9b94-4359cfcd448e","Type":"ContainerDied","Data":"b19211f40b464894e15be63a8bba3892892832c1f2a504e3bec649306b2d21f5"} Mar 12 18:46:13.814177 master-0 kubenswrapper[29097]: I0312 18:46:13.814089 29097 generic.go:334] "Generic (PLEG): container finished" podID="6ceeba9c-e67f-49da-9b94-4359cfcd448e" 
containerID="b19211f40b464894e15be63a8bba3892892832c1f2a504e3bec649306b2d21f5" exitCode=0 Mar 12 18:46:13.814217 master-0 kubenswrapper[29097]: I0312 18:46:13.814180 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-f289q" event={"ID":"6ceeba9c-e67f-49da-9b94-4359cfcd448e","Type":"ContainerDied","Data":"de6dbfa664d77581d275eec398cf25477503e269b7302c717d3a0741d0b4a1d8"} Mar 12 18:46:13.814256 master-0 kubenswrapper[29097]: I0312 18:46:13.814219 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de6dbfa664d77581d275eec398cf25477503e269b7302c717d3a0741d0b4a1d8" Mar 12 18:46:13.814885 master-0 kubenswrapper[29097]: I0312 18:46:13.814865 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-f289q" Mar 12 18:46:13.943937 master-0 kubenswrapper[29097]: I0312 18:46:13.943276 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ceeba9c-e67f-49da-9b94-4359cfcd448e-dns-svc\") pod \"6ceeba9c-e67f-49da-9b94-4359cfcd448e\" (UID: \"6ceeba9c-e67f-49da-9b94-4359cfcd448e\") " Mar 12 18:46:13.943937 master-0 kubenswrapper[29097]: I0312 18:46:13.943321 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ceeba9c-e67f-49da-9b94-4359cfcd448e-config\") pod \"6ceeba9c-e67f-49da-9b94-4359cfcd448e\" (UID: \"6ceeba9c-e67f-49da-9b94-4359cfcd448e\") " Mar 12 18:46:13.943937 master-0 kubenswrapper[29097]: I0312 18:46:13.943378 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfj86\" (UniqueName: \"kubernetes.io/projected/6ceeba9c-e67f-49da-9b94-4359cfcd448e-kube-api-access-cfj86\") pod \"6ceeba9c-e67f-49da-9b94-4359cfcd448e\" (UID: \"6ceeba9c-e67f-49da-9b94-4359cfcd448e\") " Mar 12 18:46:13.958549 master-0 kubenswrapper[29097]: I0312 
18:46:13.957887 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ceeba9c-e67f-49da-9b94-4359cfcd448e-kube-api-access-cfj86" (OuterVolumeSpecName: "kube-api-access-cfj86") pod "6ceeba9c-e67f-49da-9b94-4359cfcd448e" (UID: "6ceeba9c-e67f-49da-9b94-4359cfcd448e"). InnerVolumeSpecName "kube-api-access-cfj86". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:46:13.982417 master-0 kubenswrapper[29097]: I0312 18:46:13.982358 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ceeba9c-e67f-49da-9b94-4359cfcd448e-config" (OuterVolumeSpecName: "config") pod "6ceeba9c-e67f-49da-9b94-4359cfcd448e" (UID: "6ceeba9c-e67f-49da-9b94-4359cfcd448e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:46:13.985443 master-0 kubenswrapper[29097]: I0312 18:46:13.985266 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ceeba9c-e67f-49da-9b94-4359cfcd448e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6ceeba9c-e67f-49da-9b94-4359cfcd448e" (UID: "6ceeba9c-e67f-49da-9b94-4359cfcd448e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:46:14.054582 master-0 kubenswrapper[29097]: I0312 18:46:14.045946 29097 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ceeba9c-e67f-49da-9b94-4359cfcd448e-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:14.054997 master-0 kubenswrapper[29097]: I0312 18:46:14.054955 29097 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ceeba9c-e67f-49da-9b94-4359cfcd448e-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:14.055093 master-0 kubenswrapper[29097]: I0312 18:46:14.055075 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfj86\" (UniqueName: \"kubernetes.io/projected/6ceeba9c-e67f-49da-9b94-4359cfcd448e-kube-api-access-cfj86\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:14.359187 master-0 kubenswrapper[29097]: I0312 18:46:14.358092 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 12 18:46:14.359187 master-0 kubenswrapper[29097]: I0312 18:46:14.358133 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 12 18:46:14.400546 master-0 kubenswrapper[29097]: I0312 18:46:14.400488 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 12 18:46:14.823709 master-0 kubenswrapper[29097]: I0312 18:46:14.822826 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-f289q" Mar 12 18:46:14.862654 master-0 kubenswrapper[29097]: I0312 18:46:14.857782 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-f289q"] Mar 12 18:46:14.867367 master-0 kubenswrapper[29097]: I0312 18:46:14.867296 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-f289q"] Mar 12 18:46:14.880322 master-0 kubenswrapper[29097]: I0312 18:46:14.879867 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 12 18:46:15.099652 master-0 kubenswrapper[29097]: I0312 18:46:15.099597 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 12 18:46:15.100110 master-0 kubenswrapper[29097]: E0312 18:46:15.100091 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ceeba9c-e67f-49da-9b94-4359cfcd448e" containerName="init" Mar 12 18:46:15.100110 master-0 kubenswrapper[29097]: I0312 18:46:15.100109 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ceeba9c-e67f-49da-9b94-4359cfcd448e" containerName="init" Mar 12 18:46:15.100196 master-0 kubenswrapper[29097]: E0312 18:46:15.100131 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ceeba9c-e67f-49da-9b94-4359cfcd448e" containerName="dnsmasq-dns" Mar 12 18:46:15.100196 master-0 kubenswrapper[29097]: I0312 18:46:15.100138 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ceeba9c-e67f-49da-9b94-4359cfcd448e" containerName="dnsmasq-dns" Mar 12 18:46:15.100750 master-0 kubenswrapper[29097]: I0312 18:46:15.100423 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ceeba9c-e67f-49da-9b94-4359cfcd448e" containerName="dnsmasq-dns" Mar 12 18:46:15.101471 master-0 kubenswrapper[29097]: I0312 18:46:15.101450 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 12 18:46:15.103914 master-0 kubenswrapper[29097]: I0312 18:46:15.103831 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 12 18:46:15.104105 master-0 kubenswrapper[29097]: I0312 18:46:15.104071 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 12 18:46:15.104273 master-0 kubenswrapper[29097]: I0312 18:46:15.104255 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 12 18:46:15.139454 master-0 kubenswrapper[29097]: I0312 18:46:15.139325 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 12 18:46:15.180424 master-0 kubenswrapper[29097]: I0312 18:46:15.180161 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65dfbca3-130d-4d5a-bc07-4262fc4b4e50-scripts\") pod \"ovn-northd-0\" (UID: \"65dfbca3-130d-4d5a-bc07-4262fc4b4e50\") " pod="openstack/ovn-northd-0" Mar 12 18:46:15.180424 master-0 kubenswrapper[29097]: I0312 18:46:15.180204 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65dfbca3-130d-4d5a-bc07-4262fc4b4e50-config\") pod \"ovn-northd-0\" (UID: \"65dfbca3-130d-4d5a-bc07-4262fc4b4e50\") " pod="openstack/ovn-northd-0" Mar 12 18:46:15.180424 master-0 kubenswrapper[29097]: I0312 18:46:15.180251 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/65dfbca3-130d-4d5a-bc07-4262fc4b4e50-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"65dfbca3-130d-4d5a-bc07-4262fc4b4e50\") " pod="openstack/ovn-northd-0" Mar 12 18:46:15.180424 master-0 kubenswrapper[29097]: I0312 18:46:15.180281 29097 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/65dfbca3-130d-4d5a-bc07-4262fc4b4e50-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"65dfbca3-130d-4d5a-bc07-4262fc4b4e50\") " pod="openstack/ovn-northd-0" Mar 12 18:46:15.180424 master-0 kubenswrapper[29097]: I0312 18:46:15.180319 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqgnj\" (UniqueName: \"kubernetes.io/projected/65dfbca3-130d-4d5a-bc07-4262fc4b4e50-kube-api-access-wqgnj\") pod \"ovn-northd-0\" (UID: \"65dfbca3-130d-4d5a-bc07-4262fc4b4e50\") " pod="openstack/ovn-northd-0" Mar 12 18:46:15.180424 master-0 kubenswrapper[29097]: I0312 18:46:15.180382 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/65dfbca3-130d-4d5a-bc07-4262fc4b4e50-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"65dfbca3-130d-4d5a-bc07-4262fc4b4e50\") " pod="openstack/ovn-northd-0" Mar 12 18:46:15.180424 master-0 kubenswrapper[29097]: I0312 18:46:15.180410 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65dfbca3-130d-4d5a-bc07-4262fc4b4e50-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"65dfbca3-130d-4d5a-bc07-4262fc4b4e50\") " pod="openstack/ovn-northd-0" Mar 12 18:46:15.282267 master-0 kubenswrapper[29097]: I0312 18:46:15.282204 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65dfbca3-130d-4d5a-bc07-4262fc4b4e50-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"65dfbca3-130d-4d5a-bc07-4262fc4b4e50\") " pod="openstack/ovn-northd-0" Mar 12 18:46:15.282466 master-0 kubenswrapper[29097]: I0312 18:46:15.282294 29097 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65dfbca3-130d-4d5a-bc07-4262fc4b4e50-scripts\") pod \"ovn-northd-0\" (UID: \"65dfbca3-130d-4d5a-bc07-4262fc4b4e50\") " pod="openstack/ovn-northd-0" Mar 12 18:46:15.282466 master-0 kubenswrapper[29097]: I0312 18:46:15.282316 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65dfbca3-130d-4d5a-bc07-4262fc4b4e50-config\") pod \"ovn-northd-0\" (UID: \"65dfbca3-130d-4d5a-bc07-4262fc4b4e50\") " pod="openstack/ovn-northd-0" Mar 12 18:46:15.282466 master-0 kubenswrapper[29097]: I0312 18:46:15.282350 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/65dfbca3-130d-4d5a-bc07-4262fc4b4e50-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"65dfbca3-130d-4d5a-bc07-4262fc4b4e50\") " pod="openstack/ovn-northd-0" Mar 12 18:46:15.282466 master-0 kubenswrapper[29097]: I0312 18:46:15.282380 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/65dfbca3-130d-4d5a-bc07-4262fc4b4e50-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"65dfbca3-130d-4d5a-bc07-4262fc4b4e50\") " pod="openstack/ovn-northd-0" Mar 12 18:46:15.282466 master-0 kubenswrapper[29097]: I0312 18:46:15.282418 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqgnj\" (UniqueName: \"kubernetes.io/projected/65dfbca3-130d-4d5a-bc07-4262fc4b4e50-kube-api-access-wqgnj\") pod \"ovn-northd-0\" (UID: \"65dfbca3-130d-4d5a-bc07-4262fc4b4e50\") " pod="openstack/ovn-northd-0" Mar 12 18:46:15.282660 master-0 kubenswrapper[29097]: I0312 18:46:15.282482 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/65dfbca3-130d-4d5a-bc07-4262fc4b4e50-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"65dfbca3-130d-4d5a-bc07-4262fc4b4e50\") " pod="openstack/ovn-northd-0" Mar 12 18:46:15.283184 master-0 kubenswrapper[29097]: I0312 18:46:15.283156 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65dfbca3-130d-4d5a-bc07-4262fc4b4e50-scripts\") pod \"ovn-northd-0\" (UID: \"65dfbca3-130d-4d5a-bc07-4262fc4b4e50\") " pod="openstack/ovn-northd-0" Mar 12 18:46:15.283362 master-0 kubenswrapper[29097]: I0312 18:46:15.283342 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/65dfbca3-130d-4d5a-bc07-4262fc4b4e50-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"65dfbca3-130d-4d5a-bc07-4262fc4b4e50\") " pod="openstack/ovn-northd-0" Mar 12 18:46:15.284501 master-0 kubenswrapper[29097]: I0312 18:46:15.283624 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65dfbca3-130d-4d5a-bc07-4262fc4b4e50-config\") pod \"ovn-northd-0\" (UID: \"65dfbca3-130d-4d5a-bc07-4262fc4b4e50\") " pod="openstack/ovn-northd-0" Mar 12 18:46:15.285452 master-0 kubenswrapper[29097]: I0312 18:46:15.285431 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65dfbca3-130d-4d5a-bc07-4262fc4b4e50-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"65dfbca3-130d-4d5a-bc07-4262fc4b4e50\") " pod="openstack/ovn-northd-0" Mar 12 18:46:15.289179 master-0 kubenswrapper[29097]: I0312 18:46:15.287868 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/65dfbca3-130d-4d5a-bc07-4262fc4b4e50-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"65dfbca3-130d-4d5a-bc07-4262fc4b4e50\") " pod="openstack/ovn-northd-0" Mar 12 18:46:15.289812 master-0 
kubenswrapper[29097]: I0312 18:46:15.289792 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/65dfbca3-130d-4d5a-bc07-4262fc4b4e50-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"65dfbca3-130d-4d5a-bc07-4262fc4b4e50\") " pod="openstack/ovn-northd-0" Mar 12 18:46:15.299020 master-0 kubenswrapper[29097]: I0312 18:46:15.298984 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqgnj\" (UniqueName: \"kubernetes.io/projected/65dfbca3-130d-4d5a-bc07-4262fc4b4e50-kube-api-access-wqgnj\") pod \"ovn-northd-0\" (UID: \"65dfbca3-130d-4d5a-bc07-4262fc4b4e50\") " pod="openstack/ovn-northd-0" Mar 12 18:46:15.449533 master-0 kubenswrapper[29097]: I0312 18:46:15.449397 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 12 18:46:15.946488 master-0 kubenswrapper[29097]: I0312 18:46:15.946421 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-pqwwh"] Mar 12 18:46:15.950169 master-0 kubenswrapper[29097]: I0312 18:46:15.950098 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" Mar 12 18:46:16.000626 master-0 kubenswrapper[29097]: I0312 18:46:15.999957 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf8b865dc-pqwwh\" (UID: \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\") " pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" Mar 12 18:46:16.000626 master-0 kubenswrapper[29097]: I0312 18:46:16.000039 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-dns-svc\") pod \"dnsmasq-dns-5bf8b865dc-pqwwh\" (UID: \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\") " pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" Mar 12 18:46:16.000626 master-0 kubenswrapper[29097]: I0312 18:46:16.000068 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-config\") pod \"dnsmasq-dns-5bf8b865dc-pqwwh\" (UID: \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\") " pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" Mar 12 18:46:16.000626 master-0 kubenswrapper[29097]: I0312 18:46:16.000177 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njhws\" (UniqueName: \"kubernetes.io/projected/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-kube-api-access-njhws\") pod \"dnsmasq-dns-5bf8b865dc-pqwwh\" (UID: \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\") " pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" Mar 12 18:46:16.000626 master-0 kubenswrapper[29097]: I0312 18:46:16.000202 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-ovsdbserver-sb\") pod \"dnsmasq-dns-5bf8b865dc-pqwwh\" (UID: \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\") " pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" Mar 12 18:46:16.051569 master-0 kubenswrapper[29097]: I0312 18:46:16.047693 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 12 18:46:16.080642 master-0 kubenswrapper[29097]: I0312 18:46:16.072784 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-pqwwh"] Mar 12 18:46:16.102099 master-0 kubenswrapper[29097]: I0312 18:46:16.101553 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njhws\" (UniqueName: \"kubernetes.io/projected/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-kube-api-access-njhws\") pod \"dnsmasq-dns-5bf8b865dc-pqwwh\" (UID: \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\") " pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" Mar 12 18:46:16.102099 master-0 kubenswrapper[29097]: I0312 18:46:16.101606 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-ovsdbserver-sb\") pod \"dnsmasq-dns-5bf8b865dc-pqwwh\" (UID: \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\") " pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" Mar 12 18:46:16.102099 master-0 kubenswrapper[29097]: I0312 18:46:16.101697 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf8b865dc-pqwwh\" (UID: \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\") " pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" Mar 12 18:46:16.102099 master-0 kubenswrapper[29097]: I0312 18:46:16.101743 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-dns-svc\") pod \"dnsmasq-dns-5bf8b865dc-pqwwh\" (UID: \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\") " pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" Mar 12 18:46:16.102099 master-0 kubenswrapper[29097]: I0312 18:46:16.101767 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-config\") pod \"dnsmasq-dns-5bf8b865dc-pqwwh\" (UID: \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\") " pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" Mar 12 18:46:16.103690 master-0 kubenswrapper[29097]: I0312 18:46:16.102625 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-config\") pod \"dnsmasq-dns-5bf8b865dc-pqwwh\" (UID: \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\") " pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" Mar 12 18:46:16.103690 master-0 kubenswrapper[29097]: I0312 18:46:16.103190 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf8b865dc-pqwwh\" (UID: \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\") " pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" Mar 12 18:46:16.103690 master-0 kubenswrapper[29097]: I0312 18:46:16.103553 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-ovsdbserver-sb\") pod \"dnsmasq-dns-5bf8b865dc-pqwwh\" (UID: \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\") " pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" Mar 12 18:46:16.103899 master-0 kubenswrapper[29097]: I0312 18:46:16.103885 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-dns-svc\") pod \"dnsmasq-dns-5bf8b865dc-pqwwh\" (UID: \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\") " pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" Mar 12 18:46:16.120452 master-0 kubenswrapper[29097]: I0312 18:46:16.120404 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njhws\" (UniqueName: \"kubernetes.io/projected/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-kube-api-access-njhws\") pod \"dnsmasq-dns-5bf8b865dc-pqwwh\" (UID: \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\") " pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" Mar 12 18:46:16.358335 master-0 kubenswrapper[29097]: I0312 18:46:16.358230 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" Mar 12 18:46:16.748239 master-0 kubenswrapper[29097]: I0312 18:46:16.748094 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ceeba9c-e67f-49da-9b94-4359cfcd448e" path="/var/lib/kubelet/pods/6ceeba9c-e67f-49da-9b94-4359cfcd448e/volumes" Mar 12 18:46:16.902548 master-0 kubenswrapper[29097]: I0312 18:46:16.890187 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 12 18:46:16.902548 master-0 kubenswrapper[29097]: I0312 18:46:16.890227 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 12 18:46:16.921542 master-0 kubenswrapper[29097]: I0312 18:46:16.921458 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"65dfbca3-130d-4d5a-bc07-4262fc4b4e50","Type":"ContainerStarted","Data":"7639a08d49d8674d53f112a469947a7b767f9a0b2c65386de4b4e630220d702f"} Mar 12 18:46:16.977926 master-0 kubenswrapper[29097]: E0312 18:46:16.977788 29097 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:39312->192.168.32.10:35109: write tcp 
192.168.32.10:39312->192.168.32.10:35109: write: broken pipe Mar 12 18:46:16.989759 master-0 kubenswrapper[29097]: I0312 18:46:16.989658 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-pqwwh"] Mar 12 18:46:17.911100 master-0 kubenswrapper[29097]: I0312 18:46:17.910956 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 12 18:46:17.911100 master-0 kubenswrapper[29097]: I0312 18:46:17.911023 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 12 18:46:17.932820 master-0 kubenswrapper[29097]: I0312 18:46:17.932700 29097 generic.go:334] "Generic (PLEG): container finished" podID="fb76cb7f-6d8a-4ecd-8580-2f06202826f4" containerID="cf883aa6f72afeda64766364560c9bc259e3ee260ec73f15ae534957adf1ea8a" exitCode=0 Mar 12 18:46:17.933210 master-0 kubenswrapper[29097]: I0312 18:46:17.932812 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" event={"ID":"fb76cb7f-6d8a-4ecd-8580-2f06202826f4","Type":"ContainerDied","Data":"cf883aa6f72afeda64766364560c9bc259e3ee260ec73f15ae534957adf1ea8a"} Mar 12 18:46:17.933275 master-0 kubenswrapper[29097]: I0312 18:46:17.933216 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" event={"ID":"fb76cb7f-6d8a-4ecd-8580-2f06202826f4","Type":"ContainerStarted","Data":"887fd363b5741a8687b7271899dfa2cab988232351994529981d8eaf49ceb969"} Mar 12 18:46:17.937121 master-0 kubenswrapper[29097]: I0312 18:46:17.936793 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"65dfbca3-130d-4d5a-bc07-4262fc4b4e50","Type":"ContainerStarted","Data":"41a8165060fe7a261482ffbacc4746c193c507c8036b4883c47fccf89b6bb04e"} Mar 12 18:46:18.030754 master-0 kubenswrapper[29097]: I0312 18:46:18.027665 29097 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/swift-storage-0"] Mar 12 18:46:18.074865 master-0 kubenswrapper[29097]: I0312 18:46:18.052207 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 12 18:46:18.074865 master-0 kubenswrapper[29097]: I0312 18:46:18.069181 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 12 18:46:18.074865 master-0 kubenswrapper[29097]: I0312 18:46:18.069409 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 12 18:46:18.074865 master-0 kubenswrapper[29097]: I0312 18:46:18.069662 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 12 18:46:18.079467 master-0 kubenswrapper[29097]: I0312 18:46:18.079407 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 12 18:46:18.183036 master-0 kubenswrapper[29097]: I0312 18:46:18.182992 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c46d895a-8f64-48ab-8ed9-ea581e79f266\" (UniqueName: \"kubernetes.io/csi/topolvm.io^bb84011e-02e4-41a7-b382-87c54866427c\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:18.183126 master-0 kubenswrapper[29097]: I0312 18:46:18.183056 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066d07a5-82a7-49a5-b345-203a1ee212f0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:18.183126 master-0 kubenswrapper[29097]: I0312 18:46:18.183086 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnf6l\" (UniqueName: 
\"kubernetes.io/projected/066d07a5-82a7-49a5-b345-203a1ee212f0-kube-api-access-nnf6l\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:18.183126 master-0 kubenswrapper[29097]: I0312 18:46:18.183115 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/066d07a5-82a7-49a5-b345-203a1ee212f0-etc-swift\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:18.183425 master-0 kubenswrapper[29097]: I0312 18:46:18.183362 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/066d07a5-82a7-49a5-b345-203a1ee212f0-cache\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:18.183779 master-0 kubenswrapper[29097]: I0312 18:46:18.183725 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/066d07a5-82a7-49a5-b345-203a1ee212f0-lock\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:18.285564 master-0 kubenswrapper[29097]: I0312 18:46:18.285483 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c46d895a-8f64-48ab-8ed9-ea581e79f266\" (UniqueName: \"kubernetes.io/csi/topolvm.io^bb84011e-02e4-41a7-b382-87c54866427c\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:18.285763 master-0 kubenswrapper[29097]: I0312 18:46:18.285732 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/066d07a5-82a7-49a5-b345-203a1ee212f0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:18.285822 master-0 kubenswrapper[29097]: I0312 18:46:18.285793 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnf6l\" (UniqueName: \"kubernetes.io/projected/066d07a5-82a7-49a5-b345-203a1ee212f0-kube-api-access-nnf6l\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:18.285891 master-0 kubenswrapper[29097]: I0312 18:46:18.285845 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/066d07a5-82a7-49a5-b345-203a1ee212f0-etc-swift\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:18.285972 master-0 kubenswrapper[29097]: I0312 18:46:18.285941 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/066d07a5-82a7-49a5-b345-203a1ee212f0-cache\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:18.286451 master-0 kubenswrapper[29097]: I0312 18:46:18.286156 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/066d07a5-82a7-49a5-b345-203a1ee212f0-lock\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:18.286451 master-0 kubenswrapper[29097]: E0312 18:46:18.286161 29097 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 18:46:18.286451 master-0 kubenswrapper[29097]: E0312 18:46:18.286263 29097 projected.go:194] Error preparing data for projected volume 
etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 18:46:18.286451 master-0 kubenswrapper[29097]: E0312 18:46:18.286325 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/066d07a5-82a7-49a5-b345-203a1ee212f0-etc-swift podName:066d07a5-82a7-49a5-b345-203a1ee212f0 nodeName:}" failed. No retries permitted until 2026-03-12 18:46:18.786302612 +0000 UTC m=+1018.340282709 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/066d07a5-82a7-49a5-b345-203a1ee212f0-etc-swift") pod "swift-storage-0" (UID: "066d07a5-82a7-49a5-b345-203a1ee212f0") : configmap "swift-ring-files" not found Mar 12 18:46:18.286621 master-0 kubenswrapper[29097]: I0312 18:46:18.286595 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/066d07a5-82a7-49a5-b345-203a1ee212f0-cache\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:18.286773 master-0 kubenswrapper[29097]: I0312 18:46:18.286726 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/066d07a5-82a7-49a5-b345-203a1ee212f0-lock\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:18.289122 master-0 kubenswrapper[29097]: I0312 18:46:18.289087 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066d07a5-82a7-49a5-b345-203a1ee212f0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:18.294760 master-0 kubenswrapper[29097]: I0312 18:46:18.294737 29097 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Mar 12 18:46:18.294879 master-0 kubenswrapper[29097]: I0312 18:46:18.294861 29097 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c46d895a-8f64-48ab-8ed9-ea581e79f266\" (UniqueName: \"kubernetes.io/csi/topolvm.io^bb84011e-02e4-41a7-b382-87c54866427c\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/1452505fe589440931de8998ba334f4eb556ec06efda9f905f2be00d20e69b97/globalmount\"" pod="openstack/swift-storage-0" Mar 12 18:46:18.303594 master-0 kubenswrapper[29097]: I0312 18:46:18.303554 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnf6l\" (UniqueName: \"kubernetes.io/projected/066d07a5-82a7-49a5-b345-203a1ee212f0-kube-api-access-nnf6l\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:18.795823 master-0 kubenswrapper[29097]: I0312 18:46:18.795541 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/066d07a5-82a7-49a5-b345-203a1ee212f0-etc-swift\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:18.796034 master-0 kubenswrapper[29097]: E0312 18:46:18.795847 29097 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 18:46:18.796034 master-0 kubenswrapper[29097]: E0312 18:46:18.795903 29097 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 18:46:18.796034 master-0 kubenswrapper[29097]: E0312 18:46:18.795992 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/066d07a5-82a7-49a5-b345-203a1ee212f0-etc-swift podName:066d07a5-82a7-49a5-b345-203a1ee212f0 
nodeName:}" failed. No retries permitted until 2026-03-12 18:46:19.795963748 +0000 UTC m=+1019.349943885 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/066d07a5-82a7-49a5-b345-203a1ee212f0-etc-swift") pod "swift-storage-0" (UID: "066d07a5-82a7-49a5-b345-203a1ee212f0") : configmap "swift-ring-files" not found Mar 12 18:46:18.866218 master-0 kubenswrapper[29097]: I0312 18:46:18.866146 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-wk79g"] Mar 12 18:46:18.869063 master-0 kubenswrapper[29097]: I0312 18:46:18.868971 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:18.875278 master-0 kubenswrapper[29097]: I0312 18:46:18.874699 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 12 18:46:18.875278 master-0 kubenswrapper[29097]: I0312 18:46:18.874995 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 12 18:46:18.875797 master-0 kubenswrapper[29097]: I0312 18:46:18.875729 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 12 18:46:18.889124 master-0 kubenswrapper[29097]: I0312 18:46:18.888223 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wk79g"] Mar 12 18:46:18.948418 master-0 kubenswrapper[29097]: I0312 18:46:18.948361 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"65dfbca3-130d-4d5a-bc07-4262fc4b4e50","Type":"ContainerStarted","Data":"1be94bf415baa67a334431a43e9a99bc5cab39bd1ddabfd8b08839e5d6d8e633"} Mar 12 18:46:18.948674 master-0 kubenswrapper[29097]: I0312 18:46:18.948472 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 12 18:46:18.951817 master-0 
kubenswrapper[29097]: I0312 18:46:18.951791 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" event={"ID":"fb76cb7f-6d8a-4ecd-8580-2f06202826f4","Type":"ContainerStarted","Data":"e75c0e2252e75cebdd0ab117eacec869cb590b6eb419cb7a183ab58a57b63f8e"} Mar 12 18:46:18.952349 master-0 kubenswrapper[29097]: I0312 18:46:18.952311 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" Mar 12 18:46:18.995598 master-0 kubenswrapper[29097]: I0312 18:46:18.993253 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.356487033 podStartE2EDuration="3.993228s" podCreationTimestamp="2026-03-12 18:46:15 +0000 UTC" firstStartedPulling="2026-03-12 18:46:15.9871843 +0000 UTC m=+1015.541164397" lastFinishedPulling="2026-03-12 18:46:17.623925267 +0000 UTC m=+1017.177905364" observedRunningTime="2026-03-12 18:46:18.984893512 +0000 UTC m=+1018.538873609" watchObservedRunningTime="2026-03-12 18:46:18.993228 +0000 UTC m=+1018.547208107" Mar 12 18:46:19.005782 master-0 kubenswrapper[29097]: I0312 18:46:19.002758 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0bc3215a-a09f-49fe-a3f6-050665225137-etc-swift\") pod \"swift-ring-rebalance-wk79g\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:19.005782 master-0 kubenswrapper[29097]: I0312 18:46:19.002811 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bc3215a-a09f-49fe-a3f6-050665225137-scripts\") pod \"swift-ring-rebalance-wk79g\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:19.005782 master-0 kubenswrapper[29097]: I0312 18:46:19.002835 29097 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0bc3215a-a09f-49fe-a3f6-050665225137-swiftconf\") pod \"swift-ring-rebalance-wk79g\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:19.005782 master-0 kubenswrapper[29097]: I0312 18:46:19.002879 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc3215a-a09f-49fe-a3f6-050665225137-combined-ca-bundle\") pod \"swift-ring-rebalance-wk79g\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:19.005782 master-0 kubenswrapper[29097]: I0312 18:46:19.002954 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0bc3215a-a09f-49fe-a3f6-050665225137-ring-data-devices\") pod \"swift-ring-rebalance-wk79g\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:19.005782 master-0 kubenswrapper[29097]: I0312 18:46:19.003009 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0bc3215a-a09f-49fe-a3f6-050665225137-dispersionconf\") pod \"swift-ring-rebalance-wk79g\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:19.005782 master-0 kubenswrapper[29097]: I0312 18:46:19.003039 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zt58\" (UniqueName: \"kubernetes.io/projected/0bc3215a-a09f-49fe-a3f6-050665225137-kube-api-access-8zt58\") pod \"swift-ring-rebalance-wk79g\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " 
pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:19.015208 master-0 kubenswrapper[29097]: I0312 18:46:19.014955 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" podStartSLOduration=4.014936902 podStartE2EDuration="4.014936902s" podCreationTimestamp="2026-03-12 18:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:46:19.014443249 +0000 UTC m=+1018.568423366" watchObservedRunningTime="2026-03-12 18:46:19.014936902 +0000 UTC m=+1018.568916999" Mar 12 18:46:19.104965 master-0 kubenswrapper[29097]: I0312 18:46:19.104786 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0bc3215a-a09f-49fe-a3f6-050665225137-ring-data-devices\") pod \"swift-ring-rebalance-wk79g\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:19.105522 master-0 kubenswrapper[29097]: I0312 18:46:19.105480 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0bc3215a-a09f-49fe-a3f6-050665225137-ring-data-devices\") pod \"swift-ring-rebalance-wk79g\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:19.105638 master-0 kubenswrapper[29097]: I0312 18:46:19.105618 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0bc3215a-a09f-49fe-a3f6-050665225137-dispersionconf\") pod \"swift-ring-rebalance-wk79g\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:19.105775 master-0 kubenswrapper[29097]: I0312 18:46:19.105751 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8zt58\" (UniqueName: \"kubernetes.io/projected/0bc3215a-a09f-49fe-a3f6-050665225137-kube-api-access-8zt58\") pod \"swift-ring-rebalance-wk79g\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:19.106020 master-0 kubenswrapper[29097]: I0312 18:46:19.105983 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0bc3215a-a09f-49fe-a3f6-050665225137-etc-swift\") pod \"swift-ring-rebalance-wk79g\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:19.106072 master-0 kubenswrapper[29097]: I0312 18:46:19.106038 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bc3215a-a09f-49fe-a3f6-050665225137-scripts\") pod \"swift-ring-rebalance-wk79g\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:19.106156 master-0 kubenswrapper[29097]: I0312 18:46:19.106116 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0bc3215a-a09f-49fe-a3f6-050665225137-swiftconf\") pod \"swift-ring-rebalance-wk79g\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:19.106256 master-0 kubenswrapper[29097]: I0312 18:46:19.106232 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc3215a-a09f-49fe-a3f6-050665225137-combined-ca-bundle\") pod \"swift-ring-rebalance-wk79g\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:19.106727 master-0 kubenswrapper[29097]: I0312 18:46:19.106694 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/0bc3215a-a09f-49fe-a3f6-050665225137-etc-swift\") pod \"swift-ring-rebalance-wk79g\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:19.107218 master-0 kubenswrapper[29097]: I0312 18:46:19.107192 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bc3215a-a09f-49fe-a3f6-050665225137-scripts\") pod \"swift-ring-rebalance-wk79g\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:19.109617 master-0 kubenswrapper[29097]: I0312 18:46:19.109558 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0bc3215a-a09f-49fe-a3f6-050665225137-swiftconf\") pod \"swift-ring-rebalance-wk79g\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:19.121921 master-0 kubenswrapper[29097]: I0312 18:46:19.121886 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0bc3215a-a09f-49fe-a3f6-050665225137-dispersionconf\") pod \"swift-ring-rebalance-wk79g\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:19.122065 master-0 kubenswrapper[29097]: I0312 18:46:19.121946 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc3215a-a09f-49fe-a3f6-050665225137-combined-ca-bundle\") pod \"swift-ring-rebalance-wk79g\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:19.125104 master-0 kubenswrapper[29097]: I0312 18:46:19.125070 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zt58\" (UniqueName: 
\"kubernetes.io/projected/0bc3215a-a09f-49fe-a3f6-050665225137-kube-api-access-8zt58\") pod \"swift-ring-rebalance-wk79g\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:19.220185 master-0 kubenswrapper[29097]: I0312 18:46:19.220110 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:19.698960 master-0 kubenswrapper[29097]: I0312 18:46:19.698887 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wk79g"] Mar 12 18:46:19.777593 master-0 kubenswrapper[29097]: I0312 18:46:19.777510 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c46d895a-8f64-48ab-8ed9-ea581e79f266\" (UniqueName: \"kubernetes.io/csi/topolvm.io^bb84011e-02e4-41a7-b382-87c54866427c\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:19.824541 master-0 kubenswrapper[29097]: I0312 18:46:19.824449 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/066d07a5-82a7-49a5-b345-203a1ee212f0-etc-swift\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:19.825158 master-0 kubenswrapper[29097]: E0312 18:46:19.825075 29097 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 18:46:19.825158 master-0 kubenswrapper[29097]: E0312 18:46:19.825142 29097 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 18:46:19.825305 master-0 kubenswrapper[29097]: E0312 18:46:19.825247 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/066d07a5-82a7-49a5-b345-203a1ee212f0-etc-swift 
podName:066d07a5-82a7-49a5-b345-203a1ee212f0 nodeName:}" failed. No retries permitted until 2026-03-12 18:46:21.825214228 +0000 UTC m=+1021.379194365 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/066d07a5-82a7-49a5-b345-203a1ee212f0-etc-swift") pod "swift-storage-0" (UID: "066d07a5-82a7-49a5-b345-203a1ee212f0") : configmap "swift-ring-files" not found Mar 12 18:46:19.963797 master-0 kubenswrapper[29097]: I0312 18:46:19.963637 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wk79g" event={"ID":"0bc3215a-a09f-49fe-a3f6-050665225137","Type":"ContainerStarted","Data":"491f471c5783797f24697ad74de686f20ee8a8bb8fe50d81fb6332ddafa859ee"} Mar 12 18:46:20.255351 master-0 kubenswrapper[29097]: I0312 18:46:20.255215 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 12 18:46:20.344727 master-0 kubenswrapper[29097]: I0312 18:46:20.344638 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 12 18:46:21.050566 master-0 kubenswrapper[29097]: I0312 18:46:21.050497 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 12 18:46:21.139152 master-0 kubenswrapper[29097]: I0312 18:46:21.139104 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 12 18:46:21.261410 master-0 kubenswrapper[29097]: E0312 18:46:21.261365 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 
26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547 Mar 12 18:46:21.884971 master-0 kubenswrapper[29097]: I0312 18:46:21.884912 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/066d07a5-82a7-49a5-b345-203a1ee212f0-etc-swift\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:21.885413 master-0 kubenswrapper[29097]: E0312 18:46:21.885367 29097 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 18:46:21.885413 master-0 kubenswrapper[29097]: E0312 18:46:21.885402 29097 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 18:46:21.885504 master-0 kubenswrapper[29097]: E0312 18:46:21.885473 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/066d07a5-82a7-49a5-b345-203a1ee212f0-etc-swift podName:066d07a5-82a7-49a5-b345-203a1ee212f0 nodeName:}" failed. No retries permitted until 2026-03-12 18:46:25.88545345 +0000 UTC m=+1025.439433547 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/066d07a5-82a7-49a5-b345-203a1ee212f0-etc-swift") pod "swift-storage-0" (UID: "066d07a5-82a7-49a5-b345-203a1ee212f0") : configmap "swift-ring-files" not found Mar 12 18:46:21.927525 master-0 kubenswrapper[29097]: I0312 18:46:21.927467 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-pbnk9"] Mar 12 18:46:21.928718 master-0 kubenswrapper[29097]: I0312 18:46:21.928680 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pbnk9" Mar 12 18:46:21.930677 master-0 kubenswrapper[29097]: I0312 18:46:21.930633 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 12 18:46:21.960744 master-0 kubenswrapper[29097]: I0312 18:46:21.960666 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pbnk9"] Mar 12 18:46:21.988091 master-0 kubenswrapper[29097]: I0312 18:46:21.988035 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e15fa2fc-d6cd-46b4-b814-1f00c4004c49-operator-scripts\") pod \"root-account-create-update-pbnk9\" (UID: \"e15fa2fc-d6cd-46b4-b814-1f00c4004c49\") " pod="openstack/root-account-create-update-pbnk9" Mar 12 18:46:21.988311 master-0 kubenswrapper[29097]: I0312 18:46:21.988145 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c5db\" (UniqueName: \"kubernetes.io/projected/e15fa2fc-d6cd-46b4-b814-1f00c4004c49-kube-api-access-9c5db\") pod \"root-account-create-update-pbnk9\" (UID: \"e15fa2fc-d6cd-46b4-b814-1f00c4004c49\") " pod="openstack/root-account-create-update-pbnk9" Mar 12 18:46:22.089708 master-0 kubenswrapper[29097]: I0312 18:46:22.089647 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e15fa2fc-d6cd-46b4-b814-1f00c4004c49-operator-scripts\") pod \"root-account-create-update-pbnk9\" (UID: \"e15fa2fc-d6cd-46b4-b814-1f00c4004c49\") " pod="openstack/root-account-create-update-pbnk9" Mar 12 18:46:22.089936 master-0 kubenswrapper[29097]: I0312 18:46:22.089786 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c5db\" (UniqueName: 
\"kubernetes.io/projected/e15fa2fc-d6cd-46b4-b814-1f00c4004c49-kube-api-access-9c5db\") pod \"root-account-create-update-pbnk9\" (UID: \"e15fa2fc-d6cd-46b4-b814-1f00c4004c49\") " pod="openstack/root-account-create-update-pbnk9" Mar 12 18:46:22.092235 master-0 kubenswrapper[29097]: I0312 18:46:22.092204 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e15fa2fc-d6cd-46b4-b814-1f00c4004c49-operator-scripts\") pod \"root-account-create-update-pbnk9\" (UID: \"e15fa2fc-d6cd-46b4-b814-1f00c4004c49\") " pod="openstack/root-account-create-update-pbnk9" Mar 12 18:46:22.115277 master-0 kubenswrapper[29097]: I0312 18:46:22.115211 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c5db\" (UniqueName: \"kubernetes.io/projected/e15fa2fc-d6cd-46b4-b814-1f00c4004c49-kube-api-access-9c5db\") pod \"root-account-create-update-pbnk9\" (UID: \"e15fa2fc-d6cd-46b4-b814-1f00c4004c49\") " pod="openstack/root-account-create-update-pbnk9" Mar 12 18:46:22.282131 master-0 kubenswrapper[29097]: I0312 18:46:22.282016 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pbnk9" Mar 12 18:46:24.542922 master-0 kubenswrapper[29097]: I0312 18:46:24.542837 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-psnrd"] Mar 12 18:46:24.546758 master-0 kubenswrapper[29097]: I0312 18:46:24.544078 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-psnrd" Mar 12 18:46:24.575896 master-0 kubenswrapper[29097]: I0312 18:46:24.575850 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-psnrd"] Mar 12 18:46:24.663149 master-0 kubenswrapper[29097]: I0312 18:46:24.663090 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-409f-account-create-update-8s7vs"] Mar 12 18:46:24.665252 master-0 kubenswrapper[29097]: I0312 18:46:24.664366 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-409f-account-create-update-8s7vs" Mar 12 18:46:24.668329 master-0 kubenswrapper[29097]: I0312 18:46:24.668261 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/054314ce-7598-4127-bde5-a98ceeeae7f5-operator-scripts\") pod \"keystone-db-create-psnrd\" (UID: \"054314ce-7598-4127-bde5-a98ceeeae7f5\") " pod="openstack/keystone-db-create-psnrd" Mar 12 18:46:24.668587 master-0 kubenswrapper[29097]: I0312 18:46:24.668566 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmrzn\" (UniqueName: \"kubernetes.io/projected/054314ce-7598-4127-bde5-a98ceeeae7f5-kube-api-access-dmrzn\") pod \"keystone-db-create-psnrd\" (UID: \"054314ce-7598-4127-bde5-a98ceeeae7f5\") " pod="openstack/keystone-db-create-psnrd" Mar 12 18:46:24.669693 master-0 kubenswrapper[29097]: I0312 18:46:24.668739 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqnf8\" (UniqueName: \"kubernetes.io/projected/90e68749-42ba-42d0-8ead-4517f6aae601-kube-api-access-qqnf8\") pod \"keystone-409f-account-create-update-8s7vs\" (UID: \"90e68749-42ba-42d0-8ead-4517f6aae601\") " pod="openstack/keystone-409f-account-create-update-8s7vs" Mar 12 18:46:24.670035 master-0 
kubenswrapper[29097]: I0312 18:46:24.670013 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90e68749-42ba-42d0-8ead-4517f6aae601-operator-scripts\") pod \"keystone-409f-account-create-update-8s7vs\" (UID: \"90e68749-42ba-42d0-8ead-4517f6aae601\") " pod="openstack/keystone-409f-account-create-update-8s7vs" Mar 12 18:46:24.692406 master-0 kubenswrapper[29097]: I0312 18:46:24.692230 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 12 18:46:24.696635 master-0 kubenswrapper[29097]: I0312 18:46:24.696260 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-409f-account-create-update-8s7vs"] Mar 12 18:46:24.771381 master-0 kubenswrapper[29097]: I0312 18:46:24.771207 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/054314ce-7598-4127-bde5-a98ceeeae7f5-operator-scripts\") pod \"keystone-db-create-psnrd\" (UID: \"054314ce-7598-4127-bde5-a98ceeeae7f5\") " pod="openstack/keystone-db-create-psnrd" Mar 12 18:46:24.771381 master-0 kubenswrapper[29097]: I0312 18:46:24.771276 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmrzn\" (UniqueName: \"kubernetes.io/projected/054314ce-7598-4127-bde5-a98ceeeae7f5-kube-api-access-dmrzn\") pod \"keystone-db-create-psnrd\" (UID: \"054314ce-7598-4127-bde5-a98ceeeae7f5\") " pod="openstack/keystone-db-create-psnrd" Mar 12 18:46:24.771381 master-0 kubenswrapper[29097]: I0312 18:46:24.771329 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqnf8\" (UniqueName: \"kubernetes.io/projected/90e68749-42ba-42d0-8ead-4517f6aae601-kube-api-access-qqnf8\") pod \"keystone-409f-account-create-update-8s7vs\" (UID: \"90e68749-42ba-42d0-8ead-4517f6aae601\") " 
pod="openstack/keystone-409f-account-create-update-8s7vs" Mar 12 18:46:24.771381 master-0 kubenswrapper[29097]: I0312 18:46:24.771360 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90e68749-42ba-42d0-8ead-4517f6aae601-operator-scripts\") pod \"keystone-409f-account-create-update-8s7vs\" (UID: \"90e68749-42ba-42d0-8ead-4517f6aae601\") " pod="openstack/keystone-409f-account-create-update-8s7vs" Mar 12 18:46:24.772259 master-0 kubenswrapper[29097]: I0312 18:46:24.772123 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90e68749-42ba-42d0-8ead-4517f6aae601-operator-scripts\") pod \"keystone-409f-account-create-update-8s7vs\" (UID: \"90e68749-42ba-42d0-8ead-4517f6aae601\") " pod="openstack/keystone-409f-account-create-update-8s7vs" Mar 12 18:46:24.774034 master-0 kubenswrapper[29097]: I0312 18:46:24.773214 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/054314ce-7598-4127-bde5-a98ceeeae7f5-operator-scripts\") pod \"keystone-db-create-psnrd\" (UID: \"054314ce-7598-4127-bde5-a98ceeeae7f5\") " pod="openstack/keystone-db-create-psnrd" Mar 12 18:46:24.792374 master-0 kubenswrapper[29097]: I0312 18:46:24.792332 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmrzn\" (UniqueName: \"kubernetes.io/projected/054314ce-7598-4127-bde5-a98ceeeae7f5-kube-api-access-dmrzn\") pod \"keystone-db-create-psnrd\" (UID: \"054314ce-7598-4127-bde5-a98ceeeae7f5\") " pod="openstack/keystone-db-create-psnrd" Mar 12 18:46:24.796628 master-0 kubenswrapper[29097]: I0312 18:46:24.796531 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqnf8\" (UniqueName: \"kubernetes.io/projected/90e68749-42ba-42d0-8ead-4517f6aae601-kube-api-access-qqnf8\") pod 
\"keystone-409f-account-create-update-8s7vs\" (UID: \"90e68749-42ba-42d0-8ead-4517f6aae601\") " pod="openstack/keystone-409f-account-create-update-8s7vs" Mar 12 18:46:24.805076 master-0 kubenswrapper[29097]: I0312 18:46:24.805030 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pbnk9"] Mar 12 18:46:24.848244 master-0 kubenswrapper[29097]: I0312 18:46:24.848178 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-m74jd"] Mar 12 18:46:24.849817 master-0 kubenswrapper[29097]: I0312 18:46:24.849793 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-m74jd" Mar 12 18:46:24.865551 master-0 kubenswrapper[29097]: I0312 18:46:24.865020 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-m74jd"] Mar 12 18:46:24.876633 master-0 kubenswrapper[29097]: I0312 18:46:24.875658 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpvmv\" (UniqueName: \"kubernetes.io/projected/5c788cc9-0232-4e2a-ac56-b52212c2d589-kube-api-access-qpvmv\") pod \"placement-db-create-m74jd\" (UID: \"5c788cc9-0232-4e2a-ac56-b52212c2d589\") " pod="openstack/placement-db-create-m74jd" Mar 12 18:46:24.876633 master-0 kubenswrapper[29097]: I0312 18:46:24.875747 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c788cc9-0232-4e2a-ac56-b52212c2d589-operator-scripts\") pod \"placement-db-create-m74jd\" (UID: \"5c788cc9-0232-4e2a-ac56-b52212c2d589\") " pod="openstack/placement-db-create-m74jd" Mar 12 18:46:24.977715 master-0 kubenswrapper[29097]: I0312 18:46:24.977659 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpvmv\" (UniqueName: 
\"kubernetes.io/projected/5c788cc9-0232-4e2a-ac56-b52212c2d589-kube-api-access-qpvmv\") pod \"placement-db-create-m74jd\" (UID: \"5c788cc9-0232-4e2a-ac56-b52212c2d589\") " pod="openstack/placement-db-create-m74jd" Mar 12 18:46:24.977959 master-0 kubenswrapper[29097]: I0312 18:46:24.977731 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c788cc9-0232-4e2a-ac56-b52212c2d589-operator-scripts\") pod \"placement-db-create-m74jd\" (UID: \"5c788cc9-0232-4e2a-ac56-b52212c2d589\") " pod="openstack/placement-db-create-m74jd" Mar 12 18:46:24.978738 master-0 kubenswrapper[29097]: I0312 18:46:24.978449 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c788cc9-0232-4e2a-ac56-b52212c2d589-operator-scripts\") pod \"placement-db-create-m74jd\" (UID: \"5c788cc9-0232-4e2a-ac56-b52212c2d589\") " pod="openstack/placement-db-create-m74jd" Mar 12 18:46:24.982581 master-0 kubenswrapper[29097]: I0312 18:46:24.982542 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-1444-account-create-update-pd8f4"] Mar 12 18:46:24.984356 master-0 kubenswrapper[29097]: I0312 18:46:24.984340 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1444-account-create-update-pd8f4" Mar 12 18:46:24.987814 master-0 kubenswrapper[29097]: I0312 18:46:24.986328 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 12 18:46:24.988922 master-0 kubenswrapper[29097]: I0312 18:46:24.988470 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-psnrd" Mar 12 18:46:25.000272 master-0 kubenswrapper[29097]: I0312 18:46:25.000159 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpvmv\" (UniqueName: \"kubernetes.io/projected/5c788cc9-0232-4e2a-ac56-b52212c2d589-kube-api-access-qpvmv\") pod \"placement-db-create-m74jd\" (UID: \"5c788cc9-0232-4e2a-ac56-b52212c2d589\") " pod="openstack/placement-db-create-m74jd" Mar 12 18:46:25.001509 master-0 kubenswrapper[29097]: I0312 18:46:25.001489 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1444-account-create-update-pd8f4"] Mar 12 18:46:25.022652 master-0 kubenswrapper[29097]: I0312 18:46:25.022580 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wk79g" event={"ID":"0bc3215a-a09f-49fe-a3f6-050665225137","Type":"ContainerStarted","Data":"52c8b4170cf04fa4cfadf0aaa543df8a0a3f315b1a12a023e68dbcf31970150c"} Mar 12 18:46:25.024275 master-0 kubenswrapper[29097]: I0312 18:46:25.024244 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-409f-account-create-update-8s7vs" Mar 12 18:46:25.030279 master-0 kubenswrapper[29097]: I0312 18:46:25.030225 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pbnk9" event={"ID":"e15fa2fc-d6cd-46b4-b814-1f00c4004c49","Type":"ContainerStarted","Data":"44dab1eeebc8306c05e308c70a544c94aff6c88fb948491bada9953740334087"} Mar 12 18:46:25.030396 master-0 kubenswrapper[29097]: I0312 18:46:25.030289 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pbnk9" event={"ID":"e15fa2fc-d6cd-46b4-b814-1f00c4004c49","Type":"ContainerStarted","Data":"7556abded0d7d7496c4b2cbe350d14b25b3361dee69e838e9f63f9999fd33501"} Mar 12 18:46:25.058422 master-0 kubenswrapper[29097]: I0312 18:46:25.058330 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-wk79g" podStartSLOduration=2.216336637 podStartE2EDuration="7.058313234s" podCreationTimestamp="2026-03-12 18:46:18 +0000 UTC" firstStartedPulling="2026-03-12 18:46:19.721346017 +0000 UTC m=+1019.275326124" lastFinishedPulling="2026-03-12 18:46:24.563322604 +0000 UTC m=+1024.117302721" observedRunningTime="2026-03-12 18:46:25.045225447 +0000 UTC m=+1024.599205544" watchObservedRunningTime="2026-03-12 18:46:25.058313234 +0000 UTC m=+1024.612293331" Mar 12 18:46:25.074069 master-0 kubenswrapper[29097]: I0312 18:46:25.073960 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-pbnk9" podStartSLOduration=4.073942384 podStartE2EDuration="4.073942384s" podCreationTimestamp="2026-03-12 18:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:46:25.063497323 +0000 UTC m=+1024.617477420" watchObservedRunningTime="2026-03-12 18:46:25.073942384 +0000 UTC m=+1024.627922481" Mar 12 18:46:25.080962 
master-0 kubenswrapper[29097]: I0312 18:46:25.080183 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed79e5de-c177-42ad-acf1-8b548f050262-operator-scripts\") pod \"placement-1444-account-create-update-pd8f4\" (UID: \"ed79e5de-c177-42ad-acf1-8b548f050262\") " pod="openstack/placement-1444-account-create-update-pd8f4" Mar 12 18:46:25.080962 master-0 kubenswrapper[29097]: I0312 18:46:25.080306 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9rln\" (UniqueName: \"kubernetes.io/projected/ed79e5de-c177-42ad-acf1-8b548f050262-kube-api-access-x9rln\") pod \"placement-1444-account-create-update-pd8f4\" (UID: \"ed79e5de-c177-42ad-acf1-8b548f050262\") " pod="openstack/placement-1444-account-create-update-pd8f4" Mar 12 18:46:25.182585 master-0 kubenswrapper[29097]: I0312 18:46:25.182514 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-m74jd" Mar 12 18:46:25.191464 master-0 kubenswrapper[29097]: I0312 18:46:25.190403 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed79e5de-c177-42ad-acf1-8b548f050262-operator-scripts\") pod \"placement-1444-account-create-update-pd8f4\" (UID: \"ed79e5de-c177-42ad-acf1-8b548f050262\") " pod="openstack/placement-1444-account-create-update-pd8f4" Mar 12 18:46:25.196541 master-0 kubenswrapper[29097]: I0312 18:46:25.191727 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9rln\" (UniqueName: \"kubernetes.io/projected/ed79e5de-c177-42ad-acf1-8b548f050262-kube-api-access-x9rln\") pod \"placement-1444-account-create-update-pd8f4\" (UID: \"ed79e5de-c177-42ad-acf1-8b548f050262\") " pod="openstack/placement-1444-account-create-update-pd8f4" Mar 12 18:46:25.198935 master-0 kubenswrapper[29097]: I0312 18:46:25.198887 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed79e5de-c177-42ad-acf1-8b548f050262-operator-scripts\") pod \"placement-1444-account-create-update-pd8f4\" (UID: \"ed79e5de-c177-42ad-acf1-8b548f050262\") " pod="openstack/placement-1444-account-create-update-pd8f4" Mar 12 18:46:25.209247 master-0 kubenswrapper[29097]: I0312 18:46:25.209190 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9rln\" (UniqueName: \"kubernetes.io/projected/ed79e5de-c177-42ad-acf1-8b548f050262-kube-api-access-x9rln\") pod \"placement-1444-account-create-update-pd8f4\" (UID: \"ed79e5de-c177-42ad-acf1-8b548f050262\") " pod="openstack/placement-1444-account-create-update-pd8f4" Mar 12 18:46:25.319969 master-0 kubenswrapper[29097]: I0312 18:46:25.319587 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1444-account-create-update-pd8f4" Mar 12 18:46:25.467751 master-0 kubenswrapper[29097]: I0312 18:46:25.467393 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-psnrd"] Mar 12 18:46:25.548582 master-0 kubenswrapper[29097]: I0312 18:46:25.548463 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-409f-account-create-update-8s7vs"] Mar 12 18:46:25.550065 master-0 kubenswrapper[29097]: W0312 18:46:25.549140 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90e68749_42ba_42d0_8ead_4517f6aae601.slice/crio-3d2a372a78a7ab8fa650d39e1b159a4a936519d1b9ce71da1f3aa1a397fc8563 WatchSource:0}: Error finding container 3d2a372a78a7ab8fa650d39e1b159a4a936519d1b9ce71da1f3aa1a397fc8563: Status 404 returned error can't find the container with id 3d2a372a78a7ab8fa650d39e1b159a4a936519d1b9ce71da1f3aa1a397fc8563 Mar 12 18:46:25.756879 master-0 kubenswrapper[29097]: I0312 18:46:25.749106 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-m74jd"] Mar 12 18:46:25.903467 master-0 kubenswrapper[29097]: I0312 18:46:25.902847 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1444-account-create-update-pd8f4"] Mar 12 18:46:25.931793 master-0 kubenswrapper[29097]: I0312 18:46:25.928840 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/066d07a5-82a7-49a5-b345-203a1ee212f0-etc-swift\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:25.931793 master-0 kubenswrapper[29097]: E0312 18:46:25.929157 29097 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 18:46:25.931793 master-0 kubenswrapper[29097]: E0312 
18:46:25.929173 29097 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 18:46:25.931793 master-0 kubenswrapper[29097]: E0312 18:46:25.929220 29097 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/066d07a5-82a7-49a5-b345-203a1ee212f0-etc-swift podName:066d07a5-82a7-49a5-b345-203a1ee212f0 nodeName:}" failed. No retries permitted until 2026-03-12 18:46:33.929205982 +0000 UTC m=+1033.483186079 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/066d07a5-82a7-49a5-b345-203a1ee212f0-etc-swift") pod "swift-storage-0" (UID: "066d07a5-82a7-49a5-b345-203a1ee212f0") : configmap "swift-ring-files" not found Mar 12 18:46:26.045106 master-0 kubenswrapper[29097]: I0312 18:46:26.045056 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-m74jd" event={"ID":"5c788cc9-0232-4e2a-ac56-b52212c2d589","Type":"ContainerStarted","Data":"d1da70347a71f50012848cce3a6f934808c7511b56bb021519846f4da242b181"} Mar 12 18:46:26.045191 master-0 kubenswrapper[29097]: I0312 18:46:26.045108 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-m74jd" event={"ID":"5c788cc9-0232-4e2a-ac56-b52212c2d589","Type":"ContainerStarted","Data":"1785f720fe9c1425d8219ed03f0e1737e9042155080c7aeb1cdd5af84b85c5b0"} Mar 12 18:46:26.047163 master-0 kubenswrapper[29097]: I0312 18:46:26.047128 29097 generic.go:334] "Generic (PLEG): container finished" podID="e15fa2fc-d6cd-46b4-b814-1f00c4004c49" containerID="44dab1eeebc8306c05e308c70a544c94aff6c88fb948491bada9953740334087" exitCode=0 Mar 12 18:46:26.048018 master-0 kubenswrapper[29097]: I0312 18:46:26.047173 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pbnk9" 
event={"ID":"e15fa2fc-d6cd-46b4-b814-1f00c4004c49","Type":"ContainerDied","Data":"44dab1eeebc8306c05e308c70a544c94aff6c88fb948491bada9953740334087"} Mar 12 18:46:26.052364 master-0 kubenswrapper[29097]: I0312 18:46:26.051376 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1444-account-create-update-pd8f4" event={"ID":"ed79e5de-c177-42ad-acf1-8b548f050262","Type":"ContainerStarted","Data":"9d94102c448cf445a256d12ec8e47f3f475b8461781cf46bc3af8fb235086f9f"} Mar 12 18:46:26.054993 master-0 kubenswrapper[29097]: I0312 18:46:26.054932 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-409f-account-create-update-8s7vs" event={"ID":"90e68749-42ba-42d0-8ead-4517f6aae601","Type":"ContainerStarted","Data":"73edbd0f9afcb1c84682dd0dc5985bda2ba535b86581442d418467a7104300dd"} Mar 12 18:46:26.054993 master-0 kubenswrapper[29097]: I0312 18:46:26.054992 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-409f-account-create-update-8s7vs" event={"ID":"90e68749-42ba-42d0-8ead-4517f6aae601","Type":"ContainerStarted","Data":"3d2a372a78a7ab8fa650d39e1b159a4a936519d1b9ce71da1f3aa1a397fc8563"} Mar 12 18:46:26.057394 master-0 kubenswrapper[29097]: I0312 18:46:26.057352 29097 generic.go:334] "Generic (PLEG): container finished" podID="054314ce-7598-4127-bde5-a98ceeeae7f5" containerID="d50996f804102455259ff584a87dee09d3f94baaf47b921c1e10f0bd7adfe600" exitCode=0 Mar 12 18:46:26.057693 master-0 kubenswrapper[29097]: I0312 18:46:26.057662 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-psnrd" event={"ID":"054314ce-7598-4127-bde5-a98ceeeae7f5","Type":"ContainerDied","Data":"d50996f804102455259ff584a87dee09d3f94baaf47b921c1e10f0bd7adfe600"} Mar 12 18:46:26.057693 master-0 kubenswrapper[29097]: I0312 18:46:26.057691 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-psnrd" 
event={"ID":"054314ce-7598-4127-bde5-a98ceeeae7f5","Type":"ContainerStarted","Data":"7d3add8d958fe7198a173bf622e5da54eaf26ca32713f27de87b88c14d3325f9"} Mar 12 18:46:26.077049 master-0 kubenswrapper[29097]: I0312 18:46:26.076958 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-m74jd" podStartSLOduration=2.076934718 podStartE2EDuration="2.076934718s" podCreationTimestamp="2026-03-12 18:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:46:26.063913993 +0000 UTC m=+1025.617894090" watchObservedRunningTime="2026-03-12 18:46:26.076934718 +0000 UTC m=+1025.630914835" Mar 12 18:46:26.367438 master-0 kubenswrapper[29097]: I0312 18:46:26.367375 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" Mar 12 18:46:26.446153 master-0 kubenswrapper[29097]: I0312 18:46:26.445971 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76f498f559-tw8sw"] Mar 12 18:46:26.446335 master-0 kubenswrapper[29097]: I0312 18:46:26.446210 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76f498f559-tw8sw" podUID="e1fa9220-74d6-4409-9db2-822727793acf" containerName="dnsmasq-dns" containerID="cri-o://68ee4f5c05c15e67d2920ce1e81f010a91948b5ef780d1d513fe5bc14d715085" gracePeriod=10 Mar 12 18:46:27.768427 master-0 kubenswrapper[29097]: I0312 18:46:27.767998 29097 generic.go:334] "Generic (PLEG): container finished" podID="5c788cc9-0232-4e2a-ac56-b52212c2d589" containerID="d1da70347a71f50012848cce3a6f934808c7511b56bb021519846f4da242b181" exitCode=0 Mar 12 18:46:27.768427 master-0 kubenswrapper[29097]: I0312 18:46:27.768077 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-m74jd" 
event={"ID":"5c788cc9-0232-4e2a-ac56-b52212c2d589","Type":"ContainerDied","Data":"d1da70347a71f50012848cce3a6f934808c7511b56bb021519846f4da242b181"} Mar 12 18:46:27.771580 master-0 kubenswrapper[29097]: I0312 18:46:27.770359 29097 generic.go:334] "Generic (PLEG): container finished" podID="e1fa9220-74d6-4409-9db2-822727793acf" containerID="68ee4f5c05c15e67d2920ce1e81f010a91948b5ef780d1d513fe5bc14d715085" exitCode=0 Mar 12 18:46:27.771580 master-0 kubenswrapper[29097]: I0312 18:46:27.770413 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f498f559-tw8sw" event={"ID":"e1fa9220-74d6-4409-9db2-822727793acf","Type":"ContainerDied","Data":"68ee4f5c05c15e67d2920ce1e81f010a91948b5ef780d1d513fe5bc14d715085"} Mar 12 18:46:27.778337 master-0 kubenswrapper[29097]: I0312 18:46:27.778267 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1444-account-create-update-pd8f4" event={"ID":"ed79e5de-c177-42ad-acf1-8b548f050262","Type":"ContainerStarted","Data":"781f26ec29e807a1a837a5b47ddebc3a16aa88d8af062c29f94f8f9f00e762de"} Mar 12 18:46:27.791583 master-0 kubenswrapper[29097]: I0312 18:46:27.786467 29097 generic.go:334] "Generic (PLEG): container finished" podID="90e68749-42ba-42d0-8ead-4517f6aae601" containerID="73edbd0f9afcb1c84682dd0dc5985bda2ba535b86581442d418467a7104300dd" exitCode=0 Mar 12 18:46:27.791583 master-0 kubenswrapper[29097]: I0312 18:46:27.786716 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-409f-account-create-update-8s7vs" event={"ID":"90e68749-42ba-42d0-8ead-4517f6aae601","Type":"ContainerDied","Data":"73edbd0f9afcb1c84682dd0dc5985bda2ba535b86581442d418467a7104300dd"} Mar 12 18:46:27.845330 master-0 kubenswrapper[29097]: I0312 18:46:27.837265 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-1444-account-create-update-pd8f4" podStartSLOduration=3.837244237 podStartE2EDuration="3.837244237s" 
podCreationTimestamp="2026-03-12 18:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:46:27.816202642 +0000 UTC m=+1027.370182739" watchObservedRunningTime="2026-03-12 18:46:27.837244237 +0000 UTC m=+1027.391224334" Mar 12 18:46:28.051786 master-0 kubenswrapper[29097]: E0312 18:46:28.051505 29097 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded79e5de_c177_42ad_acf1_8b548f050262.slice/crio-conmon-781f26ec29e807a1a837a5b47ddebc3a16aa88d8af062c29f94f8f9f00e762de.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded79e5de_c177_42ad_acf1_8b548f050262.slice/crio-781f26ec29e807a1a837a5b47ddebc3a16aa88d8af062c29f94f8f9f00e762de.scope\": RecentStats: unable to find data in memory cache]" Mar 12 18:46:28.458973 master-0 kubenswrapper[29097]: I0312 18:46:28.455219 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76f498f559-tw8sw" Mar 12 18:46:28.576538 master-0 kubenswrapper[29097]: I0312 18:46:28.565260 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-ovsdbserver-sb\") pod \"e1fa9220-74d6-4409-9db2-822727793acf\" (UID: \"e1fa9220-74d6-4409-9db2-822727793acf\") " Mar 12 18:46:28.576538 master-0 kubenswrapper[29097]: I0312 18:46:28.565318 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-ovsdbserver-nb\") pod \"e1fa9220-74d6-4409-9db2-822727793acf\" (UID: \"e1fa9220-74d6-4409-9db2-822727793acf\") " Mar 12 18:46:28.576538 master-0 kubenswrapper[29097]: I0312 18:46:28.565380 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-dns-svc\") pod \"e1fa9220-74d6-4409-9db2-822727793acf\" (UID: \"e1fa9220-74d6-4409-9db2-822727793acf\") " Mar 12 18:46:28.576538 master-0 kubenswrapper[29097]: I0312 18:46:28.565418 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-config\") pod \"e1fa9220-74d6-4409-9db2-822727793acf\" (UID: \"e1fa9220-74d6-4409-9db2-822727793acf\") " Mar 12 18:46:28.576538 master-0 kubenswrapper[29097]: I0312 18:46:28.565451 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp6hw\" (UniqueName: \"kubernetes.io/projected/e1fa9220-74d6-4409-9db2-822727793acf-kube-api-access-sp6hw\") pod \"e1fa9220-74d6-4409-9db2-822727793acf\" (UID: \"e1fa9220-74d6-4409-9db2-822727793acf\") " Mar 12 18:46:28.584546 master-0 kubenswrapper[29097]: I0312 18:46:28.579874 29097 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1fa9220-74d6-4409-9db2-822727793acf-kube-api-access-sp6hw" (OuterVolumeSpecName: "kube-api-access-sp6hw") pod "e1fa9220-74d6-4409-9db2-822727793acf" (UID: "e1fa9220-74d6-4409-9db2-822727793acf"). InnerVolumeSpecName "kube-api-access-sp6hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:46:28.653617 master-0 kubenswrapper[29097]: I0312 18:46:28.642180 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e1fa9220-74d6-4409-9db2-822727793acf" (UID: "e1fa9220-74d6-4409-9db2-822727793acf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:46:28.653617 master-0 kubenswrapper[29097]: I0312 18:46:28.647384 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-psnrd" Mar 12 18:46:28.657591 master-0 kubenswrapper[29097]: I0312 18:46:28.656315 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pbnk9" Mar 12 18:46:28.664842 master-0 kubenswrapper[29097]: I0312 18:46:28.664799 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e1fa9220-74d6-4409-9db2-822727793acf" (UID: "e1fa9220-74d6-4409-9db2-822727793acf"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:46:28.669899 master-0 kubenswrapper[29097]: I0312 18:46:28.669849 29097 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 12 18:46:28.669899 master-0 kubenswrapper[29097]: I0312 18:46:28.669880 29097 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 12 18:46:28.669899 master-0 kubenswrapper[29097]: I0312 18:46:28.669890 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp6hw\" (UniqueName: \"kubernetes.io/projected/e1fa9220-74d6-4409-9db2-822727793acf-kube-api-access-sp6hw\") on node \"master-0\" DevicePath \"\""
Mar 12 18:46:28.692596 master-0 kubenswrapper[29097]: I0312 18:46:28.691355 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-config" (OuterVolumeSpecName: "config") pod "e1fa9220-74d6-4409-9db2-822727793acf" (UID: "e1fa9220-74d6-4409-9db2-822727793acf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:46:28.698747 master-0 kubenswrapper[29097]: I0312 18:46:28.696406 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e1fa9220-74d6-4409-9db2-822727793acf" (UID: "e1fa9220-74d6-4409-9db2-822727793acf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:46:28.733540 master-0 kubenswrapper[29097]: I0312 18:46:28.729081 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-409f-account-create-update-8s7vs"
Mar 12 18:46:28.781950 master-0 kubenswrapper[29097]: I0312 18:46:28.778743 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c5db\" (UniqueName: \"kubernetes.io/projected/e15fa2fc-d6cd-46b4-b814-1f00c4004c49-kube-api-access-9c5db\") pod \"e15fa2fc-d6cd-46b4-b814-1f00c4004c49\" (UID: \"e15fa2fc-d6cd-46b4-b814-1f00c4004c49\") "
Mar 12 18:46:28.781950 master-0 kubenswrapper[29097]: I0312 18:46:28.779052 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e15fa2fc-d6cd-46b4-b814-1f00c4004c49-operator-scripts\") pod \"e15fa2fc-d6cd-46b4-b814-1f00c4004c49\" (UID: \"e15fa2fc-d6cd-46b4-b814-1f00c4004c49\") "
Mar 12 18:46:28.781950 master-0 kubenswrapper[29097]: I0312 18:46:28.779286 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqnf8\" (UniqueName: \"kubernetes.io/projected/90e68749-42ba-42d0-8ead-4517f6aae601-kube-api-access-qqnf8\") pod \"90e68749-42ba-42d0-8ead-4517f6aae601\" (UID: \"90e68749-42ba-42d0-8ead-4517f6aae601\") "
Mar 12 18:46:28.781950 master-0 kubenswrapper[29097]: I0312 18:46:28.779312 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmrzn\" (UniqueName: \"kubernetes.io/projected/054314ce-7598-4127-bde5-a98ceeeae7f5-kube-api-access-dmrzn\") pod \"054314ce-7598-4127-bde5-a98ceeeae7f5\" (UID: \"054314ce-7598-4127-bde5-a98ceeeae7f5\") "
Mar 12 18:46:28.781950 master-0 kubenswrapper[29097]: I0312 18:46:28.779337 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90e68749-42ba-42d0-8ead-4517f6aae601-operator-scripts\") pod \"90e68749-42ba-42d0-8ead-4517f6aae601\" (UID: \"90e68749-42ba-42d0-8ead-4517f6aae601\") "
Mar 12 18:46:28.781950 master-0 kubenswrapper[29097]: I0312 18:46:28.779364 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/054314ce-7598-4127-bde5-a98ceeeae7f5-operator-scripts\") pod \"054314ce-7598-4127-bde5-a98ceeeae7f5\" (UID: \"054314ce-7598-4127-bde5-a98ceeeae7f5\") "
Mar 12 18:46:28.781950 master-0 kubenswrapper[29097]: I0312 18:46:28.780084 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e15fa2fc-d6cd-46b4-b814-1f00c4004c49-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e15fa2fc-d6cd-46b4-b814-1f00c4004c49" (UID: "e15fa2fc-d6cd-46b4-b814-1f00c4004c49"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:46:28.781950 master-0 kubenswrapper[29097]: I0312 18:46:28.780322 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e15fa2fc-d6cd-46b4-b814-1f00c4004c49-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 18:46:28.781950 master-0 kubenswrapper[29097]: I0312 18:46:28.780339 29097 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 12 18:46:28.781950 master-0 kubenswrapper[29097]: I0312 18:46:28.780352 29097 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1fa9220-74d6-4409-9db2-822727793acf-config\") on node \"master-0\" DevicePath \"\""
Mar 12 18:46:28.781950 master-0 kubenswrapper[29097]: I0312 18:46:28.780498 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90e68749-42ba-42d0-8ead-4517f6aae601-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90e68749-42ba-42d0-8ead-4517f6aae601" (UID: "90e68749-42ba-42d0-8ead-4517f6aae601"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:46:28.781950 master-0 kubenswrapper[29097]: I0312 18:46:28.781917 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/054314ce-7598-4127-bde5-a98ceeeae7f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "054314ce-7598-4127-bde5-a98ceeeae7f5" (UID: "054314ce-7598-4127-bde5-a98ceeeae7f5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:46:28.784753 master-0 kubenswrapper[29097]: I0312 18:46:28.784272 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e15fa2fc-d6cd-46b4-b814-1f00c4004c49-kube-api-access-9c5db" (OuterVolumeSpecName: "kube-api-access-9c5db") pod "e15fa2fc-d6cd-46b4-b814-1f00c4004c49" (UID: "e15fa2fc-d6cd-46b4-b814-1f00c4004c49"). InnerVolumeSpecName "kube-api-access-9c5db". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:46:28.802707 master-0 kubenswrapper[29097]: I0312 18:46:28.786583 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/054314ce-7598-4127-bde5-a98ceeeae7f5-kube-api-access-dmrzn" (OuterVolumeSpecName: "kube-api-access-dmrzn") pod "054314ce-7598-4127-bde5-a98ceeeae7f5" (UID: "054314ce-7598-4127-bde5-a98ceeeae7f5"). InnerVolumeSpecName "kube-api-access-dmrzn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:46:28.802707 master-0 kubenswrapper[29097]: I0312 18:46:28.798264 29097 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-76f498f559-tw8sw"
Mar 12 18:46:28.802865 master-0 kubenswrapper[29097]: I0312 18:46:28.802813 29097 generic.go:334] "Generic (PLEG): container finished" podID="ed79e5de-c177-42ad-acf1-8b548f050262" containerID="781f26ec29e807a1a837a5b47ddebc3a16aa88d8af062c29f94f8f9f00e762de" exitCode=0
Mar 12 18:46:28.808535 master-0 kubenswrapper[29097]: I0312 18:46:28.805738 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90e68749-42ba-42d0-8ead-4517f6aae601-kube-api-access-qqnf8" (OuterVolumeSpecName: "kube-api-access-qqnf8") pod "90e68749-42ba-42d0-8ead-4517f6aae601" (UID: "90e68749-42ba-42d0-8ead-4517f6aae601"). InnerVolumeSpecName "kube-api-access-qqnf8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:46:28.833813 master-0 kubenswrapper[29097]: I0312 18:46:28.832546 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-409f-account-create-update-8s7vs"
Mar 12 18:46:28.834218 master-0 kubenswrapper[29097]: I0312 18:46:28.834175 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-psnrd"
Mar 12 18:46:28.838125 master-0 kubenswrapper[29097]: I0312 18:46:28.837480 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pbnk9"
Mar 12 18:46:28.877883 master-0 kubenswrapper[29097]: I0312 18:46:28.876988 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f498f559-tw8sw" event={"ID":"e1fa9220-74d6-4409-9db2-822727793acf","Type":"ContainerDied","Data":"6561cb27937dedf8030bf37ed440cd6246a6b7bab572a7faccb500c6071cfae9"}
Mar 12 18:46:28.877883 master-0 kubenswrapper[29097]: I0312 18:46:28.877031 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1444-account-create-update-pd8f4" event={"ID":"ed79e5de-c177-42ad-acf1-8b548f050262","Type":"ContainerDied","Data":"781f26ec29e807a1a837a5b47ddebc3a16aa88d8af062c29f94f8f9f00e762de"}
Mar 12 18:46:28.877883 master-0 kubenswrapper[29097]: I0312 18:46:28.877044 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-409f-account-create-update-8s7vs" event={"ID":"90e68749-42ba-42d0-8ead-4517f6aae601","Type":"ContainerDied","Data":"3d2a372a78a7ab8fa650d39e1b159a4a936519d1b9ce71da1f3aa1a397fc8563"}
Mar 12 18:46:28.877883 master-0 kubenswrapper[29097]: I0312 18:46:28.877055 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d2a372a78a7ab8fa650d39e1b159a4a936519d1b9ce71da1f3aa1a397fc8563"
Mar 12 18:46:28.877883 master-0 kubenswrapper[29097]: I0312 18:46:28.877065 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-psnrd" event={"ID":"054314ce-7598-4127-bde5-a98ceeeae7f5","Type":"ContainerDied","Data":"7d3add8d958fe7198a173bf622e5da54eaf26ca32713f27de87b88c14d3325f9"}
Mar 12 18:46:28.877883 master-0 kubenswrapper[29097]: I0312 18:46:28.877074 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d3add8d958fe7198a173bf622e5da54eaf26ca32713f27de87b88c14d3325f9"
Mar 12 18:46:28.877883 master-0 kubenswrapper[29097]: I0312 18:46:28.877082 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pbnk9" event={"ID":"e15fa2fc-d6cd-46b4-b814-1f00c4004c49","Type":"ContainerDied","Data":"7556abded0d7d7496c4b2cbe350d14b25b3361dee69e838e9f63f9999fd33501"}
Mar 12 18:46:28.877883 master-0 kubenswrapper[29097]: I0312 18:46:28.877092 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7556abded0d7d7496c4b2cbe350d14b25b3361dee69e838e9f63f9999fd33501"
Mar 12 18:46:28.877883 master-0 kubenswrapper[29097]: I0312 18:46:28.877108 29097 scope.go:117] "RemoveContainer" containerID="68ee4f5c05c15e67d2920ce1e81f010a91948b5ef780d1d513fe5bc14d715085"
Mar 12 18:46:28.885741 master-0 kubenswrapper[29097]: I0312 18:46:28.885710 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqnf8\" (UniqueName: \"kubernetes.io/projected/90e68749-42ba-42d0-8ead-4517f6aae601-kube-api-access-qqnf8\") on node \"master-0\" DevicePath \"\""
Mar 12 18:46:28.885741 master-0 kubenswrapper[29097]: I0312 18:46:28.885739 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmrzn\" (UniqueName: \"kubernetes.io/projected/054314ce-7598-4127-bde5-a98ceeeae7f5-kube-api-access-dmrzn\") on node \"master-0\" DevicePath \"\""
Mar 12 18:46:28.885838 master-0 kubenswrapper[29097]: I0312 18:46:28.885749 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90e68749-42ba-42d0-8ead-4517f6aae601-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 18:46:28.885838 master-0 kubenswrapper[29097]: I0312 18:46:28.885759 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/054314ce-7598-4127-bde5-a98ceeeae7f5-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 18:46:28.885838 master-0 kubenswrapper[29097]: I0312 18:46:28.885768 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c5db\" (UniqueName: \"kubernetes.io/projected/e15fa2fc-d6cd-46b4-b814-1f00c4004c49-kube-api-access-9c5db\") on node \"master-0\" DevicePath \"\""
Mar 12 18:46:28.898209 master-0 kubenswrapper[29097]: I0312 18:46:28.897155 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-kvbg4"]
Mar 12 18:46:28.902270 master-0 kubenswrapper[29097]: E0312 18:46:28.901358 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e15fa2fc-d6cd-46b4-b814-1f00c4004c49" containerName="mariadb-account-create-update"
Mar 12 18:46:28.902270 master-0 kubenswrapper[29097]: I0312 18:46:28.901414 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="e15fa2fc-d6cd-46b4-b814-1f00c4004c49" containerName="mariadb-account-create-update"
Mar 12 18:46:28.902270 master-0 kubenswrapper[29097]: E0312 18:46:28.901458 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fa9220-74d6-4409-9db2-822727793acf" containerName="dnsmasq-dns"
Mar 12 18:46:28.902270 master-0 kubenswrapper[29097]: I0312 18:46:28.901465 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fa9220-74d6-4409-9db2-822727793acf" containerName="dnsmasq-dns"
Mar 12 18:46:28.902270 master-0 kubenswrapper[29097]: E0312 18:46:28.901497 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="054314ce-7598-4127-bde5-a98ceeeae7f5" containerName="mariadb-database-create"
Mar 12 18:46:28.902270 master-0 kubenswrapper[29097]: I0312 18:46:28.901503 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="054314ce-7598-4127-bde5-a98ceeeae7f5" containerName="mariadb-database-create"
Mar 12 18:46:28.902270 master-0 kubenswrapper[29097]: E0312 18:46:28.901565 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90e68749-42ba-42d0-8ead-4517f6aae601" containerName="mariadb-account-create-update"
Mar 12 18:46:28.902270 master-0 kubenswrapper[29097]: I0312 18:46:28.901573 29097 state_mem.go:107] "Deleted CPUSet assignment"
podUID="90e68749-42ba-42d0-8ead-4517f6aae601" containerName="mariadb-account-create-update"
Mar 12 18:46:28.902270 master-0 kubenswrapper[29097]: E0312 18:46:28.901596 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fa9220-74d6-4409-9db2-822727793acf" containerName="init"
Mar 12 18:46:28.902270 master-0 kubenswrapper[29097]: I0312 18:46:28.901604 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fa9220-74d6-4409-9db2-822727793acf" containerName="init"
Mar 12 18:46:28.904935 master-0 kubenswrapper[29097]: I0312 18:46:28.904907 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="e15fa2fc-d6cd-46b4-b814-1f00c4004c49" containerName="mariadb-account-create-update"
Mar 12 18:46:28.905009 master-0 kubenswrapper[29097]: I0312 18:46:28.904971 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="054314ce-7598-4127-bde5-a98ceeeae7f5" containerName="mariadb-database-create"
Mar 12 18:46:28.905009 master-0 kubenswrapper[29097]: I0312 18:46:28.904990 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1fa9220-74d6-4409-9db2-822727793acf" containerName="dnsmasq-dns"
Mar 12 18:46:28.905080 master-0 kubenswrapper[29097]: I0312 18:46:28.905015 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="90e68749-42ba-42d0-8ead-4517f6aae601" containerName="mariadb-account-create-update"
Mar 12 18:46:28.905835 master-0 kubenswrapper[29097]: I0312 18:46:28.905807 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kvbg4"
Mar 12 18:46:28.910446 master-0 kubenswrapper[29097]: I0312 18:46:28.910353 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kvbg4"]
Mar 12 18:46:28.952440 master-0 kubenswrapper[29097]: I0312 18:46:28.952314 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1b8b-account-create-update-6trjd"]
Mar 12 18:46:28.954388 master-0 kubenswrapper[29097]: I0312 18:46:28.954355 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1b8b-account-create-update-6trjd"
Mar 12 18:46:28.960739 master-0 kubenswrapper[29097]: I0312 18:46:28.960694 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Mar 12 18:46:28.983811 master-0 kubenswrapper[29097]: I0312 18:46:28.983463 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1b8b-account-create-update-6trjd"]
Mar 12 18:46:28.988605 master-0 kubenswrapper[29097]: I0312 18:46:28.988106 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70840f07-cd5a-450b-ad71-40f71f42d2ae-operator-scripts\") pod \"glance-db-create-kvbg4\" (UID: \"70840f07-cd5a-450b-ad71-40f71f42d2ae\") " pod="openstack/glance-db-create-kvbg4"
Mar 12 18:46:28.988605 master-0 kubenswrapper[29097]: I0312 18:46:28.988187 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjsmp\" (UniqueName: \"kubernetes.io/projected/70840f07-cd5a-450b-ad71-40f71f42d2ae-kube-api-access-cjsmp\") pod \"glance-db-create-kvbg4\" (UID: \"70840f07-cd5a-450b-ad71-40f71f42d2ae\") " pod="openstack/glance-db-create-kvbg4"
Mar 12 18:46:29.018427 master-0 kubenswrapper[29097]: I0312 18:46:29.015768 29097 scope.go:117] "RemoveContainer" containerID="f7782f99d7d6184bb98f78046ccfa508d958a5a48ee4d32535d436ab810210f7"
Mar 12 18:46:29.037083 master-0 kubenswrapper[29097]: I0312 18:46:29.037025 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76f498f559-tw8sw"]
Mar 12 18:46:29.050633 master-0 kubenswrapper[29097]: I0312 18:46:29.050572 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76f498f559-tw8sw"]
Mar 12 18:46:29.090470 master-0 kubenswrapper[29097]: I0312 18:46:29.090383 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8brk\" (UniqueName: \"kubernetes.io/projected/df47b1d0-5d7e-4e88-ab52-4936dcfa4e19-kube-api-access-v8brk\") pod \"glance-1b8b-account-create-update-6trjd\" (UID: \"df47b1d0-5d7e-4e88-ab52-4936dcfa4e19\") " pod="openstack/glance-1b8b-account-create-update-6trjd"
Mar 12 18:46:29.090861 master-0 kubenswrapper[29097]: I0312 18:46:29.090828 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70840f07-cd5a-450b-ad71-40f71f42d2ae-operator-scripts\") pod \"glance-db-create-kvbg4\" (UID: \"70840f07-cd5a-450b-ad71-40f71f42d2ae\") " pod="openstack/glance-db-create-kvbg4"
Mar 12 18:46:29.090982 master-0 kubenswrapper[29097]: I0312 18:46:29.090970 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjsmp\" (UniqueName: \"kubernetes.io/projected/70840f07-cd5a-450b-ad71-40f71f42d2ae-kube-api-access-cjsmp\") pod \"glance-db-create-kvbg4\" (UID: \"70840f07-cd5a-450b-ad71-40f71f42d2ae\") " pod="openstack/glance-db-create-kvbg4"
Mar 12 18:46:29.091471 master-0 kubenswrapper[29097]: I0312 18:46:29.091453 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df47b1d0-5d7e-4e88-ab52-4936dcfa4e19-operator-scripts\") pod \"glance-1b8b-account-create-update-6trjd\" (UID: \"df47b1d0-5d7e-4e88-ab52-4936dcfa4e19\") " pod="openstack/glance-1b8b-account-create-update-6trjd"
Mar 12 18:46:29.091751 master-0 kubenswrapper[29097]: I0312 18:46:29.091714 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70840f07-cd5a-450b-ad71-40f71f42d2ae-operator-scripts\") pod \"glance-db-create-kvbg4\" (UID: \"70840f07-cd5a-450b-ad71-40f71f42d2ae\") " pod="openstack/glance-db-create-kvbg4"
Mar 12 18:46:29.109494 master-0 kubenswrapper[29097]: I0312 18:46:29.108451 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjsmp\" (UniqueName: \"kubernetes.io/projected/70840f07-cd5a-450b-ad71-40f71f42d2ae-kube-api-access-cjsmp\") pod \"glance-db-create-kvbg4\" (UID: \"70840f07-cd5a-450b-ad71-40f71f42d2ae\") " pod="openstack/glance-db-create-kvbg4"
Mar 12 18:46:29.194607 master-0 kubenswrapper[29097]: I0312 18:46:29.193641 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df47b1d0-5d7e-4e88-ab52-4936dcfa4e19-operator-scripts\") pod \"glance-1b8b-account-create-update-6trjd\" (UID: \"df47b1d0-5d7e-4e88-ab52-4936dcfa4e19\") " pod="openstack/glance-1b8b-account-create-update-6trjd"
Mar 12 18:46:29.194607 master-0 kubenswrapper[29097]: I0312 18:46:29.193714 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8brk\" (UniqueName: \"kubernetes.io/projected/df47b1d0-5d7e-4e88-ab52-4936dcfa4e19-kube-api-access-v8brk\") pod \"glance-1b8b-account-create-update-6trjd\" (UID: \"df47b1d0-5d7e-4e88-ab52-4936dcfa4e19\") " pod="openstack/glance-1b8b-account-create-update-6trjd"
Mar 12 18:46:29.197292 master-0 kubenswrapper[29097]: I0312 18:46:29.197247 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df47b1d0-5d7e-4e88-ab52-4936dcfa4e19-operator-scripts\") pod \"glance-1b8b-account-create-update-6trjd\" (UID: \"df47b1d0-5d7e-4e88-ab52-4936dcfa4e19\") " pod="openstack/glance-1b8b-account-create-update-6trjd"
Mar 12 18:46:29.210142 master-0 kubenswrapper[29097]: I0312 18:46:29.210105 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8brk\" (UniqueName: \"kubernetes.io/projected/df47b1d0-5d7e-4e88-ab52-4936dcfa4e19-kube-api-access-v8brk\") pod \"glance-1b8b-account-create-update-6trjd\" (UID: \"df47b1d0-5d7e-4e88-ab52-4936dcfa4e19\") " pod="openstack/glance-1b8b-account-create-update-6trjd"
Mar 12 18:46:29.295280 master-0 kubenswrapper[29097]: I0312 18:46:29.295230 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-m74jd"
Mar 12 18:46:29.326956 master-0 kubenswrapper[29097]: I0312 18:46:29.326870 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kvbg4"
Mar 12 18:46:29.329920 master-0 kubenswrapper[29097]: I0312 18:46:29.329863 29097 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/glance-1b8b-account-create-update-6trjd"
Mar 12 18:46:29.399381 master-0 kubenswrapper[29097]: I0312 18:46:29.399343 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c788cc9-0232-4e2a-ac56-b52212c2d589-operator-scripts\") pod \"5c788cc9-0232-4e2a-ac56-b52212c2d589\" (UID: \"5c788cc9-0232-4e2a-ac56-b52212c2d589\") "
Mar 12 18:46:29.399508 master-0 kubenswrapper[29097]: I0312 18:46:29.399416 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpvmv\" (UniqueName: \"kubernetes.io/projected/5c788cc9-0232-4e2a-ac56-b52212c2d589-kube-api-access-qpvmv\") pod \"5c788cc9-0232-4e2a-ac56-b52212c2d589\" (UID: \"5c788cc9-0232-4e2a-ac56-b52212c2d589\") "
Mar 12 18:46:29.400024 master-0 kubenswrapper[29097]: I0312 18:46:29.399968 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c788cc9-0232-4e2a-ac56-b52212c2d589-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c788cc9-0232-4e2a-ac56-b52212c2d589" (UID: "5c788cc9-0232-4e2a-ac56-b52212c2d589"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:46:29.405536 master-0 kubenswrapper[29097]: I0312 18:46:29.405476 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c788cc9-0232-4e2a-ac56-b52212c2d589-kube-api-access-qpvmv" (OuterVolumeSpecName: "kube-api-access-qpvmv") pod "5c788cc9-0232-4e2a-ac56-b52212c2d589" (UID: "5c788cc9-0232-4e2a-ac56-b52212c2d589"). InnerVolumeSpecName "kube-api-access-qpvmv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:46:29.502471 master-0 kubenswrapper[29097]: I0312 18:46:29.502165 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c788cc9-0232-4e2a-ac56-b52212c2d589-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 18:46:29.502471 master-0 kubenswrapper[29097]: I0312 18:46:29.502196 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpvmv\" (UniqueName: \"kubernetes.io/projected/5c788cc9-0232-4e2a-ac56-b52212c2d589-kube-api-access-qpvmv\") on node \"master-0\" DevicePath \"\""
Mar 12 18:46:29.845705 master-0 kubenswrapper[29097]: I0312 18:46:29.845504 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kvbg4"]
Mar 12 18:46:29.855021 master-0 kubenswrapper[29097]: I0312 18:46:29.853960 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1b8b-account-create-update-6trjd"]
Mar 12 18:46:29.856817 master-0 kubenswrapper[29097]: I0312 18:46:29.856782 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-m74jd"
Mar 12 18:46:29.858175 master-0 kubenswrapper[29097]: I0312 18:46:29.858124 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-m74jd" event={"ID":"5c788cc9-0232-4e2a-ac56-b52212c2d589","Type":"ContainerDied","Data":"1785f720fe9c1425d8219ed03f0e1737e9042155080c7aeb1cdd5af84b85c5b0"}
Mar 12 18:46:29.858254 master-0 kubenswrapper[29097]: I0312 18:46:29.858176 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1785f720fe9c1425d8219ed03f0e1737e9042155080c7aeb1cdd5af84b85c5b0"
Mar 12 18:46:29.860741 master-0 kubenswrapper[29097]: I0312 18:46:29.860689 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kvbg4" event={"ID":"70840f07-cd5a-450b-ad71-40f71f42d2ae","Type":"ContainerStarted","Data":"9857ab63d8c2d2232d5538d88f311bc037c5d0e612bb718e91a0113bd5020053"}
Mar 12 18:46:30.259010 master-0 kubenswrapper[29097]: I0312 18:46:30.258964 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1444-account-create-update-pd8f4"
Mar 12 18:46:30.324617 master-0 kubenswrapper[29097]: I0312 18:46:30.324502 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9rln\" (UniqueName: \"kubernetes.io/projected/ed79e5de-c177-42ad-acf1-8b548f050262-kube-api-access-x9rln\") pod \"ed79e5de-c177-42ad-acf1-8b548f050262\" (UID: \"ed79e5de-c177-42ad-acf1-8b548f050262\") "
Mar 12 18:46:30.324711 master-0 kubenswrapper[29097]: I0312 18:46:30.324695 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed79e5de-c177-42ad-acf1-8b548f050262-operator-scripts\") pod \"ed79e5de-c177-42ad-acf1-8b548f050262\" (UID: \"ed79e5de-c177-42ad-acf1-8b548f050262\") "
Mar 12 18:46:30.327317 master-0 kubenswrapper[29097]: I0312 18:46:30.327274 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed79e5de-c177-42ad-acf1-8b548f050262-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed79e5de-c177-42ad-acf1-8b548f050262" (UID: "ed79e5de-c177-42ad-acf1-8b548f050262"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:46:30.330585 master-0 kubenswrapper[29097]: I0312 18:46:30.330504 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed79e5de-c177-42ad-acf1-8b548f050262-kube-api-access-x9rln" (OuterVolumeSpecName: "kube-api-access-x9rln") pod "ed79e5de-c177-42ad-acf1-8b548f050262" (UID: "ed79e5de-c177-42ad-acf1-8b548f050262"). InnerVolumeSpecName "kube-api-access-x9rln". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:46:30.426882 master-0 kubenswrapper[29097]: I0312 18:46:30.426824 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9rln\" (UniqueName: \"kubernetes.io/projected/ed79e5de-c177-42ad-acf1-8b548f050262-kube-api-access-x9rln\") on node \"master-0\" DevicePath \"\""
Mar 12 18:46:30.426882 master-0 kubenswrapper[29097]: I0312 18:46:30.426876 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed79e5de-c177-42ad-acf1-8b548f050262-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 18:46:30.741108 master-0 kubenswrapper[29097]: I0312 18:46:30.741024 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1fa9220-74d6-4409-9db2-822727793acf" path="/var/lib/kubelet/pods/e1fa9220-74d6-4409-9db2-822727793acf/volumes"
Mar 12 18:46:30.878546 master-0 kubenswrapper[29097]: I0312 18:46:30.878414 29097 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/placement-1444-account-create-update-pd8f4"
Mar 12 18:46:30.879913 master-0 kubenswrapper[29097]: I0312 18:46:30.878410 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1444-account-create-update-pd8f4" event={"ID":"ed79e5de-c177-42ad-acf1-8b548f050262","Type":"ContainerDied","Data":"9d94102c448cf445a256d12ec8e47f3f475b8461781cf46bc3af8fb235086f9f"}
Mar 12 18:46:30.879913 master-0 kubenswrapper[29097]: I0312 18:46:30.878674 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d94102c448cf445a256d12ec8e47f3f475b8461781cf46bc3af8fb235086f9f"
Mar 12 18:46:30.880353 master-0 kubenswrapper[29097]: I0312 18:46:30.880291 29097 generic.go:334] "Generic (PLEG): container finished" podID="70840f07-cd5a-450b-ad71-40f71f42d2ae" containerID="be4e1f76faf5438f0de07d0290016147cac26d95b87612de6184b048e3572d3b" exitCode=0
Mar 12 18:46:30.880445 master-0 kubenswrapper[29097]: I0312 18:46:30.880360 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kvbg4" event={"ID":"70840f07-cd5a-450b-ad71-40f71f42d2ae","Type":"ContainerDied","Data":"be4e1f76faf5438f0de07d0290016147cac26d95b87612de6184b048e3572d3b"}
Mar 12 18:46:30.882757 master-0 kubenswrapper[29097]: I0312 18:46:30.882695 29097 generic.go:334] "Generic (PLEG): container finished" podID="df47b1d0-5d7e-4e88-ab52-4936dcfa4e19" containerID="9869783cf8bb7611474635d320f9208087d73f58a08a5162c43a95ca2dc5f980" exitCode=0
Mar 12 18:46:30.882757 master-0 kubenswrapper[29097]: I0312 18:46:30.882748 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1b8b-account-create-update-6trjd" event={"ID":"df47b1d0-5d7e-4e88-ab52-4936dcfa4e19","Type":"ContainerDied","Data":"9869783cf8bb7611474635d320f9208087d73f58a08a5162c43a95ca2dc5f980"}
Mar 12 18:46:30.883000 master-0 kubenswrapper[29097]: I0312 18:46:30.882776 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1b8b-account-create-update-6trjd" event={"ID":"df47b1d0-5d7e-4e88-ab52-4936dcfa4e19","Type":"ContainerStarted","Data":"9851f09d6749da0817ed7eb7802dec480c6acbd22b39e4ea5913bc7e68660343"}
Mar 12 18:46:31.212485 master-0 kubenswrapper[29097]: I0312 18:46:31.212413 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pbnk9"]
Mar 12 18:46:31.221913 master-0 kubenswrapper[29097]: I0312 18:46:31.221844 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-pbnk9"]
Mar 12 18:46:31.307938 master-0 kubenswrapper[29097]: I0312 18:46:31.307826 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hbgqc"]
Mar 12 18:46:31.310480 master-0 kubenswrapper[29097]: E0312 18:46:31.310434 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed79e5de-c177-42ad-acf1-8b548f050262" containerName="mariadb-account-create-update"
Mar 12 18:46:31.310480 master-0 kubenswrapper[29097]: I0312 18:46:31.310477 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed79e5de-c177-42ad-acf1-8b548f050262" containerName="mariadb-account-create-update"
Mar 12 18:46:31.310972 master-0 kubenswrapper[29097]: E0312 18:46:31.310946 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c788cc9-0232-4e2a-ac56-b52212c2d589" containerName="mariadb-database-create"
Mar 12 18:46:31.310972 master-0 kubenswrapper[29097]: I0312 18:46:31.310968 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c788cc9-0232-4e2a-ac56-b52212c2d589" containerName="mariadb-database-create"
Mar 12 18:46:31.311569 master-0 kubenswrapper[29097]: I0312 18:46:31.311541 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c788cc9-0232-4e2a-ac56-b52212c2d589" containerName="mariadb-database-create"
Mar 12 18:46:31.311675 master-0 kubenswrapper[29097]: I0312 18:46:31.311593 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed79e5de-c177-42ad-acf1-8b548f050262" containerName="mariadb-account-create-update"
Mar 12 18:46:31.315507 master-0 kubenswrapper[29097]: I0312 18:46:31.315450 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hbgqc"
Mar 12 18:46:31.324379 master-0 kubenswrapper[29097]: I0312 18:46:31.324300 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hbgqc"]
Mar 12 18:46:31.325340 master-0 kubenswrapper[29097]: I0312 18:46:31.325301 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Mar 12 18:46:31.445692 master-0 kubenswrapper[29097]: I0312 18:46:31.445616 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3917bc6f-9028-4939-9324-bec74885ac53-operator-scripts\") pod \"root-account-create-update-hbgqc\" (UID: \"3917bc6f-9028-4939-9324-bec74885ac53\") " pod="openstack/root-account-create-update-hbgqc"
Mar 12 18:46:31.445894 master-0 kubenswrapper[29097]: I0312 18:46:31.445745 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56h28\" (UniqueName: \"kubernetes.io/projected/3917bc6f-9028-4939-9324-bec74885ac53-kube-api-access-56h28\") pod \"root-account-create-update-hbgqc\" (UID: \"3917bc6f-9028-4939-9324-bec74885ac53\") " pod="openstack/root-account-create-update-hbgqc"
Mar 12 18:46:31.548140 master-0 kubenswrapper[29097]: I0312 18:46:31.548047 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3917bc6f-9028-4939-9324-bec74885ac53-operator-scripts\") pod \"root-account-create-update-hbgqc\" (UID: \"3917bc6f-9028-4939-9324-bec74885ac53\") " pod="openstack/root-account-create-update-hbgqc"
Mar 12 18:46:31.548544 master-0 kubenswrapper[29097]: I0312 18:46:31.548207 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56h28\" (UniqueName: \"kubernetes.io/projected/3917bc6f-9028-4939-9324-bec74885ac53-kube-api-access-56h28\") pod \"root-account-create-update-hbgqc\" (UID: \"3917bc6f-9028-4939-9324-bec74885ac53\") " pod="openstack/root-account-create-update-hbgqc"
Mar 12 18:46:31.549585 master-0 kubenswrapper[29097]: I0312 18:46:31.549477 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3917bc6f-9028-4939-9324-bec74885ac53-operator-scripts\") pod \"root-account-create-update-hbgqc\" (UID: \"3917bc6f-9028-4939-9324-bec74885ac53\") " pod="openstack/root-account-create-update-hbgqc"
Mar 12 18:46:31.564347 master-0 kubenswrapper[29097]: I0312 18:46:31.564248 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56h28\" (UniqueName: \"kubernetes.io/projected/3917bc6f-9028-4939-9324-bec74885ac53-kube-api-access-56h28\") pod \"root-account-create-update-hbgqc\" (UID: \"3917bc6f-9028-4939-9324-bec74885ac53\") " pod="openstack/root-account-create-update-hbgqc"
Mar 12 18:46:31.653276 master-0 kubenswrapper[29097]: I0312 18:46:31.653218 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hbgqc"
Mar 12 18:46:32.154949 master-0 kubenswrapper[29097]: I0312 18:46:32.154821 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hbgqc"]
Mar 12 18:46:32.358653 master-0 kubenswrapper[29097]: I0312 18:46:32.358279 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kvbg4"
Mar 12 18:46:32.471693 master-0 kubenswrapper[29097]: I0312 18:46:32.471649 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjsmp\" (UniqueName: \"kubernetes.io/projected/70840f07-cd5a-450b-ad71-40f71f42d2ae-kube-api-access-cjsmp\") pod \"70840f07-cd5a-450b-ad71-40f71f42d2ae\" (UID: \"70840f07-cd5a-450b-ad71-40f71f42d2ae\") "
Mar 12 18:46:32.471961 master-0 kubenswrapper[29097]: I0312 18:46:32.471935 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70840f07-cd5a-450b-ad71-40f71f42d2ae-operator-scripts\") pod \"70840f07-cd5a-450b-ad71-40f71f42d2ae\" (UID: \"70840f07-cd5a-450b-ad71-40f71f42d2ae\") "
Mar 12 18:46:32.473108 master-0 kubenswrapper[29097]: I0312 18:46:32.473074 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70840f07-cd5a-450b-ad71-40f71f42d2ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70840f07-cd5a-450b-ad71-40f71f42d2ae" (UID: "70840f07-cd5a-450b-ad71-40f71f42d2ae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:46:32.475833 master-0 kubenswrapper[29097]: I0312 18:46:32.475806 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1b8b-account-create-update-6trjd"
Mar 12 18:46:32.477151 master-0 kubenswrapper[29097]: I0312 18:46:32.477112 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70840f07-cd5a-450b-ad71-40f71f42d2ae-kube-api-access-cjsmp" (OuterVolumeSpecName: "kube-api-access-cjsmp") pod "70840f07-cd5a-450b-ad71-40f71f42d2ae" (UID: "70840f07-cd5a-450b-ad71-40f71f42d2ae"). InnerVolumeSpecName "kube-api-access-cjsmp".
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:46:32.574597 master-0 kubenswrapper[29097]: I0312 18:46:32.574527 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df47b1d0-5d7e-4e88-ab52-4936dcfa4e19-operator-scripts\") pod \"df47b1d0-5d7e-4e88-ab52-4936dcfa4e19\" (UID: \"df47b1d0-5d7e-4e88-ab52-4936dcfa4e19\") " Mar 12 18:46:32.574773 master-0 kubenswrapper[29097]: I0312 18:46:32.574708 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8brk\" (UniqueName: \"kubernetes.io/projected/df47b1d0-5d7e-4e88-ab52-4936dcfa4e19-kube-api-access-v8brk\") pod \"df47b1d0-5d7e-4e88-ab52-4936dcfa4e19\" (UID: \"df47b1d0-5d7e-4e88-ab52-4936dcfa4e19\") " Mar 12 18:46:32.575274 master-0 kubenswrapper[29097]: I0312 18:46:32.575224 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df47b1d0-5d7e-4e88-ab52-4936dcfa4e19-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df47b1d0-5d7e-4e88-ab52-4936dcfa4e19" (UID: "df47b1d0-5d7e-4e88-ab52-4936dcfa4e19"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:46:32.575308 master-0 kubenswrapper[29097]: I0312 18:46:32.575258 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70840f07-cd5a-450b-ad71-40f71f42d2ae-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:32.575356 master-0 kubenswrapper[29097]: I0312 18:46:32.575326 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjsmp\" (UniqueName: \"kubernetes.io/projected/70840f07-cd5a-450b-ad71-40f71f42d2ae-kube-api-access-cjsmp\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:32.577676 master-0 kubenswrapper[29097]: I0312 18:46:32.577620 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df47b1d0-5d7e-4e88-ab52-4936dcfa4e19-kube-api-access-v8brk" (OuterVolumeSpecName: "kube-api-access-v8brk") pod "df47b1d0-5d7e-4e88-ab52-4936dcfa4e19" (UID: "df47b1d0-5d7e-4e88-ab52-4936dcfa4e19"). InnerVolumeSpecName "kube-api-access-v8brk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:46:32.677369 master-0 kubenswrapper[29097]: I0312 18:46:32.677276 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8brk\" (UniqueName: \"kubernetes.io/projected/df47b1d0-5d7e-4e88-ab52-4936dcfa4e19-kube-api-access-v8brk\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:32.677369 master-0 kubenswrapper[29097]: I0312 18:46:32.677325 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df47b1d0-5d7e-4e88-ab52-4936dcfa4e19-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:32.735232 master-0 kubenswrapper[29097]: I0312 18:46:32.735147 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e15fa2fc-d6cd-46b4-b814-1f00c4004c49" path="/var/lib/kubelet/pods/e15fa2fc-d6cd-46b4-b814-1f00c4004c49/volumes" Mar 12 18:46:32.905884 master-0 kubenswrapper[29097]: I0312 18:46:32.905805 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kvbg4" event={"ID":"70840f07-cd5a-450b-ad71-40f71f42d2ae","Type":"ContainerDied","Data":"9857ab63d8c2d2232d5538d88f311bc037c5d0e612bb718e91a0113bd5020053"} Mar 12 18:46:32.905884 master-0 kubenswrapper[29097]: I0312 18:46:32.905862 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9857ab63d8c2d2232d5538d88f311bc037c5d0e612bb718e91a0113bd5020053" Mar 12 18:46:32.905884 master-0 kubenswrapper[29097]: I0312 18:46:32.905824 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-kvbg4" Mar 12 18:46:32.907310 master-0 kubenswrapper[29097]: I0312 18:46:32.907271 29097 generic.go:334] "Generic (PLEG): container finished" podID="0bc3215a-a09f-49fe-a3f6-050665225137" containerID="52c8b4170cf04fa4cfadf0aaa543df8a0a3f315b1a12a023e68dbcf31970150c" exitCode=0 Mar 12 18:46:32.907362 master-0 kubenswrapper[29097]: I0312 18:46:32.907341 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wk79g" event={"ID":"0bc3215a-a09f-49fe-a3f6-050665225137","Type":"ContainerDied","Data":"52c8b4170cf04fa4cfadf0aaa543df8a0a3f315b1a12a023e68dbcf31970150c"} Mar 12 18:46:32.910150 master-0 kubenswrapper[29097]: I0312 18:46:32.910107 29097 generic.go:334] "Generic (PLEG): container finished" podID="3917bc6f-9028-4939-9324-bec74885ac53" containerID="7b7dbfe4299d85eca43cf217b836e9ac9854e2a32b5c26d0191e32c428a23163" exitCode=0 Mar 12 18:46:32.910218 master-0 kubenswrapper[29097]: I0312 18:46:32.910150 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hbgqc" event={"ID":"3917bc6f-9028-4939-9324-bec74885ac53","Type":"ContainerDied","Data":"7b7dbfe4299d85eca43cf217b836e9ac9854e2a32b5c26d0191e32c428a23163"} Mar 12 18:46:32.910218 master-0 kubenswrapper[29097]: I0312 18:46:32.910178 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hbgqc" event={"ID":"3917bc6f-9028-4939-9324-bec74885ac53","Type":"ContainerStarted","Data":"2ec6f7dc04932c087fcdf171bd11f04214a845b5be972380c8997d89aa2527a1"} Mar 12 18:46:32.911816 master-0 kubenswrapper[29097]: I0312 18:46:32.911757 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1b8b-account-create-update-6trjd" event={"ID":"df47b1d0-5d7e-4e88-ab52-4936dcfa4e19","Type":"ContainerDied","Data":"9851f09d6749da0817ed7eb7802dec480c6acbd22b39e4ea5913bc7e68660343"} Mar 12 18:46:32.911816 master-0 kubenswrapper[29097]: I0312 
18:46:32.911798 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9851f09d6749da0817ed7eb7802dec480c6acbd22b39e4ea5913bc7e68660343" Mar 12 18:46:32.911970 master-0 kubenswrapper[29097]: I0312 18:46:32.911929 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1b8b-account-create-update-6trjd" Mar 12 18:46:33.121931 master-0 kubenswrapper[29097]: I0312 18:46:33.121848 29097 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76f498f559-tw8sw" podUID="e1fa9220-74d6-4409-9db2-822727793acf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.182:5353: i/o timeout" Mar 12 18:46:34.010936 master-0 kubenswrapper[29097]: I0312 18:46:34.010653 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/066d07a5-82a7-49a5-b345-203a1ee212f0-etc-swift\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:34.017589 master-0 kubenswrapper[29097]: I0312 18:46:34.017438 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/066d07a5-82a7-49a5-b345-203a1ee212f0-etc-swift\") pod \"swift-storage-0\" (UID: \"066d07a5-82a7-49a5-b345-203a1ee212f0\") " pod="openstack/swift-storage-0" Mar 12 18:46:34.051536 master-0 kubenswrapper[29097]: I0312 18:46:34.051459 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 12 18:46:34.133378 master-0 kubenswrapper[29097]: I0312 18:46:34.133337 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-cmn4b"] Mar 12 18:46:34.133991 master-0 kubenswrapper[29097]: E0312 18:46:34.133964 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df47b1d0-5d7e-4e88-ab52-4936dcfa4e19" containerName="mariadb-account-create-update" Mar 12 18:46:34.134067 master-0 kubenswrapper[29097]: I0312 18:46:34.134047 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="df47b1d0-5d7e-4e88-ab52-4936dcfa4e19" containerName="mariadb-account-create-update" Mar 12 18:46:34.134150 master-0 kubenswrapper[29097]: E0312 18:46:34.134140 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70840f07-cd5a-450b-ad71-40f71f42d2ae" containerName="mariadb-database-create" Mar 12 18:46:34.134202 master-0 kubenswrapper[29097]: I0312 18:46:34.134193 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="70840f07-cd5a-450b-ad71-40f71f42d2ae" containerName="mariadb-database-create" Mar 12 18:46:34.134472 master-0 kubenswrapper[29097]: I0312 18:46:34.134453 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="70840f07-cd5a-450b-ad71-40f71f42d2ae" containerName="mariadb-database-create" Mar 12 18:46:34.134575 master-0 kubenswrapper[29097]: I0312 18:46:34.134565 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="df47b1d0-5d7e-4e88-ab52-4936dcfa4e19" containerName="mariadb-account-create-update" Mar 12 18:46:34.135297 master-0 kubenswrapper[29097]: I0312 18:46:34.135282 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-cmn4b" Mar 12 18:46:34.139756 master-0 kubenswrapper[29097]: I0312 18:46:34.139701 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-16afb-config-data" Mar 12 18:46:34.165246 master-0 kubenswrapper[29097]: I0312 18:46:34.165169 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-cmn4b"] Mar 12 18:46:34.224252 master-0 kubenswrapper[29097]: I0312 18:46:34.224167 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2557ae75-2d67-4831-ace5-a6e46d581c7f-combined-ca-bundle\") pod \"glance-db-sync-cmn4b\" (UID: \"2557ae75-2d67-4831-ace5-a6e46d581c7f\") " pod="openstack/glance-db-sync-cmn4b" Mar 12 18:46:34.224252 master-0 kubenswrapper[29097]: I0312 18:46:34.224221 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsw2b\" (UniqueName: \"kubernetes.io/projected/2557ae75-2d67-4831-ace5-a6e46d581c7f-kube-api-access-xsw2b\") pod \"glance-db-sync-cmn4b\" (UID: \"2557ae75-2d67-4831-ace5-a6e46d581c7f\") " pod="openstack/glance-db-sync-cmn4b" Mar 12 18:46:34.224537 master-0 kubenswrapper[29097]: I0312 18:46:34.224457 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2557ae75-2d67-4831-ace5-a6e46d581c7f-config-data\") pod \"glance-db-sync-cmn4b\" (UID: \"2557ae75-2d67-4831-ace5-a6e46d581c7f\") " pod="openstack/glance-db-sync-cmn4b" Mar 12 18:46:34.224600 master-0 kubenswrapper[29097]: I0312 18:46:34.224548 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2557ae75-2d67-4831-ace5-a6e46d581c7f-db-sync-config-data\") pod \"glance-db-sync-cmn4b\" (UID: 
\"2557ae75-2d67-4831-ace5-a6e46d581c7f\") " pod="openstack/glance-db-sync-cmn4b" Mar 12 18:46:34.273542 master-0 kubenswrapper[29097]: I0312 18:46:34.273453 29097 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zrpnx" podUID="a5e5d447-ad0f-45a1-9613-8be6ff16ce62" containerName="ovn-controller" probeResult="failure" output=< Mar 12 18:46:34.273542 master-0 kubenswrapper[29097]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 12 18:46:34.273542 master-0 kubenswrapper[29097]: > Mar 12 18:46:34.326044 master-0 kubenswrapper[29097]: I0312 18:46:34.325978 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2557ae75-2d67-4831-ace5-a6e46d581c7f-config-data\") pod \"glance-db-sync-cmn4b\" (UID: \"2557ae75-2d67-4831-ace5-a6e46d581c7f\") " pod="openstack/glance-db-sync-cmn4b" Mar 12 18:46:34.326243 master-0 kubenswrapper[29097]: I0312 18:46:34.326063 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2557ae75-2d67-4831-ace5-a6e46d581c7f-db-sync-config-data\") pod \"glance-db-sync-cmn4b\" (UID: \"2557ae75-2d67-4831-ace5-a6e46d581c7f\") " pod="openstack/glance-db-sync-cmn4b" Mar 12 18:46:34.326243 master-0 kubenswrapper[29097]: I0312 18:46:34.326150 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2557ae75-2d67-4831-ace5-a6e46d581c7f-combined-ca-bundle\") pod \"glance-db-sync-cmn4b\" (UID: \"2557ae75-2d67-4831-ace5-a6e46d581c7f\") " pod="openstack/glance-db-sync-cmn4b" Mar 12 18:46:34.326243 master-0 kubenswrapper[29097]: I0312 18:46:34.326176 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsw2b\" (UniqueName: 
\"kubernetes.io/projected/2557ae75-2d67-4831-ace5-a6e46d581c7f-kube-api-access-xsw2b\") pod \"glance-db-sync-cmn4b\" (UID: \"2557ae75-2d67-4831-ace5-a6e46d581c7f\") " pod="openstack/glance-db-sync-cmn4b" Mar 12 18:46:34.330366 master-0 kubenswrapper[29097]: I0312 18:46:34.330340 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2557ae75-2d67-4831-ace5-a6e46d581c7f-combined-ca-bundle\") pod \"glance-db-sync-cmn4b\" (UID: \"2557ae75-2d67-4831-ace5-a6e46d581c7f\") " pod="openstack/glance-db-sync-cmn4b" Mar 12 18:46:34.330681 master-0 kubenswrapper[29097]: I0312 18:46:34.330607 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2557ae75-2d67-4831-ace5-a6e46d581c7f-db-sync-config-data\") pod \"glance-db-sync-cmn4b\" (UID: \"2557ae75-2d67-4831-ace5-a6e46d581c7f\") " pod="openstack/glance-db-sync-cmn4b" Mar 12 18:46:34.330982 master-0 kubenswrapper[29097]: I0312 18:46:34.330962 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2557ae75-2d67-4831-ace5-a6e46d581c7f-config-data\") pod \"glance-db-sync-cmn4b\" (UID: \"2557ae75-2d67-4831-ace5-a6e46d581c7f\") " pod="openstack/glance-db-sync-cmn4b" Mar 12 18:46:34.344481 master-0 kubenswrapper[29097]: I0312 18:46:34.344375 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsw2b\" (UniqueName: \"kubernetes.io/projected/2557ae75-2d67-4831-ace5-a6e46d581c7f-kube-api-access-xsw2b\") pod \"glance-db-sync-cmn4b\" (UID: \"2557ae75-2d67-4831-ace5-a6e46d581c7f\") " pod="openstack/glance-db-sync-cmn4b" Mar 12 18:46:34.415810 master-0 kubenswrapper[29097]: I0312 18:46:34.415440 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hbgqc" Mar 12 18:46:34.510985 master-0 kubenswrapper[29097]: I0312 18:46:34.504784 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cmn4b" Mar 12 18:46:34.542500 master-0 kubenswrapper[29097]: I0312 18:46:34.542230 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3917bc6f-9028-4939-9324-bec74885ac53-operator-scripts\") pod \"3917bc6f-9028-4939-9324-bec74885ac53\" (UID: \"3917bc6f-9028-4939-9324-bec74885ac53\") " Mar 12 18:46:34.542500 master-0 kubenswrapper[29097]: I0312 18:46:34.542333 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56h28\" (UniqueName: \"kubernetes.io/projected/3917bc6f-9028-4939-9324-bec74885ac53-kube-api-access-56h28\") pod \"3917bc6f-9028-4939-9324-bec74885ac53\" (UID: \"3917bc6f-9028-4939-9324-bec74885ac53\") " Mar 12 18:46:34.542792 master-0 kubenswrapper[29097]: I0312 18:46:34.542731 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3917bc6f-9028-4939-9324-bec74885ac53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3917bc6f-9028-4939-9324-bec74885ac53" (UID: "3917bc6f-9028-4939-9324-bec74885ac53"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:46:34.543146 master-0 kubenswrapper[29097]: I0312 18:46:34.543120 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3917bc6f-9028-4939-9324-bec74885ac53-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:34.545091 master-0 kubenswrapper[29097]: I0312 18:46:34.545046 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3917bc6f-9028-4939-9324-bec74885ac53-kube-api-access-56h28" (OuterVolumeSpecName: "kube-api-access-56h28") pod "3917bc6f-9028-4939-9324-bec74885ac53" (UID: "3917bc6f-9028-4939-9324-bec74885ac53"). InnerVolumeSpecName "kube-api-access-56h28". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:46:34.566770 master-0 kubenswrapper[29097]: I0312 18:46:34.566706 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wk79g" Mar 12 18:46:34.638894 master-0 kubenswrapper[29097]: I0312 18:46:34.637285 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 12 18:46:34.644975 master-0 kubenswrapper[29097]: I0312 18:46:34.644902 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0bc3215a-a09f-49fe-a3f6-050665225137-swiftconf\") pod \"0bc3215a-a09f-49fe-a3f6-050665225137\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " Mar 12 18:46:34.645132 master-0 kubenswrapper[29097]: I0312 18:46:34.645094 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc3215a-a09f-49fe-a3f6-050665225137-combined-ca-bundle\") pod \"0bc3215a-a09f-49fe-a3f6-050665225137\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " Mar 12 18:46:34.645725 master-0 kubenswrapper[29097]: I0312 18:46:34.645360 
29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0bc3215a-a09f-49fe-a3f6-050665225137-dispersionconf\") pod \"0bc3215a-a09f-49fe-a3f6-050665225137\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " Mar 12 18:46:34.645814 master-0 kubenswrapper[29097]: I0312 18:46:34.645687 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zt58\" (UniqueName: \"kubernetes.io/projected/0bc3215a-a09f-49fe-a3f6-050665225137-kube-api-access-8zt58\") pod \"0bc3215a-a09f-49fe-a3f6-050665225137\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " Mar 12 18:46:34.645879 master-0 kubenswrapper[29097]: I0312 18:46:34.645860 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0bc3215a-a09f-49fe-a3f6-050665225137-ring-data-devices\") pod \"0bc3215a-a09f-49fe-a3f6-050665225137\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " Mar 12 18:46:34.645976 master-0 kubenswrapper[29097]: I0312 18:46:34.645933 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0bc3215a-a09f-49fe-a3f6-050665225137-etc-swift\") pod \"0bc3215a-a09f-49fe-a3f6-050665225137\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " Mar 12 18:46:34.646061 master-0 kubenswrapper[29097]: I0312 18:46:34.646036 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bc3215a-a09f-49fe-a3f6-050665225137-scripts\") pod \"0bc3215a-a09f-49fe-a3f6-050665225137\" (UID: \"0bc3215a-a09f-49fe-a3f6-050665225137\") " Mar 12 18:46:34.646694 master-0 kubenswrapper[29097]: I0312 18:46:34.646637 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bc3215a-a09f-49fe-a3f6-050665225137-ring-data-devices" 
(OuterVolumeSpecName: "ring-data-devices") pod "0bc3215a-a09f-49fe-a3f6-050665225137" (UID: "0bc3215a-a09f-49fe-a3f6-050665225137"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:46:34.647040 master-0 kubenswrapper[29097]: I0312 18:46:34.647005 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56h28\" (UniqueName: \"kubernetes.io/projected/3917bc6f-9028-4939-9324-bec74885ac53-kube-api-access-56h28\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:34.647040 master-0 kubenswrapper[29097]: I0312 18:46:34.647027 29097 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0bc3215a-a09f-49fe-a3f6-050665225137-ring-data-devices\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:34.647211 master-0 kubenswrapper[29097]: I0312 18:46:34.647131 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bc3215a-a09f-49fe-a3f6-050665225137-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0bc3215a-a09f-49fe-a3f6-050665225137" (UID: "0bc3215a-a09f-49fe-a3f6-050665225137"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:46:34.659708 master-0 kubenswrapper[29097]: I0312 18:46:34.659642 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bc3215a-a09f-49fe-a3f6-050665225137-kube-api-access-8zt58" (OuterVolumeSpecName: "kube-api-access-8zt58") pod "0bc3215a-a09f-49fe-a3f6-050665225137" (UID: "0bc3215a-a09f-49fe-a3f6-050665225137"). InnerVolumeSpecName "kube-api-access-8zt58". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:46:34.664696 master-0 kubenswrapper[29097]: I0312 18:46:34.664651 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc3215a-a09f-49fe-a3f6-050665225137-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "0bc3215a-a09f-49fe-a3f6-050665225137" (UID: "0bc3215a-a09f-49fe-a3f6-050665225137"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:46:34.673459 master-0 kubenswrapper[29097]: I0312 18:46:34.673416 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bc3215a-a09f-49fe-a3f6-050665225137-scripts" (OuterVolumeSpecName: "scripts") pod "0bc3215a-a09f-49fe-a3f6-050665225137" (UID: "0bc3215a-a09f-49fe-a3f6-050665225137"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:46:34.678064 master-0 kubenswrapper[29097]: I0312 18:46:34.678010 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc3215a-a09f-49fe-a3f6-050665225137-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "0bc3215a-a09f-49fe-a3f6-050665225137" (UID: "0bc3215a-a09f-49fe-a3f6-050665225137"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:46:34.695710 master-0 kubenswrapper[29097]: I0312 18:46:34.695470 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc3215a-a09f-49fe-a3f6-050665225137-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bc3215a-a09f-49fe-a3f6-050665225137" (UID: "0bc3215a-a09f-49fe-a3f6-050665225137"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:46:34.749152 master-0 kubenswrapper[29097]: I0312 18:46:34.749119 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zt58\" (UniqueName: \"kubernetes.io/projected/0bc3215a-a09f-49fe-a3f6-050665225137-kube-api-access-8zt58\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:34.749338 master-0 kubenswrapper[29097]: I0312 18:46:34.749326 29097 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0bc3215a-a09f-49fe-a3f6-050665225137-etc-swift\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:34.749401 master-0 kubenswrapper[29097]: I0312 18:46:34.749391 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bc3215a-a09f-49fe-a3f6-050665225137-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:34.749599 master-0 kubenswrapper[29097]: I0312 18:46:34.749587 29097 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0bc3215a-a09f-49fe-a3f6-050665225137-swiftconf\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:34.749682 master-0 kubenswrapper[29097]: I0312 18:46:34.749671 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc3215a-a09f-49fe-a3f6-050665225137-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:34.749862 master-0 kubenswrapper[29097]: I0312 18:46:34.749849 29097 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0bc3215a-a09f-49fe-a3f6-050665225137-dispersionconf\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:34.942760 master-0 kubenswrapper[29097]: I0312 18:46:34.942700 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wk79g"
Mar 12 18:46:34.943007 master-0 kubenswrapper[29097]: I0312 18:46:34.942688 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wk79g" event={"ID":"0bc3215a-a09f-49fe-a3f6-050665225137","Type":"ContainerDied","Data":"491f471c5783797f24697ad74de686f20ee8a8bb8fe50d81fb6332ddafa859ee"}
Mar 12 18:46:34.943007 master-0 kubenswrapper[29097]: I0312 18:46:34.942875 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="491f471c5783797f24697ad74de686f20ee8a8bb8fe50d81fb6332ddafa859ee"
Mar 12 18:46:34.945767 master-0 kubenswrapper[29097]: I0312 18:46:34.945712 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hbgqc" event={"ID":"3917bc6f-9028-4939-9324-bec74885ac53","Type":"ContainerDied","Data":"2ec6f7dc04932c087fcdf171bd11f04214a845b5be972380c8997d89aa2527a1"}
Mar 12 18:46:34.945844 master-0 kubenswrapper[29097]: I0312 18:46:34.945766 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ec6f7dc04932c087fcdf171bd11f04214a845b5be972380c8997d89aa2527a1"
Mar 12 18:46:34.945844 master-0 kubenswrapper[29097]: I0312 18:46:34.945741 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hbgqc"
Mar 12 18:46:34.947384 master-0 kubenswrapper[29097]: I0312 18:46:34.947311 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"066d07a5-82a7-49a5-b345-203a1ee212f0","Type":"ContainerStarted","Data":"ddece9f7b77ff8720102cab8968d4cfaeff38e1844f63c4c6a5d56df4723c7d0"}
Mar 12 18:46:35.133134 master-0 kubenswrapper[29097]: I0312 18:46:35.133070 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-cmn4b"]
Mar 12 18:46:35.144678 master-0 kubenswrapper[29097]: W0312 18:46:35.144615 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2557ae75_2d67_4831_ace5_a6e46d581c7f.slice/crio-f486d1686b6be42df22d82652b69219c593502208692a4ef111c0454c3aa1daf WatchSource:0}: Error finding container f486d1686b6be42df22d82652b69219c593502208692a4ef111c0454c3aa1daf: Status 404 returned error can't find the container with id f486d1686b6be42df22d82652b69219c593502208692a4ef111c0454c3aa1daf
Mar 12 18:46:35.526227 master-0 kubenswrapper[29097]: I0312 18:46:35.525204 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Mar 12 18:46:35.969545 master-0 kubenswrapper[29097]: I0312 18:46:35.969243 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cmn4b" event={"ID":"2557ae75-2d67-4831-ace5-a6e46d581c7f","Type":"ContainerStarted","Data":"f486d1686b6be42df22d82652b69219c593502208692a4ef111c0454c3aa1daf"}
Mar 12 18:46:35.971190 master-0 kubenswrapper[29097]: I0312 18:46:35.971143 29097 generic.go:334] "Generic (PLEG): container finished" podID="cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b" containerID="472e017aef95ca4afce2e32079616d028e66de14f46787ba30b363aa1398ef8c" exitCode=0
Mar 12 18:46:35.971291 master-0 kubenswrapper[29097]: I0312 18:46:35.971267 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b","Type":"ContainerDied","Data":"472e017aef95ca4afce2e32079616d028e66de14f46787ba30b363aa1398ef8c"}
Mar 12 18:46:36.986670 master-0 kubenswrapper[29097]: I0312 18:46:36.986557 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b","Type":"ContainerStarted","Data":"0eb6e483b58984c553451723b0fafbb931e8b0bed49a80dc89b59ac059d63702"}
Mar 12 18:46:36.987110 master-0 kubenswrapper[29097]: I0312 18:46:36.986763 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Mar 12 18:46:36.997445 master-0 kubenswrapper[29097]: I0312 18:46:36.997380 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"066d07a5-82a7-49a5-b345-203a1ee212f0","Type":"ContainerStarted","Data":"f128a1fa9ebf0b7b58a4f8a485f6d84bb5edd52eba3cd2eeb8434cd76dcebbf6"}
Mar 12 18:46:36.997445 master-0 kubenswrapper[29097]: I0312 18:46:36.997443 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"066d07a5-82a7-49a5-b345-203a1ee212f0","Type":"ContainerStarted","Data":"aadf82489535563ae18d55321f3bf50653a0d2dbc3e186c8ec5966c128ebb076"}
Mar 12 18:46:36.997633 master-0 kubenswrapper[29097]: I0312 18:46:36.997461 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"066d07a5-82a7-49a5-b345-203a1ee212f0","Type":"ContainerStarted","Data":"913d34e20571a907cc7c8227d026a90817c32e63a214c6df17942546ac50eb7f"}
Mar 12 18:46:36.997633 master-0 kubenswrapper[29097]: I0312 18:46:36.997472 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"066d07a5-82a7-49a5-b345-203a1ee212f0","Type":"ContainerStarted","Data":"5612835801de1ea246bcbb63ac39d00f09b94093bf9e6bd586b27b70ed42a438"}
Mar 12 18:46:37.012863 master-0 kubenswrapper[29097]: I0312 18:46:37.012609 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hbgqc"]
Mar 12 18:46:37.032557 master-0 kubenswrapper[29097]: I0312 18:46:37.030470 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hbgqc"]
Mar 12 18:46:37.042584 master-0 kubenswrapper[29097]: I0312 18:46:37.038346 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=56.233276609 podStartE2EDuration="1m6.038326493s" podCreationTimestamp="2026-03-12 18:45:31 +0000 UTC" firstStartedPulling="2026-03-12 18:45:48.284398528 +0000 UTC m=+987.838378625" lastFinishedPulling="2026-03-12 18:45:58.089448412 +0000 UTC m=+997.643428509" observedRunningTime="2026-03-12 18:46:37.018259672 +0000 UTC m=+1036.572239769" watchObservedRunningTime="2026-03-12 18:46:37.038326493 +0000 UTC m=+1036.592306610"
Mar 12 18:46:38.735673 master-0 kubenswrapper[29097]: I0312 18:46:38.735607 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3917bc6f-9028-4939-9324-bec74885ac53" path="/var/lib/kubelet/pods/3917bc6f-9028-4939-9324-bec74885ac53/volumes"
Mar 12 18:46:39.024065 master-0 kubenswrapper[29097]: I0312 18:46:39.023957 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"066d07a5-82a7-49a5-b345-203a1ee212f0","Type":"ContainerStarted","Data":"31ddc94822e7ae4a197e6b463b456dfb462cb5c86d36b5e17ba273d9b398b698"}
Mar 12 18:46:39.024065 master-0 kubenswrapper[29097]: I0312 18:46:39.024020 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"066d07a5-82a7-49a5-b345-203a1ee212f0","Type":"ContainerStarted","Data":"93f40d0b4648497e25282c70b91799b2e5445f26d30f05add733a5ffc0e54b2d"}
Mar 12 18:46:39.275414 master-0 kubenswrapper[29097]: I0312 18:46:39.275367 29097 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zrpnx" podUID="a5e5d447-ad0f-45a1-9613-8be6ff16ce62" containerName="ovn-controller" probeResult="failure" output=<
Mar 12 18:46:39.275414 master-0 kubenswrapper[29097]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Mar 12 18:46:39.275414 master-0 kubenswrapper[29097]: >
Mar 12 18:46:39.330456 master-0 kubenswrapper[29097]: I0312 18:46:39.329651 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-8vq5w"
Mar 12 18:46:39.331192 master-0 kubenswrapper[29097]: I0312 18:46:39.331149 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-8vq5w"
Mar 12 18:46:39.792490 master-0 kubenswrapper[29097]: I0312 18:46:39.792438 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zrpnx-config-z79x2"]
Mar 12 18:46:39.793095 master-0 kubenswrapper[29097]: E0312 18:46:39.793067 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3917bc6f-9028-4939-9324-bec74885ac53" containerName="mariadb-account-create-update"
Mar 12 18:46:39.793154 master-0 kubenswrapper[29097]: I0312 18:46:39.793095 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="3917bc6f-9028-4939-9324-bec74885ac53" containerName="mariadb-account-create-update"
Mar 12 18:46:39.793154 master-0 kubenswrapper[29097]: E0312 18:46:39.793145 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bc3215a-a09f-49fe-a3f6-050665225137" containerName="swift-ring-rebalance"
Mar 12 18:46:39.793217 master-0 kubenswrapper[29097]: I0312 18:46:39.793156 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc3215a-a09f-49fe-a3f6-050665225137" containerName="swift-ring-rebalance"
Mar 12 18:46:39.795036 master-0 kubenswrapper[29097]: I0312 18:46:39.793458 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bc3215a-a09f-49fe-a3f6-050665225137" containerName="swift-ring-rebalance"
Mar 12 18:46:39.795036 master-0 kubenswrapper[29097]: I0312 18:46:39.793535 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="3917bc6f-9028-4939-9324-bec74885ac53" containerName="mariadb-account-create-update"
Mar 12 18:46:39.795036 master-0 kubenswrapper[29097]: I0312 18:46:39.794390 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zrpnx-config-z79x2"
Mar 12 18:46:39.798879 master-0 kubenswrapper[29097]: I0312 18:46:39.798840 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Mar 12 18:46:39.808536 master-0 kubenswrapper[29097]: I0312 18:46:39.806980 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zrpnx-config-z79x2"]
Mar 12 18:46:39.902606 master-0 kubenswrapper[29097]: I0312 18:46:39.902557 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fba29002-7a45-4c61-ad7c-ee1634e066f9-var-log-ovn\") pod \"ovn-controller-zrpnx-config-z79x2\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") " pod="openstack/ovn-controller-zrpnx-config-z79x2"
Mar 12 18:46:39.902914 master-0 kubenswrapper[29097]: I0312 18:46:39.902898 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fba29002-7a45-4c61-ad7c-ee1634e066f9-var-run\") pod \"ovn-controller-zrpnx-config-z79x2\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") " pod="openstack/ovn-controller-zrpnx-config-z79x2"
Mar 12 18:46:39.903015 master-0 kubenswrapper[29097]: I0312 18:46:39.903003 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fba29002-7a45-4c61-ad7c-ee1634e066f9-scripts\") pod \"ovn-controller-zrpnx-config-z79x2\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") " pod="openstack/ovn-controller-zrpnx-config-z79x2"
Mar 12 18:46:39.903174 master-0 kubenswrapper[29097]: I0312 18:46:39.903158 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqqvx\" (UniqueName: \"kubernetes.io/projected/fba29002-7a45-4c61-ad7c-ee1634e066f9-kube-api-access-bqqvx\") pod \"ovn-controller-zrpnx-config-z79x2\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") " pod="openstack/ovn-controller-zrpnx-config-z79x2"
Mar 12 18:46:39.903258 master-0 kubenswrapper[29097]: I0312 18:46:39.903245 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fba29002-7a45-4c61-ad7c-ee1634e066f9-var-run-ovn\") pod \"ovn-controller-zrpnx-config-z79x2\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") " pod="openstack/ovn-controller-zrpnx-config-z79x2"
Mar 12 18:46:39.903366 master-0 kubenswrapper[29097]: I0312 18:46:39.903351 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fba29002-7a45-4c61-ad7c-ee1634e066f9-additional-scripts\") pod \"ovn-controller-zrpnx-config-z79x2\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") " pod="openstack/ovn-controller-zrpnx-config-z79x2"
Mar 12 18:46:40.011655 master-0 kubenswrapper[29097]: I0312 18:46:40.011601 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqqvx\" (UniqueName: \"kubernetes.io/projected/fba29002-7a45-4c61-ad7c-ee1634e066f9-kube-api-access-bqqvx\") pod \"ovn-controller-zrpnx-config-z79x2\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") " pod="openstack/ovn-controller-zrpnx-config-z79x2"
Mar 12 18:46:40.011929 master-0 kubenswrapper[29097]: I0312 18:46:40.011911 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fba29002-7a45-4c61-ad7c-ee1634e066f9-var-run-ovn\") pod \"ovn-controller-zrpnx-config-z79x2\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") " pod="openstack/ovn-controller-zrpnx-config-z79x2"
Mar 12 18:46:40.012020 master-0 kubenswrapper[29097]: I0312 18:46:40.012007 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fba29002-7a45-4c61-ad7c-ee1634e066f9-additional-scripts\") pod \"ovn-controller-zrpnx-config-z79x2\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") " pod="openstack/ovn-controller-zrpnx-config-z79x2"
Mar 12 18:46:40.012138 master-0 kubenswrapper[29097]: I0312 18:46:40.012095 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fba29002-7a45-4c61-ad7c-ee1634e066f9-var-run-ovn\") pod \"ovn-controller-zrpnx-config-z79x2\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") " pod="openstack/ovn-controller-zrpnx-config-z79x2"
Mar 12 18:46:40.012191 master-0 kubenswrapper[29097]: I0312 18:46:40.012118 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fba29002-7a45-4c61-ad7c-ee1634e066f9-var-log-ovn\") pod \"ovn-controller-zrpnx-config-z79x2\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") " pod="openstack/ovn-controller-zrpnx-config-z79x2"
Mar 12 18:46:40.012315 master-0 kubenswrapper[29097]: I0312 18:46:40.012300 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fba29002-7a45-4c61-ad7c-ee1634e066f9-var-log-ovn\") pod \"ovn-controller-zrpnx-config-z79x2\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") " pod="openstack/ovn-controller-zrpnx-config-z79x2"
Mar 12 18:46:40.012438 master-0 kubenswrapper[29097]: I0312 18:46:40.012414 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fba29002-7a45-4c61-ad7c-ee1634e066f9-var-run\") pod \"ovn-controller-zrpnx-config-z79x2\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") " pod="openstack/ovn-controller-zrpnx-config-z79x2"
Mar 12 18:46:40.012487 master-0 kubenswrapper[29097]: I0312 18:46:40.012455 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fba29002-7a45-4c61-ad7c-ee1634e066f9-scripts\") pod \"ovn-controller-zrpnx-config-z79x2\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") " pod="openstack/ovn-controller-zrpnx-config-z79x2"
Mar 12 18:46:40.012877 master-0 kubenswrapper[29097]: I0312 18:46:40.012839 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fba29002-7a45-4c61-ad7c-ee1634e066f9-var-run\") pod \"ovn-controller-zrpnx-config-z79x2\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") " pod="openstack/ovn-controller-zrpnx-config-z79x2"
Mar 12 18:46:40.013251 master-0 kubenswrapper[29097]: I0312 18:46:40.013236 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fba29002-7a45-4c61-ad7c-ee1634e066f9-additional-scripts\") pod \"ovn-controller-zrpnx-config-z79x2\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") " pod="openstack/ovn-controller-zrpnx-config-z79x2"
Mar 12 18:46:40.017532 master-0 kubenswrapper[29097]: I0312 18:46:40.016148 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fba29002-7a45-4c61-ad7c-ee1634e066f9-scripts\") pod \"ovn-controller-zrpnx-config-z79x2\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") " pod="openstack/ovn-controller-zrpnx-config-z79x2"
Mar 12 18:46:40.034328 master-0 kubenswrapper[29097]: I0312 18:46:40.034281 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqqvx\" (UniqueName: \"kubernetes.io/projected/fba29002-7a45-4c61-ad7c-ee1634e066f9-kube-api-access-bqqvx\") pod \"ovn-controller-zrpnx-config-z79x2\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") " pod="openstack/ovn-controller-zrpnx-config-z79x2"
Mar 12 18:46:40.072718 master-0 kubenswrapper[29097]: I0312 18:46:40.072612 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"066d07a5-82a7-49a5-b345-203a1ee212f0","Type":"ContainerStarted","Data":"a44b9970de5e66631cfa05a4833b4cb3254432a33fb614f9a86b871a00b84ceb"}
Mar 12 18:46:40.072943 master-0 kubenswrapper[29097]: I0312 18:46:40.072929 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"066d07a5-82a7-49a5-b345-203a1ee212f0","Type":"ContainerStarted","Data":"a5727446c6df7a041f6377bcb99eb2b3efc67e7dc3343f8de7b2a01ff796fdc5"}
Mar 12 18:46:40.126325 master-0 kubenswrapper[29097]: I0312 18:46:40.126271 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zrpnx-config-z79x2"
Mar 12 18:46:40.654156 master-0 kubenswrapper[29097]: I0312 18:46:40.654097 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zrpnx-config-z79x2"]
Mar 12 18:46:41.104691 master-0 kubenswrapper[29097]: I0312 18:46:41.099526 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zrpnx-config-z79x2" event={"ID":"fba29002-7a45-4c61-ad7c-ee1634e066f9","Type":"ContainerStarted","Data":"ccf0c55267877f2f7a6f54dcb6c97b8f81aec56256d31ba2e7fe43a7f9efa974"}
Mar 12 18:46:42.121913 master-0 kubenswrapper[29097]: I0312 18:46:42.121831 29097 generic.go:334] "Generic (PLEG): container finished" podID="fba29002-7a45-4c61-ad7c-ee1634e066f9" containerID="b934ab3a162ec3d9c46499ca1e821d7622fd38493f58177d7a80ea1bdc0c0725" exitCode=0
Mar 12 18:46:42.122966 master-0 kubenswrapper[29097]: I0312 18:46:42.121951 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zrpnx-config-z79x2" event={"ID":"fba29002-7a45-4c61-ad7c-ee1634e066f9","Type":"ContainerDied","Data":"b934ab3a162ec3d9c46499ca1e821d7622fd38493f58177d7a80ea1bdc0c0725"}
Mar 12 18:46:42.129811 master-0 kubenswrapper[29097]: I0312 18:46:42.129737 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"066d07a5-82a7-49a5-b345-203a1ee212f0","Type":"ContainerStarted","Data":"8809c9af0bf1dd75ac5d1877f9acdd59b3c6be90dd527de560b4195373494a0c"}
Mar 12 18:46:42.130020 master-0 kubenswrapper[29097]: I0312 18:46:42.130004 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"066d07a5-82a7-49a5-b345-203a1ee212f0","Type":"ContainerStarted","Data":"9ad35e81f4b04182c473db77e52202acd5d56b7bd935f526978bc3d19359fc16"}
Mar 12 18:46:42.130110 master-0 kubenswrapper[29097]: I0312 18:46:42.130097 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"066d07a5-82a7-49a5-b345-203a1ee212f0","Type":"ContainerStarted","Data":"7dde4645d7abc98becb330b1d06580e41ed9b56662d3320ced4cd75e63beecfa"}
Mar 12 18:46:42.130195 master-0 kubenswrapper[29097]: I0312 18:46:42.130183 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"066d07a5-82a7-49a5-b345-203a1ee212f0","Type":"ContainerStarted","Data":"c05fbdca8fe1e6e641df2b2bd81cbbaa1c95fa5b2a1bf11244b06d3795c9a520"}
Mar 12 18:46:42.132218 master-0 kubenswrapper[29097]: I0312 18:46:42.132187 29097 generic.go:334] "Generic (PLEG): container finished" podID="49290c2f-177f-4a5e-8e1e-cf105e962c5b" containerID="b7c613ef0a8dcf96aa944c9cda7969d6d177e80fd85249a6b2296a24c402b59e" exitCode=0
Mar 12 18:46:42.132297 master-0 kubenswrapper[29097]: I0312 18:46:42.132229 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49290c2f-177f-4a5e-8e1e-cf105e962c5b","Type":"ContainerDied","Data":"b7c613ef0a8dcf96aa944c9cda7969d6d177e80fd85249a6b2296a24c402b59e"}
Mar 12 18:46:42.798594 master-0 kubenswrapper[29097]: I0312 18:46:42.798532 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-8clsj"]
Mar 12 18:46:42.800659 master-0 kubenswrapper[29097]: I0312 18:46:42.800035 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8clsj"
Mar 12 18:46:42.804739 master-0 kubenswrapper[29097]: I0312 18:46:42.803352 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Mar 12 18:46:42.866598 master-0 kubenswrapper[29097]: I0312 18:46:42.864288 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9be4e8e-96b5-424a-9137-b468887ed037-operator-scripts\") pod \"root-account-create-update-8clsj\" (UID: \"f9be4e8e-96b5-424a-9137-b468887ed037\") " pod="openstack/root-account-create-update-8clsj"
Mar 12 18:46:42.866598 master-0 kubenswrapper[29097]: I0312 18:46:42.864387 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pp64\" (UniqueName: \"kubernetes.io/projected/f9be4e8e-96b5-424a-9137-b468887ed037-kube-api-access-5pp64\") pod \"root-account-create-update-8clsj\" (UID: \"f9be4e8e-96b5-424a-9137-b468887ed037\") " pod="openstack/root-account-create-update-8clsj"
Mar 12 18:46:42.881650 master-0 kubenswrapper[29097]: I0312 18:46:42.881590 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8clsj"]
Mar 12 18:46:42.967763 master-0 kubenswrapper[29097]: I0312 18:46:42.967714 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pp64\" (UniqueName: \"kubernetes.io/projected/f9be4e8e-96b5-424a-9137-b468887ed037-kube-api-access-5pp64\") pod \"root-account-create-update-8clsj\" (UID: \"f9be4e8e-96b5-424a-9137-b468887ed037\") " pod="openstack/root-account-create-update-8clsj"
Mar 12 18:46:42.969260 master-0 kubenswrapper[29097]: I0312 18:46:42.969200 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9be4e8e-96b5-424a-9137-b468887ed037-operator-scripts\") pod \"root-account-create-update-8clsj\" (UID: \"f9be4e8e-96b5-424a-9137-b468887ed037\") " pod="openstack/root-account-create-update-8clsj"
Mar 12 18:46:42.970250 master-0 kubenswrapper[29097]: I0312 18:46:42.970218 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9be4e8e-96b5-424a-9137-b468887ed037-operator-scripts\") pod \"root-account-create-update-8clsj\" (UID: \"f9be4e8e-96b5-424a-9137-b468887ed037\") " pod="openstack/root-account-create-update-8clsj"
Mar 12 18:46:42.985301 master-0 kubenswrapper[29097]: I0312 18:46:42.985253 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pp64\" (UniqueName: \"kubernetes.io/projected/f9be4e8e-96b5-424a-9137-b468887ed037-kube-api-access-5pp64\") pod \"root-account-create-update-8clsj\" (UID: \"f9be4e8e-96b5-424a-9137-b468887ed037\") " pod="openstack/root-account-create-update-8clsj"
Mar 12 18:46:43.153557 master-0 kubenswrapper[29097]: I0312 18:46:43.153214 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8clsj"
Mar 12 18:46:43.155310 master-0 kubenswrapper[29097]: I0312 18:46:43.154468 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49290c2f-177f-4a5e-8e1e-cf105e962c5b","Type":"ContainerStarted","Data":"576b827c90051888b47c142e86461ece5d61eb8eff6d14ca0a85724602cb49e0"}
Mar 12 18:46:43.156489 master-0 kubenswrapper[29097]: I0312 18:46:43.155949 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Mar 12 18:46:43.163272 master-0 kubenswrapper[29097]: I0312 18:46:43.163204 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"066d07a5-82a7-49a5-b345-203a1ee212f0","Type":"ContainerStarted","Data":"ddb7698e4586563f535d5f0a40c3f86aa5e6a556e78eacfe11dcb7ca2e5d8a41"}
Mar 12 18:46:43.279538 master-0 kubenswrapper[29097]: I0312 18:46:43.268598 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=62.867712177 podStartE2EDuration="1m15.268578917s" podCreationTimestamp="2026-03-12 18:45:28 +0000 UTC" firstStartedPulling="2026-03-12 18:45:46.681399913 +0000 UTC m=+986.235380020" lastFinishedPulling="2026-03-12 18:45:59.082266663 +0000 UTC m=+998.636246760" observedRunningTime="2026-03-12 18:46:43.262155766 +0000 UTC m=+1042.816135863" watchObservedRunningTime="2026-03-12 18:46:43.268578917 +0000 UTC m=+1042.822559014"
Mar 12 18:46:44.251118 master-0 kubenswrapper[29097]: I0312 18:46:44.251072 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-zrpnx"
Mar 12 18:46:49.835821 master-0 kubenswrapper[29097]: I0312 18:46:49.835780 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zrpnx-config-z79x2"
Mar 12 18:46:49.918221 master-0 kubenswrapper[29097]: I0312 18:46:49.918169 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fba29002-7a45-4c61-ad7c-ee1634e066f9-var-log-ovn\") pod \"fba29002-7a45-4c61-ad7c-ee1634e066f9\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") "
Mar 12 18:46:49.918439 master-0 kubenswrapper[29097]: I0312 18:46:49.918243 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqqvx\" (UniqueName: \"kubernetes.io/projected/fba29002-7a45-4c61-ad7c-ee1634e066f9-kube-api-access-bqqvx\") pod \"fba29002-7a45-4c61-ad7c-ee1634e066f9\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") "
Mar 12 18:46:49.918439 master-0 kubenswrapper[29097]: I0312 18:46:49.918317 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fba29002-7a45-4c61-ad7c-ee1634e066f9-scripts\") pod \"fba29002-7a45-4c61-ad7c-ee1634e066f9\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") "
Mar 12 18:46:49.918439 master-0 kubenswrapper[29097]: I0312 18:46:49.918362 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fba29002-7a45-4c61-ad7c-ee1634e066f9-var-run\") pod \"fba29002-7a45-4c61-ad7c-ee1634e066f9\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") "
Mar 12 18:46:49.918560 master-0 kubenswrapper[29097]: I0312 18:46:49.918513 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fba29002-7a45-4c61-ad7c-ee1634e066f9-additional-scripts\") pod \"fba29002-7a45-4c61-ad7c-ee1634e066f9\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") "
Mar 12 18:46:49.918560 master-0 kubenswrapper[29097]: I0312 18:46:49.918558 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fba29002-7a45-4c61-ad7c-ee1634e066f9-var-run-ovn\") pod \"fba29002-7a45-4c61-ad7c-ee1634e066f9\" (UID: \"fba29002-7a45-4c61-ad7c-ee1634e066f9\") "
Mar 12 18:46:49.919924 master-0 kubenswrapper[29097]: I0312 18:46:49.919895 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fba29002-7a45-4c61-ad7c-ee1634e066f9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "fba29002-7a45-4c61-ad7c-ee1634e066f9" (UID: "fba29002-7a45-4c61-ad7c-ee1634e066f9"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:46:49.919984 master-0 kubenswrapper[29097]: I0312 18:46:49.919943 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fba29002-7a45-4c61-ad7c-ee1634e066f9-var-run" (OuterVolumeSpecName: "var-run") pod "fba29002-7a45-4c61-ad7c-ee1634e066f9" (UID: "fba29002-7a45-4c61-ad7c-ee1634e066f9"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:46:49.920249 master-0 kubenswrapper[29097]: I0312 18:46:49.920182 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fba29002-7a45-4c61-ad7c-ee1634e066f9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "fba29002-7a45-4c61-ad7c-ee1634e066f9" (UID: "fba29002-7a45-4c61-ad7c-ee1634e066f9"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:46:49.920959 master-0 kubenswrapper[29097]: I0312 18:46:49.920936 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba29002-7a45-4c61-ad7c-ee1634e066f9-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "fba29002-7a45-4c61-ad7c-ee1634e066f9" (UID: "fba29002-7a45-4c61-ad7c-ee1634e066f9"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:46:49.921350 master-0 kubenswrapper[29097]: I0312 18:46:49.921295 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba29002-7a45-4c61-ad7c-ee1634e066f9-scripts" (OuterVolumeSpecName: "scripts") pod "fba29002-7a45-4c61-ad7c-ee1634e066f9" (UID: "fba29002-7a45-4c61-ad7c-ee1634e066f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:46:49.924864 master-0 kubenswrapper[29097]: I0312 18:46:49.924815 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fba29002-7a45-4c61-ad7c-ee1634e066f9-kube-api-access-bqqvx" (OuterVolumeSpecName: "kube-api-access-bqqvx") pod "fba29002-7a45-4c61-ad7c-ee1634e066f9" (UID: "fba29002-7a45-4c61-ad7c-ee1634e066f9"). InnerVolumeSpecName "kube-api-access-bqqvx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:46:50.023904 master-0 kubenswrapper[29097]: I0312 18:46:50.022610 29097 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fba29002-7a45-4c61-ad7c-ee1634e066f9-additional-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 18:46:50.023904 master-0 kubenswrapper[29097]: I0312 18:46:50.022641 29097 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fba29002-7a45-4c61-ad7c-ee1634e066f9-var-run-ovn\") on node \"master-0\" DevicePath \"\""
Mar 12 18:46:50.023904 master-0 kubenswrapper[29097]: I0312 18:46:50.022652 29097 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fba29002-7a45-4c61-ad7c-ee1634e066f9-var-log-ovn\") on node \"master-0\" DevicePath \"\""
Mar 12 18:46:50.023904 master-0 kubenswrapper[29097]: I0312 18:46:50.022663 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqqvx\" (UniqueName: \"kubernetes.io/projected/fba29002-7a45-4c61-ad7c-ee1634e066f9-kube-api-access-bqqvx\") on node \"master-0\" DevicePath \"\""
Mar 12 18:46:50.023904 master-0 kubenswrapper[29097]: I0312 18:46:50.022673 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fba29002-7a45-4c61-ad7c-ee1634e066f9-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 18:46:50.023904 master-0 kubenswrapper[29097]: I0312 18:46:50.022704 29097 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fba29002-7a45-4c61-ad7c-ee1634e066f9-var-run\") on node \"master-0\" DevicePath \"\""
Mar 12 18:46:50.234269 master-0 kubenswrapper[29097]: I0312 18:46:50.228571 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8clsj"]
Mar 12 18:46:50.234269 master-0 kubenswrapper[29097]: W0312 18:46:50.232044 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9be4e8e_96b5_424a_9137_b468887ed037.slice/crio-728d51871e600701c50760694a7cfd876e637d73ce6cf63b9c9f2ef32ef7ac66 WatchSource:0}: Error finding container 728d51871e600701c50760694a7cfd876e637d73ce6cf63b9c9f2ef32ef7ac66: Status 404 returned error can't find the container with id 728d51871e600701c50760694a7cfd876e637d73ce6cf63b9c9f2ef32ef7ac66
Mar 12 18:46:50.248064 master-0 kubenswrapper[29097]: I0312 18:46:50.248011 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8clsj" event={"ID":"f9be4e8e-96b5-424a-9137-b468887ed037","Type":"ContainerStarted","Data":"728d51871e600701c50760694a7cfd876e637d73ce6cf63b9c9f2ef32ef7ac66"}
Mar 12 18:46:50.249791 master-0 kubenswrapper[29097]: I0312 18:46:50.249760 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zrpnx-config-z79x2"
Mar 12 18:46:50.249791 master-0 kubenswrapper[29097]: I0312 18:46:50.249762 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zrpnx-config-z79x2" event={"ID":"fba29002-7a45-4c61-ad7c-ee1634e066f9","Type":"ContainerDied","Data":"ccf0c55267877f2f7a6f54dcb6c97b8f81aec56256d31ba2e7fe43a7f9efa974"}
Mar 12 18:46:50.249922 master-0 kubenswrapper[29097]: I0312 18:46:50.249813 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccf0c55267877f2f7a6f54dcb6c97b8f81aec56256d31ba2e7fe43a7f9efa974"
Mar 12 18:46:50.258770 master-0 kubenswrapper[29097]: I0312 18:46:50.258713 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"066d07a5-82a7-49a5-b345-203a1ee212f0","Type":"ContainerStarted","Data":"1d998cfa099f8ed705951478cf8fe6a771956bed712e2ec28bd36ee0de5e0f19"}
Mar 12 18:46:51.041786 master-0 kubenswrapper[29097]: I0312 18:46:51.040287 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zrpnx-config-z79x2"]
Mar 12 18:46:51.069718 master-0 kubenswrapper[29097]: I0312 18:46:51.069658 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zrpnx-config-z79x2"]
Mar 12 18:46:51.271158 master-0 kubenswrapper[29097]: I0312 18:46:51.271097 29097 generic.go:334] "Generic (PLEG): container finished" podID="f9be4e8e-96b5-424a-9137-b468887ed037" containerID="fb290217bac2cb50ffb559dce63f8cbe7b38dc3f12bd45a5bee06c3b8c3649a7" exitCode=0
Mar 12 18:46:51.271352 master-0 kubenswrapper[29097]: I0312 18:46:51.271158 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8clsj" event={"ID":"f9be4e8e-96b5-424a-9137-b468887ed037","Type":"ContainerDied","Data":"fb290217bac2cb50ffb559dce63f8cbe7b38dc3f12bd45a5bee06c3b8c3649a7"}
Mar 12 18:46:51.282845 master-0 kubenswrapper[29097]: I0312 18:46:51.282791 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"066d07a5-82a7-49a5-b345-203a1ee212f0","Type":"ContainerStarted","Data":"6cdd5ea04caf2906bfca0cd2b1017a72e89c83b7f0d2717eb1b7638a0d9a3929"}
Mar 12 18:46:51.295226 master-0 kubenswrapper[29097]: I0312 18:46:51.295063 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cmn4b" event={"ID":"2557ae75-2d67-4831-ace5-a6e46d581c7f","Type":"ContainerStarted","Data":"7ceb068952db0208ad6652dc01035fcd86997ed6692998cf299d280ffbc657aa"}
Mar 12 18:46:51.348849 master-0 kubenswrapper[29097]: I0312 18:46:51.348721 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=30.090698599 podStartE2EDuration="36.348695315s" podCreationTimestamp="2026-03-12 18:46:15 +0000 UTC" firstStartedPulling="2026-03-12 18:46:34.632643862 +0000 UTC m=+1034.186623959" lastFinishedPulling="2026-03-12 18:46:40.890640578 +0000 UTC m=+1040.444620675" observedRunningTime="2026-03-12 18:46:51.336322506 +0000 UTC m=+1050.890302663" watchObservedRunningTime="2026-03-12 18:46:51.348695315 +0000 UTC m=+1050.902675452"
Mar 12 18:46:51.382742 master-0 kubenswrapper[29097]: I0312 18:46:51.382634 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-cmn4b" podStartSLOduration=2.67867981 podStartE2EDuration="17.382615731s" podCreationTimestamp="2026-03-12 18:46:34 +0000 UTC" firstStartedPulling="2026-03-12 18:46:35.150116643 +0000 UTC m=+1034.704096740" lastFinishedPulling="2026-03-12 18:46:49.854052564 +0000 UTC m=+1049.408032661" observedRunningTime="2026-03-12 18:46:51.374628332 +0000 UTC m=+1050.928608429" watchObservedRunningTime="2026-03-12 18:46:51.382615731 +0000 UTC m=+1050.936595828"
Mar 12 18:46:51.662338 master-0 kubenswrapper[29097]: I0312 18:46:51.662163 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76986c7db5-59cjq"]
Mar 12 18:46:51.662837 master-0 kubenswrapper[29097]: E0312 18:46:51.662729 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba29002-7a45-4c61-ad7c-ee1634e066f9" containerName="ovn-config"
Mar 12 18:46:51.662837 master-0 kubenswrapper[29097]: I0312 18:46:51.662757 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba29002-7a45-4c61-ad7c-ee1634e066f9" containerName="ovn-config"
Mar 12 18:46:51.663070 master-0 kubenswrapper[29097]: I0312 18:46:51.663017 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba29002-7a45-4c61-ad7c-ee1634e066f9" containerName="ovn-config"
Mar 12 18:46:51.664386 master-0 kubenswrapper[29097]: I0312 18:46:51.664329 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76986c7db5-59cjq"
Mar 12 18:46:51.670668 master-0 kubenswrapper[29097]: I0312 18:46:51.667211 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Mar 12 18:46:51.686473 master-0 kubenswrapper[29097]: I0312 18:46:51.686430 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76986c7db5-59cjq"]
Mar 12 18:46:51.769104 master-0 kubenswrapper[29097]: I0312 18:46:51.769039 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-ovsdbserver-sb\") pod \"dnsmasq-dns-76986c7db5-59cjq\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " pod="openstack/dnsmasq-dns-76986c7db5-59cjq"
Mar 12 18:46:51.769319 master-0 kubenswrapper[29097]: I0312 18:46:51.769124 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-dns-swift-storage-0\") pod \"dnsmasq-dns-76986c7db5-59cjq\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") "
pod="openstack/dnsmasq-dns-76986c7db5-59cjq" Mar 12 18:46:51.769359 master-0 kubenswrapper[29097]: I0312 18:46:51.769301 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vpnq\" (UniqueName: \"kubernetes.io/projected/aa2283ab-4112-4a09-83e9-0d40cf04e864-kube-api-access-7vpnq\") pod \"dnsmasq-dns-76986c7db5-59cjq\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " pod="openstack/dnsmasq-dns-76986c7db5-59cjq" Mar 12 18:46:51.769439 master-0 kubenswrapper[29097]: I0312 18:46:51.769411 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-config\") pod \"dnsmasq-dns-76986c7db5-59cjq\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " pod="openstack/dnsmasq-dns-76986c7db5-59cjq" Mar 12 18:46:51.769545 master-0 kubenswrapper[29097]: I0312 18:46:51.769501 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-ovsdbserver-nb\") pod \"dnsmasq-dns-76986c7db5-59cjq\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " pod="openstack/dnsmasq-dns-76986c7db5-59cjq" Mar 12 18:46:51.769652 master-0 kubenswrapper[29097]: I0312 18:46:51.769624 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-dns-svc\") pod \"dnsmasq-dns-76986c7db5-59cjq\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " pod="openstack/dnsmasq-dns-76986c7db5-59cjq" Mar 12 18:46:51.872058 master-0 kubenswrapper[29097]: I0312 18:46:51.871990 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-config\") pod 
\"dnsmasq-dns-76986c7db5-59cjq\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " pod="openstack/dnsmasq-dns-76986c7db5-59cjq" Mar 12 18:46:51.872254 master-0 kubenswrapper[29097]: I0312 18:46:51.872182 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-ovsdbserver-nb\") pod \"dnsmasq-dns-76986c7db5-59cjq\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " pod="openstack/dnsmasq-dns-76986c7db5-59cjq" Mar 12 18:46:51.872380 master-0 kubenswrapper[29097]: I0312 18:46:51.872350 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-dns-svc\") pod \"dnsmasq-dns-76986c7db5-59cjq\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " pod="openstack/dnsmasq-dns-76986c7db5-59cjq" Mar 12 18:46:51.872653 master-0 kubenswrapper[29097]: I0312 18:46:51.872602 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-ovsdbserver-sb\") pod \"dnsmasq-dns-76986c7db5-59cjq\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " pod="openstack/dnsmasq-dns-76986c7db5-59cjq" Mar 12 18:46:51.872703 master-0 kubenswrapper[29097]: I0312 18:46:51.872659 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-dns-swift-storage-0\") pod \"dnsmasq-dns-76986c7db5-59cjq\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " pod="openstack/dnsmasq-dns-76986c7db5-59cjq" Mar 12 18:46:51.872703 master-0 kubenswrapper[29097]: I0312 18:46:51.872690 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vpnq\" (UniqueName: 
\"kubernetes.io/projected/aa2283ab-4112-4a09-83e9-0d40cf04e864-kube-api-access-7vpnq\") pod \"dnsmasq-dns-76986c7db5-59cjq\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " pod="openstack/dnsmasq-dns-76986c7db5-59cjq" Mar 12 18:46:51.873007 master-0 kubenswrapper[29097]: I0312 18:46:51.872960 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-config\") pod \"dnsmasq-dns-76986c7db5-59cjq\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " pod="openstack/dnsmasq-dns-76986c7db5-59cjq" Mar 12 18:46:51.873065 master-0 kubenswrapper[29097]: I0312 18:46:51.873042 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-ovsdbserver-nb\") pod \"dnsmasq-dns-76986c7db5-59cjq\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " pod="openstack/dnsmasq-dns-76986c7db5-59cjq" Mar 12 18:46:51.873313 master-0 kubenswrapper[29097]: I0312 18:46:51.873285 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-ovsdbserver-sb\") pod \"dnsmasq-dns-76986c7db5-59cjq\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " pod="openstack/dnsmasq-dns-76986c7db5-59cjq" Mar 12 18:46:51.873357 master-0 kubenswrapper[29097]: I0312 18:46:51.873321 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-dns-svc\") pod \"dnsmasq-dns-76986c7db5-59cjq\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " pod="openstack/dnsmasq-dns-76986c7db5-59cjq" Mar 12 18:46:51.873549 master-0 kubenswrapper[29097]: I0312 18:46:51.873505 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-dns-swift-storage-0\") pod \"dnsmasq-dns-76986c7db5-59cjq\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " pod="openstack/dnsmasq-dns-76986c7db5-59cjq" Mar 12 18:46:51.886431 master-0 kubenswrapper[29097]: I0312 18:46:51.886378 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vpnq\" (UniqueName: \"kubernetes.io/projected/aa2283ab-4112-4a09-83e9-0d40cf04e864-kube-api-access-7vpnq\") pod \"dnsmasq-dns-76986c7db5-59cjq\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " pod="openstack/dnsmasq-dns-76986c7db5-59cjq" Mar 12 18:46:51.999929 master-0 kubenswrapper[29097]: I0312 18:46:51.999804 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76986c7db5-59cjq" Mar 12 18:46:52.482723 master-0 kubenswrapper[29097]: W0312 18:46:52.482647 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa2283ab_4112_4a09_83e9_0d40cf04e864.slice/crio-c753879d4cbffbfaac00a52f592732ca68924bc4746062a760b1bf6a5616a535 WatchSource:0}: Error finding container c753879d4cbffbfaac00a52f592732ca68924bc4746062a760b1bf6a5616a535: Status 404 returned error can't find the container with id c753879d4cbffbfaac00a52f592732ca68924bc4746062a760b1bf6a5616a535 Mar 12 18:46:52.485633 master-0 kubenswrapper[29097]: I0312 18:46:52.485485 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76986c7db5-59cjq"] Mar 12 18:46:52.737074 master-0 kubenswrapper[29097]: I0312 18:46:52.736514 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fba29002-7a45-4c61-ad7c-ee1634e066f9" path="/var/lib/kubelet/pods/fba29002-7a45-4c61-ad7c-ee1634e066f9/volumes" Mar 12 18:46:52.740496 master-0 kubenswrapper[29097]: I0312 18:46:52.740449 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8clsj" Mar 12 18:46:52.917563 master-0 kubenswrapper[29097]: I0312 18:46:52.917405 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pp64\" (UniqueName: \"kubernetes.io/projected/f9be4e8e-96b5-424a-9137-b468887ed037-kube-api-access-5pp64\") pod \"f9be4e8e-96b5-424a-9137-b468887ed037\" (UID: \"f9be4e8e-96b5-424a-9137-b468887ed037\") " Mar 12 18:46:52.917776 master-0 kubenswrapper[29097]: I0312 18:46:52.917579 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9be4e8e-96b5-424a-9137-b468887ed037-operator-scripts\") pod \"f9be4e8e-96b5-424a-9137-b468887ed037\" (UID: \"f9be4e8e-96b5-424a-9137-b468887ed037\") " Mar 12 18:46:52.919133 master-0 kubenswrapper[29097]: I0312 18:46:52.918952 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9be4e8e-96b5-424a-9137-b468887ed037-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9be4e8e-96b5-424a-9137-b468887ed037" (UID: "f9be4e8e-96b5-424a-9137-b468887ed037"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:46:52.922264 master-0 kubenswrapper[29097]: I0312 18:46:52.922215 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9be4e8e-96b5-424a-9137-b468887ed037-kube-api-access-5pp64" (OuterVolumeSpecName: "kube-api-access-5pp64") pod "f9be4e8e-96b5-424a-9137-b468887ed037" (UID: "f9be4e8e-96b5-424a-9137-b468887ed037"). InnerVolumeSpecName "kube-api-access-5pp64". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:46:53.020883 master-0 kubenswrapper[29097]: I0312 18:46:53.020818 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pp64\" (UniqueName: \"kubernetes.io/projected/f9be4e8e-96b5-424a-9137-b468887ed037-kube-api-access-5pp64\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:53.020883 master-0 kubenswrapper[29097]: I0312 18:46:53.020877 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9be4e8e-96b5-424a-9137-b468887ed037-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:53.325375 master-0 kubenswrapper[29097]: I0312 18:46:53.325318 29097 generic.go:334] "Generic (PLEG): container finished" podID="aa2283ab-4112-4a09-83e9-0d40cf04e864" containerID="39e78e7380348f2aa48b190ad619b10858bc2b80440895be630cffc3336808d9" exitCode=0 Mar 12 18:46:53.325611 master-0 kubenswrapper[29097]: I0312 18:46:53.325403 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76986c7db5-59cjq" event={"ID":"aa2283ab-4112-4a09-83e9-0d40cf04e864","Type":"ContainerDied","Data":"39e78e7380348f2aa48b190ad619b10858bc2b80440895be630cffc3336808d9"} Mar 12 18:46:53.325611 master-0 kubenswrapper[29097]: I0312 18:46:53.325435 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76986c7db5-59cjq" event={"ID":"aa2283ab-4112-4a09-83e9-0d40cf04e864","Type":"ContainerStarted","Data":"c753879d4cbffbfaac00a52f592732ca68924bc4746062a760b1bf6a5616a535"} Mar 12 18:46:53.327718 master-0 kubenswrapper[29097]: I0312 18:46:53.327344 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8clsj" event={"ID":"f9be4e8e-96b5-424a-9137-b468887ed037","Type":"ContainerDied","Data":"728d51871e600701c50760694a7cfd876e637d73ce6cf63b9c9f2ef32ef7ac66"} Mar 12 18:46:53.327718 master-0 kubenswrapper[29097]: I0312 18:46:53.327373 29097 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="728d51871e600701c50760694a7cfd876e637d73ce6cf63b9c9f2ef32ef7ac66" Mar 12 18:46:53.327718 master-0 kubenswrapper[29097]: I0312 18:46:53.327415 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8clsj" Mar 12 18:46:54.340046 master-0 kubenswrapper[29097]: I0312 18:46:54.339973 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76986c7db5-59cjq" event={"ID":"aa2283ab-4112-4a09-83e9-0d40cf04e864","Type":"ContainerStarted","Data":"3f24109038d9b5a61c5e5d31d04b3c8ecf1da95fc9aad9ab64af18a7e69364da"} Mar 12 18:46:54.340892 master-0 kubenswrapper[29097]: I0312 18:46:54.340244 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76986c7db5-59cjq" Mar 12 18:46:54.370880 master-0 kubenswrapper[29097]: I0312 18:46:54.370788 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76986c7db5-59cjq" podStartSLOduration=3.370762015 podStartE2EDuration="3.370762015s" podCreationTimestamp="2026-03-12 18:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:46:54.362389716 +0000 UTC m=+1053.916369853" watchObservedRunningTime="2026-03-12 18:46:54.370762015 +0000 UTC m=+1053.924742122" Mar 12 18:46:54.596951 master-0 kubenswrapper[29097]: I0312 18:46:54.596812 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 12 18:46:54.946735 master-0 kubenswrapper[29097]: I0312 18:46:54.946665 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-vwzjx"] Mar 12 18:46:54.947624 master-0 kubenswrapper[29097]: E0312 18:46:54.947193 29097 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f9be4e8e-96b5-424a-9137-b468887ed037" containerName="mariadb-account-create-update" Mar 12 18:46:54.947624 master-0 kubenswrapper[29097]: I0312 18:46:54.947217 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9be4e8e-96b5-424a-9137-b468887ed037" containerName="mariadb-account-create-update" Mar 12 18:46:54.947624 master-0 kubenswrapper[29097]: I0312 18:46:54.947483 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9be4e8e-96b5-424a-9137-b468887ed037" containerName="mariadb-account-create-update" Mar 12 18:46:54.949990 master-0 kubenswrapper[29097]: I0312 18:46:54.948308 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-vwzjx"] Mar 12 18:46:54.949990 master-0 kubenswrapper[29097]: I0312 18:46:54.948408 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vwzjx" Mar 12 18:46:55.018492 master-0 kubenswrapper[29097]: I0312 18:46:55.017062 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-eac7-account-create-update-tvr2h"] Mar 12 18:46:55.018740 master-0 kubenswrapper[29097]: I0312 18:46:55.018605 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-eac7-account-create-update-tvr2h" Mar 12 18:46:55.030015 master-0 kubenswrapper[29097]: I0312 18:46:55.029957 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 12 18:46:55.031455 master-0 kubenswrapper[29097]: I0312 18:46:55.031380 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eac7-account-create-update-tvr2h"] Mar 12 18:46:55.080603 master-0 kubenswrapper[29097]: I0312 18:46:55.079081 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcwxx\" (UniqueName: \"kubernetes.io/projected/86a5b88e-2b06-40db-b90e-e027e9876bfa-kube-api-access-fcwxx\") pod \"cinder-db-create-vwzjx\" (UID: \"86a5b88e-2b06-40db-b90e-e027e9876bfa\") " pod="openstack/cinder-db-create-vwzjx" Mar 12 18:46:55.080603 master-0 kubenswrapper[29097]: I0312 18:46:55.079368 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86a5b88e-2b06-40db-b90e-e027e9876bfa-operator-scripts\") pod \"cinder-db-create-vwzjx\" (UID: \"86a5b88e-2b06-40db-b90e-e027e9876bfa\") " pod="openstack/cinder-db-create-vwzjx" Mar 12 18:46:55.188692 master-0 kubenswrapper[29097]: I0312 18:46:55.188555 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86a5b88e-2b06-40db-b90e-e027e9876bfa-operator-scripts\") pod \"cinder-db-create-vwzjx\" (UID: \"86a5b88e-2b06-40db-b90e-e027e9876bfa\") " pod="openstack/cinder-db-create-vwzjx" Mar 12 18:46:55.188890 master-0 kubenswrapper[29097]: I0312 18:46:55.188714 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw7hl\" (UniqueName: \"kubernetes.io/projected/857a7bc9-0e2c-48b8-bc16-d1c0d409049e-kube-api-access-dw7hl\") pod 
\"cinder-eac7-account-create-update-tvr2h\" (UID: \"857a7bc9-0e2c-48b8-bc16-d1c0d409049e\") " pod="openstack/cinder-eac7-account-create-update-tvr2h" Mar 12 18:46:55.188890 master-0 kubenswrapper[29097]: I0312 18:46:55.188749 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/857a7bc9-0e2c-48b8-bc16-d1c0d409049e-operator-scripts\") pod \"cinder-eac7-account-create-update-tvr2h\" (UID: \"857a7bc9-0e2c-48b8-bc16-d1c0d409049e\") " pod="openstack/cinder-eac7-account-create-update-tvr2h" Mar 12 18:46:55.188890 master-0 kubenswrapper[29097]: I0312 18:46:55.188786 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcwxx\" (UniqueName: \"kubernetes.io/projected/86a5b88e-2b06-40db-b90e-e027e9876bfa-kube-api-access-fcwxx\") pod \"cinder-db-create-vwzjx\" (UID: \"86a5b88e-2b06-40db-b90e-e027e9876bfa\") " pod="openstack/cinder-db-create-vwzjx" Mar 12 18:46:55.191169 master-0 kubenswrapper[29097]: I0312 18:46:55.190405 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86a5b88e-2b06-40db-b90e-e027e9876bfa-operator-scripts\") pod \"cinder-db-create-vwzjx\" (UID: \"86a5b88e-2b06-40db-b90e-e027e9876bfa\") " pod="openstack/cinder-db-create-vwzjx" Mar 12 18:46:55.218092 master-0 kubenswrapper[29097]: I0312 18:46:55.217687 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-brvsm"] Mar 12 18:46:55.220988 master-0 kubenswrapper[29097]: I0312 18:46:55.219081 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-brvsm" Mar 12 18:46:55.228547 master-0 kubenswrapper[29097]: I0312 18:46:55.227160 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcwxx\" (UniqueName: \"kubernetes.io/projected/86a5b88e-2b06-40db-b90e-e027e9876bfa-kube-api-access-fcwxx\") pod \"cinder-db-create-vwzjx\" (UID: \"86a5b88e-2b06-40db-b90e-e027e9876bfa\") " pod="openstack/cinder-db-create-vwzjx" Mar 12 18:46:55.237945 master-0 kubenswrapper[29097]: I0312 18:46:55.232180 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-brvsm"] Mar 12 18:46:55.289287 master-0 kubenswrapper[29097]: I0312 18:46:55.286723 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vwzjx" Mar 12 18:46:55.292842 master-0 kubenswrapper[29097]: I0312 18:46:55.292805 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw7hl\" (UniqueName: \"kubernetes.io/projected/857a7bc9-0e2c-48b8-bc16-d1c0d409049e-kube-api-access-dw7hl\") pod \"cinder-eac7-account-create-update-tvr2h\" (UID: \"857a7bc9-0e2c-48b8-bc16-d1c0d409049e\") " pod="openstack/cinder-eac7-account-create-update-tvr2h" Mar 12 18:46:55.292997 master-0 kubenswrapper[29097]: I0312 18:46:55.292979 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd-operator-scripts\") pod \"neutron-db-create-brvsm\" (UID: \"6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd\") " pod="openstack/neutron-db-create-brvsm" Mar 12 18:46:55.293082 master-0 kubenswrapper[29097]: I0312 18:46:55.293068 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkbzh\" (UniqueName: \"kubernetes.io/projected/6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd-kube-api-access-bkbzh\") pod 
\"neutron-db-create-brvsm\" (UID: \"6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd\") " pod="openstack/neutron-db-create-brvsm" Mar 12 18:46:55.293167 master-0 kubenswrapper[29097]: I0312 18:46:55.293154 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/857a7bc9-0e2c-48b8-bc16-d1c0d409049e-operator-scripts\") pod \"cinder-eac7-account-create-update-tvr2h\" (UID: \"857a7bc9-0e2c-48b8-bc16-d1c0d409049e\") " pod="openstack/cinder-eac7-account-create-update-tvr2h" Mar 12 18:46:55.294959 master-0 kubenswrapper[29097]: I0312 18:46:55.294923 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/857a7bc9-0e2c-48b8-bc16-d1c0d409049e-operator-scripts\") pod \"cinder-eac7-account-create-update-tvr2h\" (UID: \"857a7bc9-0e2c-48b8-bc16-d1c0d409049e\") " pod="openstack/cinder-eac7-account-create-update-tvr2h" Mar 12 18:46:55.325684 master-0 kubenswrapper[29097]: I0312 18:46:55.325639 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw7hl\" (UniqueName: \"kubernetes.io/projected/857a7bc9-0e2c-48b8-bc16-d1c0d409049e-kube-api-access-dw7hl\") pod \"cinder-eac7-account-create-update-tvr2h\" (UID: \"857a7bc9-0e2c-48b8-bc16-d1c0d409049e\") " pod="openstack/cinder-eac7-account-create-update-tvr2h" Mar 12 18:46:55.365097 master-0 kubenswrapper[29097]: I0312 18:46:55.361802 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-eac7-account-create-update-tvr2h" Mar 12 18:46:55.398550 master-0 kubenswrapper[29097]: I0312 18:46:55.397562 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd-operator-scripts\") pod \"neutron-db-create-brvsm\" (UID: \"6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd\") " pod="openstack/neutron-db-create-brvsm" Mar 12 18:46:55.398550 master-0 kubenswrapper[29097]: I0312 18:46:55.397658 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkbzh\" (UniqueName: \"kubernetes.io/projected/6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd-kube-api-access-bkbzh\") pod \"neutron-db-create-brvsm\" (UID: \"6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd\") " pod="openstack/neutron-db-create-brvsm" Mar 12 18:46:55.398807 master-0 kubenswrapper[29097]: I0312 18:46:55.398592 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd-operator-scripts\") pod \"neutron-db-create-brvsm\" (UID: \"6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd\") " pod="openstack/neutron-db-create-brvsm" Mar 12 18:46:55.414626 master-0 kubenswrapper[29097]: I0312 18:46:55.414432 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-54fb-account-create-update-bbqlw"] Mar 12 18:46:55.427578 master-0 kubenswrapper[29097]: I0312 18:46:55.421260 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54fb-account-create-update-bbqlw" Mar 12 18:46:55.427578 master-0 kubenswrapper[29097]: I0312 18:46:55.426938 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 12 18:46:55.438165 master-0 kubenswrapper[29097]: I0312 18:46:55.433176 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-bb2r7"] Mar 12 18:46:55.438165 master-0 kubenswrapper[29097]: I0312 18:46:55.434623 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bb2r7" Mar 12 18:46:55.438165 master-0 kubenswrapper[29097]: I0312 18:46:55.437030 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 12 18:46:55.438165 master-0 kubenswrapper[29097]: I0312 18:46:55.437264 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 12 18:46:55.438165 master-0 kubenswrapper[29097]: I0312 18:46:55.437594 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 12 18:46:55.443724 master-0 kubenswrapper[29097]: I0312 18:46:55.443213 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkbzh\" (UniqueName: \"kubernetes.io/projected/6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd-kube-api-access-bkbzh\") pod \"neutron-db-create-brvsm\" (UID: \"6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd\") " pod="openstack/neutron-db-create-brvsm" Mar 12 18:46:55.449124 master-0 kubenswrapper[29097]: I0312 18:46:55.447959 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54fb-account-create-update-bbqlw"] Mar 12 18:46:55.462225 master-0 kubenswrapper[29097]: I0312 18:46:55.459888 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-bb2r7"] Mar 12 18:46:55.499776 master-0 kubenswrapper[29097]: I0312 18:46:55.499338 29097 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk7f5\" (UniqueName: \"kubernetes.io/projected/2df983bf-3a2e-4d67-80e2-eb309ec03afd-kube-api-access-dk7f5\") pod \"neutron-54fb-account-create-update-bbqlw\" (UID: \"2df983bf-3a2e-4d67-80e2-eb309ec03afd\") " pod="openstack/neutron-54fb-account-create-update-bbqlw" Mar 12 18:46:55.499776 master-0 kubenswrapper[29097]: I0312 18:46:55.499400 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2df983bf-3a2e-4d67-80e2-eb309ec03afd-operator-scripts\") pod \"neutron-54fb-account-create-update-bbqlw\" (UID: \"2df983bf-3a2e-4d67-80e2-eb309ec03afd\") " pod="openstack/neutron-54fb-account-create-update-bbqlw" Mar 12 18:46:55.499776 master-0 kubenswrapper[29097]: I0312 18:46:55.499449 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26884ddb-c62f-429a-a6e6-0c7cb20ffc8d-config-data\") pod \"keystone-db-sync-bb2r7\" (UID: \"26884ddb-c62f-429a-a6e6-0c7cb20ffc8d\") " pod="openstack/keystone-db-sync-bb2r7" Mar 12 18:46:55.499776 master-0 kubenswrapper[29097]: I0312 18:46:55.499487 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9c7k\" (UniqueName: \"kubernetes.io/projected/26884ddb-c62f-429a-a6e6-0c7cb20ffc8d-kube-api-access-w9c7k\") pod \"keystone-db-sync-bb2r7\" (UID: \"26884ddb-c62f-429a-a6e6-0c7cb20ffc8d\") " pod="openstack/keystone-db-sync-bb2r7" Mar 12 18:46:55.499776 master-0 kubenswrapper[29097]: I0312 18:46:55.499514 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26884ddb-c62f-429a-a6e6-0c7cb20ffc8d-combined-ca-bundle\") pod \"keystone-db-sync-bb2r7\" (UID: 
\"26884ddb-c62f-429a-a6e6-0c7cb20ffc8d\") " pod="openstack/keystone-db-sync-bb2r7" Mar 12 18:46:55.603618 master-0 kubenswrapper[29097]: I0312 18:46:55.601091 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk7f5\" (UniqueName: \"kubernetes.io/projected/2df983bf-3a2e-4d67-80e2-eb309ec03afd-kube-api-access-dk7f5\") pod \"neutron-54fb-account-create-update-bbqlw\" (UID: \"2df983bf-3a2e-4d67-80e2-eb309ec03afd\") " pod="openstack/neutron-54fb-account-create-update-bbqlw" Mar 12 18:46:55.603618 master-0 kubenswrapper[29097]: I0312 18:46:55.601166 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2df983bf-3a2e-4d67-80e2-eb309ec03afd-operator-scripts\") pod \"neutron-54fb-account-create-update-bbqlw\" (UID: \"2df983bf-3a2e-4d67-80e2-eb309ec03afd\") " pod="openstack/neutron-54fb-account-create-update-bbqlw" Mar 12 18:46:55.603618 master-0 kubenswrapper[29097]: I0312 18:46:55.601226 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26884ddb-c62f-429a-a6e6-0c7cb20ffc8d-config-data\") pod \"keystone-db-sync-bb2r7\" (UID: \"26884ddb-c62f-429a-a6e6-0c7cb20ffc8d\") " pod="openstack/keystone-db-sync-bb2r7" Mar 12 18:46:55.603618 master-0 kubenswrapper[29097]: I0312 18:46:55.601261 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9c7k\" (UniqueName: \"kubernetes.io/projected/26884ddb-c62f-429a-a6e6-0c7cb20ffc8d-kube-api-access-w9c7k\") pod \"keystone-db-sync-bb2r7\" (UID: \"26884ddb-c62f-429a-a6e6-0c7cb20ffc8d\") " pod="openstack/keystone-db-sync-bb2r7" Mar 12 18:46:55.603618 master-0 kubenswrapper[29097]: I0312 18:46:55.601304 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/26884ddb-c62f-429a-a6e6-0c7cb20ffc8d-combined-ca-bundle\") pod \"keystone-db-sync-bb2r7\" (UID: \"26884ddb-c62f-429a-a6e6-0c7cb20ffc8d\") " pod="openstack/keystone-db-sync-bb2r7" Mar 12 18:46:55.603618 master-0 kubenswrapper[29097]: I0312 18:46:55.602467 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2df983bf-3a2e-4d67-80e2-eb309ec03afd-operator-scripts\") pod \"neutron-54fb-account-create-update-bbqlw\" (UID: \"2df983bf-3a2e-4d67-80e2-eb309ec03afd\") " pod="openstack/neutron-54fb-account-create-update-bbqlw" Mar 12 18:46:55.608892 master-0 kubenswrapper[29097]: I0312 18:46:55.605455 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-brvsm" Mar 12 18:46:55.608892 master-0 kubenswrapper[29097]: I0312 18:46:55.607160 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26884ddb-c62f-429a-a6e6-0c7cb20ffc8d-config-data\") pod \"keystone-db-sync-bb2r7\" (UID: \"26884ddb-c62f-429a-a6e6-0c7cb20ffc8d\") " pod="openstack/keystone-db-sync-bb2r7" Mar 12 18:46:55.608892 master-0 kubenswrapper[29097]: I0312 18:46:55.607529 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26884ddb-c62f-429a-a6e6-0c7cb20ffc8d-combined-ca-bundle\") pod \"keystone-db-sync-bb2r7\" (UID: \"26884ddb-c62f-429a-a6e6-0c7cb20ffc8d\") " pod="openstack/keystone-db-sync-bb2r7" Mar 12 18:46:55.626065 master-0 kubenswrapper[29097]: I0312 18:46:55.626021 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk7f5\" (UniqueName: \"kubernetes.io/projected/2df983bf-3a2e-4d67-80e2-eb309ec03afd-kube-api-access-dk7f5\") pod \"neutron-54fb-account-create-update-bbqlw\" (UID: \"2df983bf-3a2e-4d67-80e2-eb309ec03afd\") " 
pod="openstack/neutron-54fb-account-create-update-bbqlw" Mar 12 18:46:55.628414 master-0 kubenswrapper[29097]: I0312 18:46:55.628389 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9c7k\" (UniqueName: \"kubernetes.io/projected/26884ddb-c62f-429a-a6e6-0c7cb20ffc8d-kube-api-access-w9c7k\") pod \"keystone-db-sync-bb2r7\" (UID: \"26884ddb-c62f-429a-a6e6-0c7cb20ffc8d\") " pod="openstack/keystone-db-sync-bb2r7" Mar 12 18:46:55.770723 master-0 kubenswrapper[29097]: I0312 18:46:55.770692 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 12 18:46:55.857177 master-0 kubenswrapper[29097]: I0312 18:46:55.856684 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54fb-account-create-update-bbqlw" Mar 12 18:46:55.866085 master-0 kubenswrapper[29097]: I0312 18:46:55.864907 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bb2r7" Mar 12 18:46:55.902754 master-0 kubenswrapper[29097]: I0312 18:46:55.902694 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-vwzjx"] Mar 12 18:46:56.091631 master-0 kubenswrapper[29097]: I0312 18:46:56.079191 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eac7-account-create-update-tvr2h"] Mar 12 18:46:56.228847 master-0 kubenswrapper[29097]: I0312 18:46:56.219654 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-brvsm"] Mar 12 18:46:56.485619 master-0 kubenswrapper[29097]: I0312 18:46:56.477024 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eac7-account-create-update-tvr2h" event={"ID":"857a7bc9-0e2c-48b8-bc16-d1c0d409049e","Type":"ContainerStarted","Data":"51cc4f29f037cef1851147dbab0f6a89ae96f46da6de7c80df3a224bde076589"} Mar 12 18:46:56.485619 master-0 kubenswrapper[29097]: I0312 18:46:56.477079 
29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eac7-account-create-update-tvr2h" event={"ID":"857a7bc9-0e2c-48b8-bc16-d1c0d409049e","Type":"ContainerStarted","Data":"0e76b27b0ece14eebded565f62c36673e2b8842b9083cfa6b6b15aeb5a0bea58"} Mar 12 18:46:56.499554 master-0 kubenswrapper[29097]: I0312 18:46:56.498124 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vwzjx" event={"ID":"86a5b88e-2b06-40db-b90e-e027e9876bfa","Type":"ContainerStarted","Data":"072ed01ff3afc1e5fdf004e02bb0023cf6e0588fb6f4d30f7c3c3577111fdf38"} Mar 12 18:46:56.499554 master-0 kubenswrapper[29097]: I0312 18:46:56.498178 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vwzjx" event={"ID":"86a5b88e-2b06-40db-b90e-e027e9876bfa","Type":"ContainerStarted","Data":"fa71c55a60879d07a40c4e621a371dd330d8f3cc9744f6a7e465a8c0226dfeaa"} Mar 12 18:46:56.554580 master-0 kubenswrapper[29097]: I0312 18:46:56.551582 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54fb-account-create-update-bbqlw"] Mar 12 18:46:56.578626 master-0 kubenswrapper[29097]: I0312 18:46:56.577239 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-brvsm" event={"ID":"6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd","Type":"ContainerStarted","Data":"c9de3a2f7116c6a7544c835b8d5016587fc6e3bedbd379616e8c4e2b5405ea9d"} Mar 12 18:46:56.583711 master-0 kubenswrapper[29097]: I0312 18:46:56.582190 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-eac7-account-create-update-tvr2h" podStartSLOduration=2.5821707 podStartE2EDuration="2.5821707s" podCreationTimestamp="2026-03-12 18:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:46:56.54329867 +0000 UTC m=+1056.097278777" watchObservedRunningTime="2026-03-12 18:46:56.5821707 +0000 UTC 
m=+1056.136150797" Mar 12 18:46:56.599550 master-0 kubenswrapper[29097]: I0312 18:46:56.597624 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-vwzjx" podStartSLOduration=2.597607275 podStartE2EDuration="2.597607275s" podCreationTimestamp="2026-03-12 18:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:46:56.577140984 +0000 UTC m=+1056.131121081" watchObservedRunningTime="2026-03-12 18:46:56.597607275 +0000 UTC m=+1056.151587372" Mar 12 18:46:56.649810 master-0 kubenswrapper[29097]: I0312 18:46:56.643903 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-bb2r7"] Mar 12 18:46:57.586689 master-0 kubenswrapper[29097]: I0312 18:46:57.586627 29097 generic.go:334] "Generic (PLEG): container finished" podID="857a7bc9-0e2c-48b8-bc16-d1c0d409049e" containerID="51cc4f29f037cef1851147dbab0f6a89ae96f46da6de7c80df3a224bde076589" exitCode=0 Mar 12 18:46:57.587202 master-0 kubenswrapper[29097]: I0312 18:46:57.586703 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eac7-account-create-update-tvr2h" event={"ID":"857a7bc9-0e2c-48b8-bc16-d1c0d409049e","Type":"ContainerDied","Data":"51cc4f29f037cef1851147dbab0f6a89ae96f46da6de7c80df3a224bde076589"} Mar 12 18:46:57.588098 master-0 kubenswrapper[29097]: I0312 18:46:57.588069 29097 generic.go:334] "Generic (PLEG): container finished" podID="86a5b88e-2b06-40db-b90e-e027e9876bfa" containerID="072ed01ff3afc1e5fdf004e02bb0023cf6e0588fb6f4d30f7c3c3577111fdf38" exitCode=0 Mar 12 18:46:57.588160 master-0 kubenswrapper[29097]: I0312 18:46:57.588125 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vwzjx" event={"ID":"86a5b88e-2b06-40db-b90e-e027e9876bfa","Type":"ContainerDied","Data":"072ed01ff3afc1e5fdf004e02bb0023cf6e0588fb6f4d30f7c3c3577111fdf38"} Mar 12 18:46:57.589221 master-0 
kubenswrapper[29097]: I0312 18:46:57.589185 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bb2r7" event={"ID":"26884ddb-c62f-429a-a6e6-0c7cb20ffc8d","Type":"ContainerStarted","Data":"f34690a02729098e9bc56887bea5931bae3a03665ca1d072149ab446a55016ef"} Mar 12 18:46:57.590622 master-0 kubenswrapper[29097]: I0312 18:46:57.590589 29097 generic.go:334] "Generic (PLEG): container finished" podID="2df983bf-3a2e-4d67-80e2-eb309ec03afd" containerID="a3caf389bf9f9cc214a794acef6d430ae3dbe8b04ae95956ee61bc9e46a9ad97" exitCode=0 Mar 12 18:46:57.590702 master-0 kubenswrapper[29097]: I0312 18:46:57.590675 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54fb-account-create-update-bbqlw" event={"ID":"2df983bf-3a2e-4d67-80e2-eb309ec03afd","Type":"ContainerDied","Data":"a3caf389bf9f9cc214a794acef6d430ae3dbe8b04ae95956ee61bc9e46a9ad97"} Mar 12 18:46:57.590739 master-0 kubenswrapper[29097]: I0312 18:46:57.590710 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54fb-account-create-update-bbqlw" event={"ID":"2df983bf-3a2e-4d67-80e2-eb309ec03afd","Type":"ContainerStarted","Data":"84466f95930dd82b1336160a10485134cd04da329fb7a5d46bb39d9211192d05"} Mar 12 18:46:57.592660 master-0 kubenswrapper[29097]: I0312 18:46:57.591983 29097 generic.go:334] "Generic (PLEG): container finished" podID="6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd" containerID="eaa2f0492850c7d00c20f5ac147115a77caf97060f0a0aca35fcf99956a149e8" exitCode=0 Mar 12 18:46:57.592660 master-0 kubenswrapper[29097]: I0312 18:46:57.592034 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-brvsm" event={"ID":"6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd","Type":"ContainerDied","Data":"eaa2f0492850c7d00c20f5ac147115a77caf97060f0a0aca35fcf99956a149e8"} Mar 12 18:46:59.267600 master-0 kubenswrapper[29097]: I0312 18:46:59.267446 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-brvsm" Mar 12 18:46:59.310770 master-0 kubenswrapper[29097]: I0312 18:46:59.307194 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd-operator-scripts\") pod \"6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd\" (UID: \"6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd\") " Mar 12 18:46:59.310770 master-0 kubenswrapper[29097]: I0312 18:46:59.307513 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkbzh\" (UniqueName: \"kubernetes.io/projected/6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd-kube-api-access-bkbzh\") pod \"6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd\" (UID: \"6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd\") " Mar 12 18:46:59.310770 master-0 kubenswrapper[29097]: I0312 18:46:59.307925 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd" (UID: "6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:46:59.347168 master-0 kubenswrapper[29097]: I0312 18:46:59.342012 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd-kube-api-access-bkbzh" (OuterVolumeSpecName: "kube-api-access-bkbzh") pod "6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd" (UID: "6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd"). InnerVolumeSpecName "kube-api-access-bkbzh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:46:59.412150 master-0 kubenswrapper[29097]: I0312 18:46:59.412094 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkbzh\" (UniqueName: \"kubernetes.io/projected/6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd-kube-api-access-bkbzh\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:59.412150 master-0 kubenswrapper[29097]: I0312 18:46:59.412137 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:59.485776 master-0 kubenswrapper[29097]: I0312 18:46:59.485727 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eac7-account-create-update-tvr2h" Mar 12 18:46:59.495417 master-0 kubenswrapper[29097]: I0312 18:46:59.495347 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54fb-account-create-update-bbqlw" Mar 12 18:46:59.514310 master-0 kubenswrapper[29097]: I0312 18:46:59.514033 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk7f5\" (UniqueName: \"kubernetes.io/projected/2df983bf-3a2e-4d67-80e2-eb309ec03afd-kube-api-access-dk7f5\") pod \"2df983bf-3a2e-4d67-80e2-eb309ec03afd\" (UID: \"2df983bf-3a2e-4d67-80e2-eb309ec03afd\") " Mar 12 18:46:59.514310 master-0 kubenswrapper[29097]: I0312 18:46:59.514101 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2df983bf-3a2e-4d67-80e2-eb309ec03afd-operator-scripts\") pod \"2df983bf-3a2e-4d67-80e2-eb309ec03afd\" (UID: \"2df983bf-3a2e-4d67-80e2-eb309ec03afd\") " Mar 12 18:46:59.514310 master-0 kubenswrapper[29097]: I0312 18:46:59.514127 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-dw7hl\" (UniqueName: \"kubernetes.io/projected/857a7bc9-0e2c-48b8-bc16-d1c0d409049e-kube-api-access-dw7hl\") pod \"857a7bc9-0e2c-48b8-bc16-d1c0d409049e\" (UID: \"857a7bc9-0e2c-48b8-bc16-d1c0d409049e\") " Mar 12 18:46:59.514310 master-0 kubenswrapper[29097]: I0312 18:46:59.514204 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/857a7bc9-0e2c-48b8-bc16-d1c0d409049e-operator-scripts\") pod \"857a7bc9-0e2c-48b8-bc16-d1c0d409049e\" (UID: \"857a7bc9-0e2c-48b8-bc16-d1c0d409049e\") " Mar 12 18:46:59.514310 master-0 kubenswrapper[29097]: I0312 18:46:59.514244 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vwzjx" Mar 12 18:46:59.515432 master-0 kubenswrapper[29097]: I0312 18:46:59.515383 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/857a7bc9-0e2c-48b8-bc16-d1c0d409049e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "857a7bc9-0e2c-48b8-bc16-d1c0d409049e" (UID: "857a7bc9-0e2c-48b8-bc16-d1c0d409049e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:46:59.516498 master-0 kubenswrapper[29097]: I0312 18:46:59.516458 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2df983bf-3a2e-4d67-80e2-eb309ec03afd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2df983bf-3a2e-4d67-80e2-eb309ec03afd" (UID: "2df983bf-3a2e-4d67-80e2-eb309ec03afd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:46:59.519864 master-0 kubenswrapper[29097]: I0312 18:46:59.519785 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2df983bf-3a2e-4d67-80e2-eb309ec03afd-kube-api-access-dk7f5" (OuterVolumeSpecName: "kube-api-access-dk7f5") pod "2df983bf-3a2e-4d67-80e2-eb309ec03afd" (UID: "2df983bf-3a2e-4d67-80e2-eb309ec03afd"). InnerVolumeSpecName "kube-api-access-dk7f5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:46:59.522970 master-0 kubenswrapper[29097]: I0312 18:46:59.522913 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/857a7bc9-0e2c-48b8-bc16-d1c0d409049e-kube-api-access-dw7hl" (OuterVolumeSpecName: "kube-api-access-dw7hl") pod "857a7bc9-0e2c-48b8-bc16-d1c0d409049e" (UID: "857a7bc9-0e2c-48b8-bc16-d1c0d409049e"). InnerVolumeSpecName "kube-api-access-dw7hl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:46:59.616536 master-0 kubenswrapper[29097]: I0312 18:46:59.616406 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86a5b88e-2b06-40db-b90e-e027e9876bfa-operator-scripts\") pod \"86a5b88e-2b06-40db-b90e-e027e9876bfa\" (UID: \"86a5b88e-2b06-40db-b90e-e027e9876bfa\") " Mar 12 18:46:59.616705 master-0 kubenswrapper[29097]: I0312 18:46:59.616645 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcwxx\" (UniqueName: \"kubernetes.io/projected/86a5b88e-2b06-40db-b90e-e027e9876bfa-kube-api-access-fcwxx\") pod \"86a5b88e-2b06-40db-b90e-e027e9876bfa\" (UID: \"86a5b88e-2b06-40db-b90e-e027e9876bfa\") " Mar 12 18:46:59.617338 master-0 kubenswrapper[29097]: I0312 18:46:59.617306 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86a5b88e-2b06-40db-b90e-e027e9876bfa-operator-scripts" 
(OuterVolumeSpecName: "operator-scripts") pod "86a5b88e-2b06-40db-b90e-e027e9876bfa" (UID: "86a5b88e-2b06-40db-b90e-e027e9876bfa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:46:59.617822 master-0 kubenswrapper[29097]: I0312 18:46:59.617806 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk7f5\" (UniqueName: \"kubernetes.io/projected/2df983bf-3a2e-4d67-80e2-eb309ec03afd-kube-api-access-dk7f5\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:59.617908 master-0 kubenswrapper[29097]: I0312 18:46:59.617897 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2df983bf-3a2e-4d67-80e2-eb309ec03afd-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:59.617969 master-0 kubenswrapper[29097]: I0312 18:46:59.617959 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw7hl\" (UniqueName: \"kubernetes.io/projected/857a7bc9-0e2c-48b8-bc16-d1c0d409049e-kube-api-access-dw7hl\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:59.618070 master-0 kubenswrapper[29097]: I0312 18:46:59.618027 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/857a7bc9-0e2c-48b8-bc16-d1c0d409049e-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:59.618156 master-0 kubenswrapper[29097]: I0312 18:46:59.618145 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86a5b88e-2b06-40db-b90e-e027e9876bfa-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:46:59.619771 master-0 kubenswrapper[29097]: I0312 18:46:59.619723 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86a5b88e-2b06-40db-b90e-e027e9876bfa-kube-api-access-fcwxx" (OuterVolumeSpecName: "kube-api-access-fcwxx") pod 
"86a5b88e-2b06-40db-b90e-e027e9876bfa" (UID: "86a5b88e-2b06-40db-b90e-e027e9876bfa"). InnerVolumeSpecName "kube-api-access-fcwxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:46:59.628092 master-0 kubenswrapper[29097]: I0312 18:46:59.628036 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eac7-account-create-update-tvr2h" event={"ID":"857a7bc9-0e2c-48b8-bc16-d1c0d409049e","Type":"ContainerDied","Data":"0e76b27b0ece14eebded565f62c36673e2b8842b9083cfa6b6b15aeb5a0bea58"} Mar 12 18:46:59.628092 master-0 kubenswrapper[29097]: I0312 18:46:59.628091 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e76b27b0ece14eebded565f62c36673e2b8842b9083cfa6b6b15aeb5a0bea58" Mar 12 18:46:59.632561 master-0 kubenswrapper[29097]: I0312 18:46:59.632531 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eac7-account-create-update-tvr2h" Mar 12 18:46:59.632817 master-0 kubenswrapper[29097]: I0312 18:46:59.632743 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vwzjx" Mar 12 18:46:59.632817 master-0 kubenswrapper[29097]: I0312 18:46:59.632765 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vwzjx" event={"ID":"86a5b88e-2b06-40db-b90e-e027e9876bfa","Type":"ContainerDied","Data":"fa71c55a60879d07a40c4e621a371dd330d8f3cc9744f6a7e465a8c0226dfeaa"} Mar 12 18:46:59.633386 master-0 kubenswrapper[29097]: I0312 18:46:59.633314 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa71c55a60879d07a40c4e621a371dd330d8f3cc9744f6a7e465a8c0226dfeaa" Mar 12 18:46:59.636435 master-0 kubenswrapper[29097]: I0312 18:46:59.636040 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54fb-account-create-update-bbqlw" Mar 12 18:46:59.636764 master-0 kubenswrapper[29097]: I0312 18:46:59.636033 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54fb-account-create-update-bbqlw" event={"ID":"2df983bf-3a2e-4d67-80e2-eb309ec03afd","Type":"ContainerDied","Data":"84466f95930dd82b1336160a10485134cd04da329fb7a5d46bb39d9211192d05"} Mar 12 18:46:59.636823 master-0 kubenswrapper[29097]: I0312 18:46:59.636784 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84466f95930dd82b1336160a10485134cd04da329fb7a5d46bb39d9211192d05" Mar 12 18:46:59.639028 master-0 kubenswrapper[29097]: I0312 18:46:59.638607 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-brvsm" event={"ID":"6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd","Type":"ContainerDied","Data":"c9de3a2f7116c6a7544c835b8d5016587fc6e3bedbd379616e8c4e2b5405ea9d"} Mar 12 18:46:59.639028 master-0 kubenswrapper[29097]: I0312 18:46:59.638642 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9de3a2f7116c6a7544c835b8d5016587fc6e3bedbd379616e8c4e2b5405ea9d" Mar 12 18:46:59.639028 master-0 kubenswrapper[29097]: I0312 18:46:59.638701 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-brvsm" Mar 12 18:46:59.721057 master-0 kubenswrapper[29097]: I0312 18:46:59.720730 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcwxx\" (UniqueName: \"kubernetes.io/projected/86a5b88e-2b06-40db-b90e-e027e9876bfa-kube-api-access-fcwxx\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:02.001991 master-0 kubenswrapper[29097]: I0312 18:47:02.001915 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76986c7db5-59cjq" Mar 12 18:47:02.182564 master-0 kubenswrapper[29097]: I0312 18:47:02.160737 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-pqwwh"] Mar 12 18:47:02.182564 master-0 kubenswrapper[29097]: I0312 18:47:02.161000 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" podUID="fb76cb7f-6d8a-4ecd-8580-2f06202826f4" containerName="dnsmasq-dns" containerID="cri-o://e75c0e2252e75cebdd0ab117eacec869cb590b6eb419cb7a183ab58a57b63f8e" gracePeriod=10 Mar 12 18:47:02.700434 master-0 kubenswrapper[29097]: I0312 18:47:02.700385 29097 generic.go:334] "Generic (PLEG): container finished" podID="2557ae75-2d67-4831-ace5-a6e46d581c7f" containerID="7ceb068952db0208ad6652dc01035fcd86997ed6692998cf299d280ffbc657aa" exitCode=0 Mar 12 18:47:02.700693 master-0 kubenswrapper[29097]: I0312 18:47:02.700460 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cmn4b" event={"ID":"2557ae75-2d67-4831-ace5-a6e46d581c7f","Type":"ContainerDied","Data":"7ceb068952db0208ad6652dc01035fcd86997ed6692998cf299d280ffbc657aa"} Mar 12 18:47:02.704259 master-0 kubenswrapper[29097]: I0312 18:47:02.704223 29097 generic.go:334] "Generic (PLEG): container finished" podID="fb76cb7f-6d8a-4ecd-8580-2f06202826f4" containerID="e75c0e2252e75cebdd0ab117eacec869cb590b6eb419cb7a183ab58a57b63f8e" exitCode=0 Mar 12 18:47:02.704398 
master-0 kubenswrapper[29097]: I0312 18:47:02.704291 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" event={"ID":"fb76cb7f-6d8a-4ecd-8580-2f06202826f4","Type":"ContainerDied","Data":"e75c0e2252e75cebdd0ab117eacec869cb590b6eb419cb7a183ab58a57b63f8e"} Mar 12 18:47:02.710865 master-0 kubenswrapper[29097]: I0312 18:47:02.705672 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bb2r7" event={"ID":"26884ddb-c62f-429a-a6e6-0c7cb20ffc8d","Type":"ContainerStarted","Data":"ada7874b69c550091c28070524956984fb48fc05c6db0c0d8e9729bf5dc5ed78"} Mar 12 18:47:02.790149 master-0 kubenswrapper[29097]: I0312 18:47:02.789941 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-bb2r7" podStartSLOduration=2.0713060739999998 podStartE2EDuration="7.789921982s" podCreationTimestamp="2026-03-12 18:46:55 +0000 UTC" firstStartedPulling="2026-03-12 18:46:56.672638597 +0000 UTC m=+1056.226618694" lastFinishedPulling="2026-03-12 18:47:02.391254505 +0000 UTC m=+1061.945234602" observedRunningTime="2026-03-12 18:47:02.779578914 +0000 UTC m=+1062.333559011" watchObservedRunningTime="2026-03-12 18:47:02.789921982 +0000 UTC m=+1062.343902079" Mar 12 18:47:02.856907 master-0 kubenswrapper[29097]: I0312 18:47:02.855864 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" Mar 12 18:47:02.932427 master-0 kubenswrapper[29097]: I0312 18:47:02.930146 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-ovsdbserver-nb\") pod \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\" (UID: \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\") " Mar 12 18:47:02.932427 master-0 kubenswrapper[29097]: I0312 18:47:02.930269 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-ovsdbserver-sb\") pod \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\" (UID: \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\") " Mar 12 18:47:02.932427 master-0 kubenswrapper[29097]: I0312 18:47:02.930298 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njhws\" (UniqueName: \"kubernetes.io/projected/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-kube-api-access-njhws\") pod \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\" (UID: \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\") " Mar 12 18:47:02.932427 master-0 kubenswrapper[29097]: I0312 18:47:02.930496 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-config\") pod \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\" (UID: \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\") " Mar 12 18:47:02.932427 master-0 kubenswrapper[29097]: I0312 18:47:02.930641 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-dns-svc\") pod \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\" (UID: \"fb76cb7f-6d8a-4ecd-8580-2f06202826f4\") " Mar 12 18:47:02.935227 master-0 kubenswrapper[29097]: I0312 18:47:02.935167 29097 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-kube-api-access-njhws" (OuterVolumeSpecName: "kube-api-access-njhws") pod "fb76cb7f-6d8a-4ecd-8580-2f06202826f4" (UID: "fb76cb7f-6d8a-4ecd-8580-2f06202826f4"). InnerVolumeSpecName "kube-api-access-njhws". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:47:02.973091 master-0 kubenswrapper[29097]: I0312 18:47:02.972983 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fb76cb7f-6d8a-4ecd-8580-2f06202826f4" (UID: "fb76cb7f-6d8a-4ecd-8580-2f06202826f4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:47:02.974995 master-0 kubenswrapper[29097]: I0312 18:47:02.974949 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fb76cb7f-6d8a-4ecd-8580-2f06202826f4" (UID: "fb76cb7f-6d8a-4ecd-8580-2f06202826f4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:47:02.977248 master-0 kubenswrapper[29097]: I0312 18:47:02.977126 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-config" (OuterVolumeSpecName: "config") pod "fb76cb7f-6d8a-4ecd-8580-2f06202826f4" (UID: "fb76cb7f-6d8a-4ecd-8580-2f06202826f4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:47:03.009005 master-0 kubenswrapper[29097]: I0312 18:47:03.008946 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fb76cb7f-6d8a-4ecd-8580-2f06202826f4" (UID: "fb76cb7f-6d8a-4ecd-8580-2f06202826f4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:47:03.035862 master-0 kubenswrapper[29097]: I0312 18:47:03.034980 29097 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:03.035862 master-0 kubenswrapper[29097]: I0312 18:47:03.035017 29097 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:03.035862 master-0 kubenswrapper[29097]: I0312 18:47:03.035029 29097 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:03.035862 master-0 kubenswrapper[29097]: I0312 18:47:03.035042 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njhws\" (UniqueName: \"kubernetes.io/projected/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-kube-api-access-njhws\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:03.035862 master-0 kubenswrapper[29097]: I0312 18:47:03.035052 29097 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb76cb7f-6d8a-4ecd-8580-2f06202826f4-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:03.719147 master-0 kubenswrapper[29097]: I0312 
18:47:03.719110 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" Mar 12 18:47:03.724811 master-0 kubenswrapper[29097]: I0312 18:47:03.724741 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8b865dc-pqwwh" event={"ID":"fb76cb7f-6d8a-4ecd-8580-2f06202826f4","Type":"ContainerDied","Data":"887fd363b5741a8687b7271899dfa2cab988232351994529981d8eaf49ceb969"} Mar 12 18:47:03.725049 master-0 kubenswrapper[29097]: I0312 18:47:03.724949 29097 scope.go:117] "RemoveContainer" containerID="e75c0e2252e75cebdd0ab117eacec869cb590b6eb419cb7a183ab58a57b63f8e" Mar 12 18:47:03.763287 master-0 kubenswrapper[29097]: I0312 18:47:03.763253 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-pqwwh"] Mar 12 18:47:03.763973 master-0 kubenswrapper[29097]: I0312 18:47:03.763955 29097 scope.go:117] "RemoveContainer" containerID="cf883aa6f72afeda64766364560c9bc259e3ee260ec73f15ae534957adf1ea8a" Mar 12 18:47:03.779589 master-0 kubenswrapper[29097]: I0312 18:47:03.777679 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-pqwwh"] Mar 12 18:47:04.346544 master-0 kubenswrapper[29097]: I0312 18:47:04.346487 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-cmn4b" Mar 12 18:47:04.511191 master-0 kubenswrapper[29097]: I0312 18:47:04.511023 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2557ae75-2d67-4831-ace5-a6e46d581c7f-config-data\") pod \"2557ae75-2d67-4831-ace5-a6e46d581c7f\" (UID: \"2557ae75-2d67-4831-ace5-a6e46d581c7f\") " Mar 12 18:47:04.511191 master-0 kubenswrapper[29097]: I0312 18:47:04.511160 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2557ae75-2d67-4831-ace5-a6e46d581c7f-combined-ca-bundle\") pod \"2557ae75-2d67-4831-ace5-a6e46d581c7f\" (UID: \"2557ae75-2d67-4831-ace5-a6e46d581c7f\") " Mar 12 18:47:04.511191 master-0 kubenswrapper[29097]: I0312 18:47:04.511191 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsw2b\" (UniqueName: \"kubernetes.io/projected/2557ae75-2d67-4831-ace5-a6e46d581c7f-kube-api-access-xsw2b\") pod \"2557ae75-2d67-4831-ace5-a6e46d581c7f\" (UID: \"2557ae75-2d67-4831-ace5-a6e46d581c7f\") " Mar 12 18:47:04.511572 master-0 kubenswrapper[29097]: I0312 18:47:04.511213 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2557ae75-2d67-4831-ace5-a6e46d581c7f-db-sync-config-data\") pod \"2557ae75-2d67-4831-ace5-a6e46d581c7f\" (UID: \"2557ae75-2d67-4831-ace5-a6e46d581c7f\") " Mar 12 18:47:04.515930 master-0 kubenswrapper[29097]: I0312 18:47:04.515865 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2557ae75-2d67-4831-ace5-a6e46d581c7f-kube-api-access-xsw2b" (OuterVolumeSpecName: "kube-api-access-xsw2b") pod "2557ae75-2d67-4831-ace5-a6e46d581c7f" (UID: "2557ae75-2d67-4831-ace5-a6e46d581c7f"). InnerVolumeSpecName "kube-api-access-xsw2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:47:04.517609 master-0 kubenswrapper[29097]: I0312 18:47:04.517565 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2557ae75-2d67-4831-ace5-a6e46d581c7f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2557ae75-2d67-4831-ace5-a6e46d581c7f" (UID: "2557ae75-2d67-4831-ace5-a6e46d581c7f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:04.535762 master-0 kubenswrapper[29097]: I0312 18:47:04.535395 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2557ae75-2d67-4831-ace5-a6e46d581c7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2557ae75-2d67-4831-ace5-a6e46d581c7f" (UID: "2557ae75-2d67-4831-ace5-a6e46d581c7f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:04.576762 master-0 kubenswrapper[29097]: I0312 18:47:04.576698 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2557ae75-2d67-4831-ace5-a6e46d581c7f-config-data" (OuterVolumeSpecName: "config-data") pod "2557ae75-2d67-4831-ace5-a6e46d581c7f" (UID: "2557ae75-2d67-4831-ace5-a6e46d581c7f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:04.614060 master-0 kubenswrapper[29097]: I0312 18:47:04.613972 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2557ae75-2d67-4831-ace5-a6e46d581c7f-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:04.614060 master-0 kubenswrapper[29097]: I0312 18:47:04.614029 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2557ae75-2d67-4831-ace5-a6e46d581c7f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:04.614060 master-0 kubenswrapper[29097]: I0312 18:47:04.614044 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsw2b\" (UniqueName: \"kubernetes.io/projected/2557ae75-2d67-4831-ace5-a6e46d581c7f-kube-api-access-xsw2b\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:04.614060 master-0 kubenswrapper[29097]: I0312 18:47:04.614057 29097 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2557ae75-2d67-4831-ace5-a6e46d581c7f-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:04.739150 master-0 kubenswrapper[29097]: I0312 18:47:04.739053 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb76cb7f-6d8a-4ecd-8580-2f06202826f4" path="/var/lib/kubelet/pods/fb76cb7f-6d8a-4ecd-8580-2f06202826f4/volumes" Mar 12 18:47:04.743069 master-0 kubenswrapper[29097]: I0312 18:47:04.743007 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-cmn4b" Mar 12 18:47:04.743069 master-0 kubenswrapper[29097]: I0312 18:47:04.743037 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cmn4b" event={"ID":"2557ae75-2d67-4831-ace5-a6e46d581c7f","Type":"ContainerDied","Data":"f486d1686b6be42df22d82652b69219c593502208692a4ef111c0454c3aa1daf"} Mar 12 18:47:04.743232 master-0 kubenswrapper[29097]: I0312 18:47:04.743092 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f486d1686b6be42df22d82652b69219c593502208692a4ef111c0454c3aa1daf" Mar 12 18:47:05.234543 master-0 kubenswrapper[29097]: I0312 18:47:05.234404 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5995bddff5-9l72c"] Mar 12 18:47:05.234943 master-0 kubenswrapper[29097]: E0312 18:47:05.234917 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86a5b88e-2b06-40db-b90e-e027e9876bfa" containerName="mariadb-database-create" Mar 12 18:47:05.234943 master-0 kubenswrapper[29097]: I0312 18:47:05.234937 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="86a5b88e-2b06-40db-b90e-e027e9876bfa" containerName="mariadb-database-create" Mar 12 18:47:05.235056 master-0 kubenswrapper[29097]: E0312 18:47:05.234955 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb76cb7f-6d8a-4ecd-8580-2f06202826f4" containerName="dnsmasq-dns" Mar 12 18:47:05.235056 master-0 kubenswrapper[29097]: I0312 18:47:05.234963 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb76cb7f-6d8a-4ecd-8580-2f06202826f4" containerName="dnsmasq-dns" Mar 12 18:47:05.235056 master-0 kubenswrapper[29097]: E0312 18:47:05.234975 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd" containerName="mariadb-database-create" Mar 12 18:47:05.235056 master-0 kubenswrapper[29097]: I0312 18:47:05.234983 29097 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd" containerName="mariadb-database-create" Mar 12 18:47:05.235056 master-0 kubenswrapper[29097]: E0312 18:47:05.234993 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="857a7bc9-0e2c-48b8-bc16-d1c0d409049e" containerName="mariadb-account-create-update" Mar 12 18:47:05.235056 master-0 kubenswrapper[29097]: I0312 18:47:05.234999 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="857a7bc9-0e2c-48b8-bc16-d1c0d409049e" containerName="mariadb-account-create-update" Mar 12 18:47:05.235056 master-0 kubenswrapper[29097]: E0312 18:47:05.235016 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb76cb7f-6d8a-4ecd-8580-2f06202826f4" containerName="init" Mar 12 18:47:05.235056 master-0 kubenswrapper[29097]: I0312 18:47:05.235021 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb76cb7f-6d8a-4ecd-8580-2f06202826f4" containerName="init" Mar 12 18:47:05.235056 master-0 kubenswrapper[29097]: E0312 18:47:05.235036 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2557ae75-2d67-4831-ace5-a6e46d581c7f" containerName="glance-db-sync" Mar 12 18:47:05.235056 master-0 kubenswrapper[29097]: I0312 18:47:05.235042 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="2557ae75-2d67-4831-ace5-a6e46d581c7f" containerName="glance-db-sync" Mar 12 18:47:05.235056 master-0 kubenswrapper[29097]: E0312 18:47:05.235049 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2df983bf-3a2e-4d67-80e2-eb309ec03afd" containerName="mariadb-account-create-update" Mar 12 18:47:05.235056 master-0 kubenswrapper[29097]: I0312 18:47:05.235054 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="2df983bf-3a2e-4d67-80e2-eb309ec03afd" containerName="mariadb-account-create-update" Mar 12 18:47:05.235533 master-0 kubenswrapper[29097]: I0312 18:47:05.235243 29097 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fb76cb7f-6d8a-4ecd-8580-2f06202826f4" containerName="dnsmasq-dns" Mar 12 18:47:05.235533 master-0 kubenswrapper[29097]: I0312 18:47:05.235260 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="2df983bf-3a2e-4d67-80e2-eb309ec03afd" containerName="mariadb-account-create-update" Mar 12 18:47:05.235533 master-0 kubenswrapper[29097]: I0312 18:47:05.235278 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="86a5b88e-2b06-40db-b90e-e027e9876bfa" containerName="mariadb-database-create" Mar 12 18:47:05.235533 master-0 kubenswrapper[29097]: I0312 18:47:05.235298 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd" containerName="mariadb-database-create" Mar 12 18:47:05.235533 master-0 kubenswrapper[29097]: I0312 18:47:05.235313 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="857a7bc9-0e2c-48b8-bc16-d1c0d409049e" containerName="mariadb-account-create-update" Mar 12 18:47:05.235533 master-0 kubenswrapper[29097]: I0312 18:47:05.235326 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="2557ae75-2d67-4831-ace5-a6e46d581c7f" containerName="glance-db-sync" Mar 12 18:47:05.237272 master-0 kubenswrapper[29097]: I0312 18:47:05.236327 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5995bddff5-9l72c" Mar 12 18:47:05.244654 master-0 kubenswrapper[29097]: I0312 18:47:05.243706 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5995bddff5-9l72c"] Mar 12 18:47:05.333753 master-0 kubenswrapper[29097]: I0312 18:47:05.333693 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gthf\" (UniqueName: \"kubernetes.io/projected/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-kube-api-access-4gthf\") pod \"dnsmasq-dns-5995bddff5-9l72c\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") " pod="openstack/dnsmasq-dns-5995bddff5-9l72c" Mar 12 18:47:05.334045 master-0 kubenswrapper[29097]: I0312 18:47:05.333779 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-ovsdbserver-sb\") pod \"dnsmasq-dns-5995bddff5-9l72c\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") " pod="openstack/dnsmasq-dns-5995bddff5-9l72c" Mar 12 18:47:05.334045 master-0 kubenswrapper[29097]: I0312 18:47:05.333804 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-ovsdbserver-nb\") pod \"dnsmasq-dns-5995bddff5-9l72c\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") " pod="openstack/dnsmasq-dns-5995bddff5-9l72c" Mar 12 18:47:05.334045 master-0 kubenswrapper[29097]: I0312 18:47:05.333884 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-dns-svc\") pod \"dnsmasq-dns-5995bddff5-9l72c\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") " pod="openstack/dnsmasq-dns-5995bddff5-9l72c" Mar 12 18:47:05.334177 master-0 kubenswrapper[29097]: I0312 
18:47:05.334044 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-config\") pod \"dnsmasq-dns-5995bddff5-9l72c\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") " pod="openstack/dnsmasq-dns-5995bddff5-9l72c" Mar 12 18:47:05.334177 master-0 kubenswrapper[29097]: I0312 18:47:05.334119 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-dns-swift-storage-0\") pod \"dnsmasq-dns-5995bddff5-9l72c\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") " pod="openstack/dnsmasq-dns-5995bddff5-9l72c" Mar 12 18:47:05.436594 master-0 kubenswrapper[29097]: I0312 18:47:05.436534 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-ovsdbserver-sb\") pod \"dnsmasq-dns-5995bddff5-9l72c\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") " pod="openstack/dnsmasq-dns-5995bddff5-9l72c" Mar 12 18:47:05.437109 master-0 kubenswrapper[29097]: I0312 18:47:05.436609 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-ovsdbserver-nb\") pod \"dnsmasq-dns-5995bddff5-9l72c\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") " pod="openstack/dnsmasq-dns-5995bddff5-9l72c" Mar 12 18:47:05.437109 master-0 kubenswrapper[29097]: I0312 18:47:05.436709 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-dns-svc\") pod \"dnsmasq-dns-5995bddff5-9l72c\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") " pod="openstack/dnsmasq-dns-5995bddff5-9l72c" Mar 12 18:47:05.437109 
master-0 kubenswrapper[29097]: I0312 18:47:05.436737 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-config\") pod \"dnsmasq-dns-5995bddff5-9l72c\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") " pod="openstack/dnsmasq-dns-5995bddff5-9l72c" Mar 12 18:47:05.437109 master-0 kubenswrapper[29097]: I0312 18:47:05.436788 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-dns-swift-storage-0\") pod \"dnsmasq-dns-5995bddff5-9l72c\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") " pod="openstack/dnsmasq-dns-5995bddff5-9l72c" Mar 12 18:47:05.437109 master-0 kubenswrapper[29097]: I0312 18:47:05.436863 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gthf\" (UniqueName: \"kubernetes.io/projected/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-kube-api-access-4gthf\") pod \"dnsmasq-dns-5995bddff5-9l72c\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") " pod="openstack/dnsmasq-dns-5995bddff5-9l72c" Mar 12 18:47:05.438709 master-0 kubenswrapper[29097]: I0312 18:47:05.438052 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-ovsdbserver-sb\") pod \"dnsmasq-dns-5995bddff5-9l72c\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") " pod="openstack/dnsmasq-dns-5995bddff5-9l72c" Mar 12 18:47:05.438960 master-0 kubenswrapper[29097]: I0312 18:47:05.438923 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-ovsdbserver-nb\") pod \"dnsmasq-dns-5995bddff5-9l72c\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") " pod="openstack/dnsmasq-dns-5995bddff5-9l72c" Mar 12 
18:47:05.439513 master-0 kubenswrapper[29097]: I0312 18:47:05.439475 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-dns-svc\") pod \"dnsmasq-dns-5995bddff5-9l72c\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") " pod="openstack/dnsmasq-dns-5995bddff5-9l72c" Mar 12 18:47:05.440574 master-0 kubenswrapper[29097]: I0312 18:47:05.440509 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-config\") pod \"dnsmasq-dns-5995bddff5-9l72c\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") " pod="openstack/dnsmasq-dns-5995bddff5-9l72c" Mar 12 18:47:05.440644 master-0 kubenswrapper[29097]: I0312 18:47:05.440553 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-dns-swift-storage-0\") pod \"dnsmasq-dns-5995bddff5-9l72c\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") " pod="openstack/dnsmasq-dns-5995bddff5-9l72c" Mar 12 18:47:05.471631 master-0 kubenswrapper[29097]: I0312 18:47:05.470256 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gthf\" (UniqueName: \"kubernetes.io/projected/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-kube-api-access-4gthf\") pod \"dnsmasq-dns-5995bddff5-9l72c\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") " pod="openstack/dnsmasq-dns-5995bddff5-9l72c" Mar 12 18:47:05.562960 master-0 kubenswrapper[29097]: I0312 18:47:05.562908 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5995bddff5-9l72c" Mar 12 18:47:06.050884 master-0 kubenswrapper[29097]: I0312 18:47:06.050840 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5995bddff5-9l72c"] Mar 12 18:47:06.772432 master-0 kubenswrapper[29097]: I0312 18:47:06.772380 29097 generic.go:334] "Generic (PLEG): container finished" podID="7f5d0599-2540-4927-aceb-7cac7e4fb7c3" containerID="055f4329332c182e0ec61e3825ba4545f5ec87a983813619f833ac4f462a188e" exitCode=0 Mar 12 18:47:06.772432 master-0 kubenswrapper[29097]: I0312 18:47:06.772438 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5995bddff5-9l72c" event={"ID":"7f5d0599-2540-4927-aceb-7cac7e4fb7c3","Type":"ContainerDied","Data":"055f4329332c182e0ec61e3825ba4545f5ec87a983813619f833ac4f462a188e"} Mar 12 18:47:06.773100 master-0 kubenswrapper[29097]: I0312 18:47:06.772468 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5995bddff5-9l72c" event={"ID":"7f5d0599-2540-4927-aceb-7cac7e4fb7c3","Type":"ContainerStarted","Data":"d0dc8cf957189be028f21d6b3e9c20c395ff389e6ae3b4da4ab08e62cae7ef95"} Mar 12 18:47:07.782835 master-0 kubenswrapper[29097]: I0312 18:47:07.782733 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5995bddff5-9l72c" event={"ID":"7f5d0599-2540-4927-aceb-7cac7e4fb7c3","Type":"ContainerStarted","Data":"555150304c91c37d70a3b1552184b8e3f15ffc2bb2eff22068a4bbe623834939"} Mar 12 18:47:07.783370 master-0 kubenswrapper[29097]: I0312 18:47:07.782864 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5995bddff5-9l72c" Mar 12 18:47:07.784963 master-0 kubenswrapper[29097]: I0312 18:47:07.784918 29097 generic.go:334] "Generic (PLEG): container finished" podID="26884ddb-c62f-429a-a6e6-0c7cb20ffc8d" containerID="ada7874b69c550091c28070524956984fb48fc05c6db0c0d8e9729bf5dc5ed78" exitCode=0 Mar 12 18:47:07.785054 master-0 
kubenswrapper[29097]: I0312 18:47:07.784969 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bb2r7" event={"ID":"26884ddb-c62f-429a-a6e6-0c7cb20ffc8d","Type":"ContainerDied","Data":"ada7874b69c550091c28070524956984fb48fc05c6db0c0d8e9729bf5dc5ed78"} Mar 12 18:47:07.810327 master-0 kubenswrapper[29097]: I0312 18:47:07.810252 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5995bddff5-9l72c" podStartSLOduration=2.810234899 podStartE2EDuration="2.810234899s" podCreationTimestamp="2026-03-12 18:47:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:47:07.801658455 +0000 UTC m=+1067.355638562" watchObservedRunningTime="2026-03-12 18:47:07.810234899 +0000 UTC m=+1067.364214996" Mar 12 18:47:09.396467 master-0 kubenswrapper[29097]: I0312 18:47:09.396425 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-bb2r7" Mar 12 18:47:09.539865 master-0 kubenswrapper[29097]: I0312 18:47:09.539791 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26884ddb-c62f-429a-a6e6-0c7cb20ffc8d-config-data\") pod \"26884ddb-c62f-429a-a6e6-0c7cb20ffc8d\" (UID: \"26884ddb-c62f-429a-a6e6-0c7cb20ffc8d\") " Mar 12 18:47:09.540100 master-0 kubenswrapper[29097]: I0312 18:47:09.540076 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26884ddb-c62f-429a-a6e6-0c7cb20ffc8d-combined-ca-bundle\") pod \"26884ddb-c62f-429a-a6e6-0c7cb20ffc8d\" (UID: \"26884ddb-c62f-429a-a6e6-0c7cb20ffc8d\") " Mar 12 18:47:09.540157 master-0 kubenswrapper[29097]: I0312 18:47:09.540135 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9c7k\" (UniqueName: \"kubernetes.io/projected/26884ddb-c62f-429a-a6e6-0c7cb20ffc8d-kube-api-access-w9c7k\") pod \"26884ddb-c62f-429a-a6e6-0c7cb20ffc8d\" (UID: \"26884ddb-c62f-429a-a6e6-0c7cb20ffc8d\") " Mar 12 18:47:09.543746 master-0 kubenswrapper[29097]: I0312 18:47:09.543701 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26884ddb-c62f-429a-a6e6-0c7cb20ffc8d-kube-api-access-w9c7k" (OuterVolumeSpecName: "kube-api-access-w9c7k") pod "26884ddb-c62f-429a-a6e6-0c7cb20ffc8d" (UID: "26884ddb-c62f-429a-a6e6-0c7cb20ffc8d"). InnerVolumeSpecName "kube-api-access-w9c7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:47:09.563873 master-0 kubenswrapper[29097]: I0312 18:47:09.563706 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26884ddb-c62f-429a-a6e6-0c7cb20ffc8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26884ddb-c62f-429a-a6e6-0c7cb20ffc8d" (UID: "26884ddb-c62f-429a-a6e6-0c7cb20ffc8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:09.593464 master-0 kubenswrapper[29097]: I0312 18:47:09.593390 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26884ddb-c62f-429a-a6e6-0c7cb20ffc8d-config-data" (OuterVolumeSpecName: "config-data") pod "26884ddb-c62f-429a-a6e6-0c7cb20ffc8d" (UID: "26884ddb-c62f-429a-a6e6-0c7cb20ffc8d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:09.643741 master-0 kubenswrapper[29097]: I0312 18:47:09.643642 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26884ddb-c62f-429a-a6e6-0c7cb20ffc8d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:09.643741 master-0 kubenswrapper[29097]: I0312 18:47:09.643718 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9c7k\" (UniqueName: \"kubernetes.io/projected/26884ddb-c62f-429a-a6e6-0c7cb20ffc8d-kube-api-access-w9c7k\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:09.643741 master-0 kubenswrapper[29097]: I0312 18:47:09.643744 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26884ddb-c62f-429a-a6e6-0c7cb20ffc8d-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:09.808727 master-0 kubenswrapper[29097]: I0312 18:47:09.808651 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-sync-bb2r7" event={"ID":"26884ddb-c62f-429a-a6e6-0c7cb20ffc8d","Type":"ContainerDied","Data":"f34690a02729098e9bc56887bea5931bae3a03665ca1d072149ab446a55016ef"} Mar 12 18:47:09.808727 master-0 kubenswrapper[29097]: I0312 18:47:09.808710 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f34690a02729098e9bc56887bea5931bae3a03665ca1d072149ab446a55016ef" Mar 12 18:47:09.808727 master-0 kubenswrapper[29097]: I0312 18:47:09.808730 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bb2r7" Mar 12 18:47:10.144301 master-0 kubenswrapper[29097]: I0312 18:47:10.143464 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-pxk9z"] Mar 12 18:47:10.144301 master-0 kubenswrapper[29097]: E0312 18:47:10.144292 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26884ddb-c62f-429a-a6e6-0c7cb20ffc8d" containerName="keystone-db-sync" Mar 12 18:47:10.144555 master-0 kubenswrapper[29097]: I0312 18:47:10.144313 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="26884ddb-c62f-429a-a6e6-0c7cb20ffc8d" containerName="keystone-db-sync" Mar 12 18:47:10.144636 master-0 kubenswrapper[29097]: I0312 18:47:10.144613 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="26884ddb-c62f-429a-a6e6-0c7cb20ffc8d" containerName="keystone-db-sync" Mar 12 18:47:10.146222 master-0 kubenswrapper[29097]: I0312 18:47:10.146188 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pxk9z" Mar 12 18:47:10.155627 master-0 kubenswrapper[29097]: I0312 18:47:10.151481 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 12 18:47:10.155627 master-0 kubenswrapper[29097]: I0312 18:47:10.151543 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 12 18:47:10.162598 master-0 kubenswrapper[29097]: I0312 18:47:10.160697 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 12 18:47:10.162598 master-0 kubenswrapper[29097]: I0312 18:47:10.160967 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 12 18:47:10.219950 master-0 kubenswrapper[29097]: I0312 18:47:10.218836 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pxk9z"] Mar 12 18:47:10.259708 master-0 kubenswrapper[29097]: I0312 18:47:10.251608 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5995bddff5-9l72c"] Mar 12 18:47:10.259708 master-0 kubenswrapper[29097]: I0312 18:47:10.251897 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5995bddff5-9l72c" podUID="7f5d0599-2540-4927-aceb-7cac7e4fb7c3" containerName="dnsmasq-dns" containerID="cri-o://555150304c91c37d70a3b1552184b8e3f15ffc2bb2eff22068a4bbe623834939" gracePeriod=10 Mar 12 18:47:10.270581 master-0 kubenswrapper[29097]: I0312 18:47:10.269221 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-combined-ca-bundle\") pod \"keystone-bootstrap-pxk9z\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " pod="openstack/keystone-bootstrap-pxk9z" Mar 12 18:47:10.270581 master-0 kubenswrapper[29097]: I0312 18:47:10.269477 29097 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-scripts\") pod \"keystone-bootstrap-pxk9z\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " pod="openstack/keystone-bootstrap-pxk9z" Mar 12 18:47:10.270581 master-0 kubenswrapper[29097]: I0312 18:47:10.269679 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-config-data\") pod \"keystone-bootstrap-pxk9z\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " pod="openstack/keystone-bootstrap-pxk9z" Mar 12 18:47:10.270581 master-0 kubenswrapper[29097]: I0312 18:47:10.269719 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-credential-keys\") pod \"keystone-bootstrap-pxk9z\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " pod="openstack/keystone-bootstrap-pxk9z" Mar 12 18:47:10.270581 master-0 kubenswrapper[29097]: I0312 18:47:10.269806 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n788\" (UniqueName: \"kubernetes.io/projected/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-kube-api-access-7n788\") pod \"keystone-bootstrap-pxk9z\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " pod="openstack/keystone-bootstrap-pxk9z" Mar 12 18:47:10.270581 master-0 kubenswrapper[29097]: I0312 18:47:10.269848 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-fernet-keys\") pod \"keystone-bootstrap-pxk9z\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " pod="openstack/keystone-bootstrap-pxk9z" Mar 12 18:47:10.302540 master-0 
kubenswrapper[29097]: I0312 18:47:10.300817 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8695667fff-k8fx9"] Mar 12 18:47:10.308624 master-0 kubenswrapper[29097]: I0312 18:47:10.303017 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8695667fff-k8fx9" Mar 12 18:47:10.316636 master-0 kubenswrapper[29097]: I0312 18:47:10.313923 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8695667fff-k8fx9"] Mar 12 18:47:10.324703 master-0 kubenswrapper[29097]: I0312 18:47:10.324569 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-4v6wn"] Mar 12 18:47:10.347717 master-0 kubenswrapper[29097]: I0312 18:47:10.342504 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-4v6wn" Mar 12 18:47:10.391126 master-0 kubenswrapper[29097]: I0312 18:47:10.386332 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n788\" (UniqueName: \"kubernetes.io/projected/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-kube-api-access-7n788\") pod \"keystone-bootstrap-pxk9z\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " pod="openstack/keystone-bootstrap-pxk9z" Mar 12 18:47:10.391126 master-0 kubenswrapper[29097]: I0312 18:47:10.386426 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-fernet-keys\") pod \"keystone-bootstrap-pxk9z\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " pod="openstack/keystone-bootstrap-pxk9z" Mar 12 18:47:10.391126 master-0 kubenswrapper[29097]: I0312 18:47:10.386475 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-combined-ca-bundle\") pod \"keystone-bootstrap-pxk9z\" (UID: 
\"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " pod="openstack/keystone-bootstrap-pxk9z" Mar 12 18:47:10.391126 master-0 kubenswrapper[29097]: I0312 18:47:10.386645 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-scripts\") pod \"keystone-bootstrap-pxk9z\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " pod="openstack/keystone-bootstrap-pxk9z" Mar 12 18:47:10.391126 master-0 kubenswrapper[29097]: I0312 18:47:10.386792 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-config-data\") pod \"keystone-bootstrap-pxk9z\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " pod="openstack/keystone-bootstrap-pxk9z" Mar 12 18:47:10.391126 master-0 kubenswrapper[29097]: I0312 18:47:10.386835 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-credential-keys\") pod \"keystone-bootstrap-pxk9z\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " pod="openstack/keystone-bootstrap-pxk9z" Mar 12 18:47:10.394287 master-0 kubenswrapper[29097]: I0312 18:47:10.394244 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-combined-ca-bundle\") pod \"keystone-bootstrap-pxk9z\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " pod="openstack/keystone-bootstrap-pxk9z" Mar 12 18:47:10.439644 master-0 kubenswrapper[29097]: I0312 18:47:10.415627 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-4v6wn"] Mar 12 18:47:10.439644 master-0 kubenswrapper[29097]: I0312 18:47:10.424873 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-scripts\") pod \"keystone-bootstrap-pxk9z\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " pod="openstack/keystone-bootstrap-pxk9z" Mar 12 18:47:10.439644 master-0 kubenswrapper[29097]: I0312 18:47:10.431464 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n788\" (UniqueName: \"kubernetes.io/projected/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-kube-api-access-7n788\") pod \"keystone-bootstrap-pxk9z\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " pod="openstack/keystone-bootstrap-pxk9z" Mar 12 18:47:10.446507 master-0 kubenswrapper[29097]: I0312 18:47:10.442631 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-config-data\") pod \"keystone-bootstrap-pxk9z\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " pod="openstack/keystone-bootstrap-pxk9z" Mar 12 18:47:10.453400 master-0 kubenswrapper[29097]: I0312 18:47:10.451318 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-fernet-keys\") pod \"keystone-bootstrap-pxk9z\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " pod="openstack/keystone-bootstrap-pxk9z" Mar 12 18:47:10.461964 master-0 kubenswrapper[29097]: I0312 18:47:10.456890 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-credential-keys\") pod \"keystone-bootstrap-pxk9z\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " pod="openstack/keystone-bootstrap-pxk9z" Mar 12 18:47:10.490151 master-0 kubenswrapper[29097]: I0312 18:47:10.486100 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pxk9z" Mar 12 18:47:10.500565 master-0 kubenswrapper[29097]: I0312 18:47:10.499421 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-dns-swift-storage-0\") pod \"dnsmasq-dns-8695667fff-k8fx9\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " pod="openstack/dnsmasq-dns-8695667fff-k8fx9" Mar 12 18:47:10.500565 master-0 kubenswrapper[29097]: I0312 18:47:10.499587 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/784b2b8e-d340-4f65-8abb-ad196b08ed6f-operator-scripts\") pod \"ironic-db-create-4v6wn\" (UID: \"784b2b8e-d340-4f65-8abb-ad196b08ed6f\") " pod="openstack/ironic-db-create-4v6wn" Mar 12 18:47:10.500565 master-0 kubenswrapper[29097]: I0312 18:47:10.499640 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkgph\" (UniqueName: \"kubernetes.io/projected/58ba054c-0e16-46e8-b7e5-861db4f81fa3-kube-api-access-hkgph\") pod \"dnsmasq-dns-8695667fff-k8fx9\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " pod="openstack/dnsmasq-dns-8695667fff-k8fx9" Mar 12 18:47:10.500565 master-0 kubenswrapper[29097]: I0312 18:47:10.499684 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-ovsdbserver-sb\") pod \"dnsmasq-dns-8695667fff-k8fx9\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " pod="openstack/dnsmasq-dns-8695667fff-k8fx9" Mar 12 18:47:10.500565 master-0 kubenswrapper[29097]: I0312 18:47:10.499701 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-ovsdbserver-nb\") pod \"dnsmasq-dns-8695667fff-k8fx9\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " pod="openstack/dnsmasq-dns-8695667fff-k8fx9" Mar 12 18:47:10.500565 master-0 kubenswrapper[29097]: I0312 18:47:10.499802 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-dns-svc\") pod \"dnsmasq-dns-8695667fff-k8fx9\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " pod="openstack/dnsmasq-dns-8695667fff-k8fx9" Mar 12 18:47:10.500565 master-0 kubenswrapper[29097]: I0312 18:47:10.499852 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cscx\" (UniqueName: \"kubernetes.io/projected/784b2b8e-d340-4f65-8abb-ad196b08ed6f-kube-api-access-5cscx\") pod \"ironic-db-create-4v6wn\" (UID: \"784b2b8e-d340-4f65-8abb-ad196b08ed6f\") " pod="openstack/ironic-db-create-4v6wn" Mar 12 18:47:10.500565 master-0 kubenswrapper[29097]: I0312 18:47:10.499899 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-config\") pod \"dnsmasq-dns-8695667fff-k8fx9\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " pod="openstack/dnsmasq-dns-8695667fff-k8fx9" Mar 12 18:47:10.591627 master-0 kubenswrapper[29097]: I0312 18:47:10.590506 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-fa62f-db-sync-xn4dx"] Mar 12 18:47:10.629722 master-0 kubenswrapper[29097]: I0312 18:47:10.628060 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-dns-svc\") pod \"dnsmasq-dns-8695667fff-k8fx9\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " 
pod="openstack/dnsmasq-dns-8695667fff-k8fx9" Mar 12 18:47:10.629722 master-0 kubenswrapper[29097]: I0312 18:47:10.628177 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cscx\" (UniqueName: \"kubernetes.io/projected/784b2b8e-d340-4f65-8abb-ad196b08ed6f-kube-api-access-5cscx\") pod \"ironic-db-create-4v6wn\" (UID: \"784b2b8e-d340-4f65-8abb-ad196b08ed6f\") " pod="openstack/ironic-db-create-4v6wn" Mar 12 18:47:10.629722 master-0 kubenswrapper[29097]: I0312 18:47:10.628237 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-config\") pod \"dnsmasq-dns-8695667fff-k8fx9\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " pod="openstack/dnsmasq-dns-8695667fff-k8fx9" Mar 12 18:47:10.629722 master-0 kubenswrapper[29097]: I0312 18:47:10.628415 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-dns-swift-storage-0\") pod \"dnsmasq-dns-8695667fff-k8fx9\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " pod="openstack/dnsmasq-dns-8695667fff-k8fx9" Mar 12 18:47:10.630041 master-0 kubenswrapper[29097]: I0312 18:47:10.630008 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/784b2b8e-d340-4f65-8abb-ad196b08ed6f-operator-scripts\") pod \"ironic-db-create-4v6wn\" (UID: \"784b2b8e-d340-4f65-8abb-ad196b08ed6f\") " pod="openstack/ironic-db-create-4v6wn" Mar 12 18:47:10.630109 master-0 kubenswrapper[29097]: I0312 18:47:10.630092 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkgph\" (UniqueName: \"kubernetes.io/projected/58ba054c-0e16-46e8-b7e5-861db4f81fa3-kube-api-access-hkgph\") pod \"dnsmasq-dns-8695667fff-k8fx9\" (UID: 
\"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " pod="openstack/dnsmasq-dns-8695667fff-k8fx9" Mar 12 18:47:10.630153 master-0 kubenswrapper[29097]: I0312 18:47:10.630134 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-ovsdbserver-sb\") pod \"dnsmasq-dns-8695667fff-k8fx9\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " pod="openstack/dnsmasq-dns-8695667fff-k8fx9" Mar 12 18:47:10.630193 master-0 kubenswrapper[29097]: I0312 18:47:10.630155 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-ovsdbserver-nb\") pod \"dnsmasq-dns-8695667fff-k8fx9\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " pod="openstack/dnsmasq-dns-8695667fff-k8fx9" Mar 12 18:47:10.640536 master-0 kubenswrapper[29097]: I0312 18:47:10.631000 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-ovsdbserver-nb\") pod \"dnsmasq-dns-8695667fff-k8fx9\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " pod="openstack/dnsmasq-dns-8695667fff-k8fx9" Mar 12 18:47:10.640536 master-0 kubenswrapper[29097]: I0312 18:47:10.631547 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-dns-svc\") pod \"dnsmasq-dns-8695667fff-k8fx9\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " pod="openstack/dnsmasq-dns-8695667fff-k8fx9" Mar 12 18:47:10.640536 master-0 kubenswrapper[29097]: I0312 18:47:10.635801 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/784b2b8e-d340-4f65-8abb-ad196b08ed6f-operator-scripts\") pod \"ironic-db-create-4v6wn\" (UID: 
\"784b2b8e-d340-4f65-8abb-ad196b08ed6f\") " pod="openstack/ironic-db-create-4v6wn" Mar 12 18:47:10.640536 master-0 kubenswrapper[29097]: I0312 18:47:10.637476 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-ovsdbserver-sb\") pod \"dnsmasq-dns-8695667fff-k8fx9\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " pod="openstack/dnsmasq-dns-8695667fff-k8fx9" Mar 12 18:47:10.640536 master-0 kubenswrapper[29097]: I0312 18:47:10.637541 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-dns-swift-storage-0\") pod \"dnsmasq-dns-8695667fff-k8fx9\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " pod="openstack/dnsmasq-dns-8695667fff-k8fx9" Mar 12 18:47:10.644093 master-0 kubenswrapper[29097]: I0312 18:47:10.643004 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-config\") pod \"dnsmasq-dns-8695667fff-k8fx9\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " pod="openstack/dnsmasq-dns-8695667fff-k8fx9" Mar 12 18:47:10.653946 master-0 kubenswrapper[29097]: I0312 18:47:10.651865 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-fa62f-db-sync-xn4dx" Mar 12 18:47:10.668795 master-0 kubenswrapper[29097]: I0312 18:47:10.657242 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-fa62f-scripts" Mar 12 18:47:10.668795 master-0 kubenswrapper[29097]: I0312 18:47:10.657497 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-fa62f-config-data" Mar 12 18:47:10.668795 master-0 kubenswrapper[29097]: I0312 18:47:10.665573 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fa62f-db-sync-xn4dx"] Mar 12 18:47:10.688638 master-0 kubenswrapper[29097]: I0312 18:47:10.682580 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-0e3d-account-create-update-pb9cg"] Mar 12 18:47:10.688638 master-0 kubenswrapper[29097]: I0312 18:47:10.683924 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-0e3d-account-create-update-pb9cg" Mar 12 18:47:10.701983 master-0 kubenswrapper[29097]: I0312 18:47:10.691620 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret" Mar 12 18:47:10.701983 master-0 kubenswrapper[29097]: I0312 18:47:10.700419 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-ptnfn"] Mar 12 18:47:10.701983 master-0 kubenswrapper[29097]: I0312 18:47:10.700696 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkgph\" (UniqueName: \"kubernetes.io/projected/58ba054c-0e16-46e8-b7e5-861db4f81fa3-kube-api-access-hkgph\") pod \"dnsmasq-dns-8695667fff-k8fx9\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " pod="openstack/dnsmasq-dns-8695667fff-k8fx9" Mar 12 18:47:10.713050 master-0 kubenswrapper[29097]: I0312 18:47:10.703394 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-ptnfn" Mar 12 18:47:10.713050 master-0 kubenswrapper[29097]: I0312 18:47:10.706251 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cscx\" (UniqueName: \"kubernetes.io/projected/784b2b8e-d340-4f65-8abb-ad196b08ed6f-kube-api-access-5cscx\") pod \"ironic-db-create-4v6wn\" (UID: \"784b2b8e-d340-4f65-8abb-ad196b08ed6f\") " pod="openstack/ironic-db-create-4v6wn" Mar 12 18:47:10.713050 master-0 kubenswrapper[29097]: I0312 18:47:10.712665 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-0e3d-account-create-update-pb9cg"] Mar 12 18:47:10.722621 master-0 kubenswrapper[29097]: I0312 18:47:10.722528 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 12 18:47:10.727381 master-0 kubenswrapper[29097]: I0312 18:47:10.722951 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 12 18:47:10.769010 master-0 kubenswrapper[29097]: I0312 18:47:10.731495 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-etc-machine-id\") pod \"cinder-fa62f-db-sync-xn4dx\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " pod="openstack/cinder-fa62f-db-sync-xn4dx" Mar 12 18:47:10.769010 master-0 kubenswrapper[29097]: I0312 18:47:10.731563 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-combined-ca-bundle\") pod \"cinder-fa62f-db-sync-xn4dx\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " pod="openstack/cinder-fa62f-db-sync-xn4dx" Mar 12 18:47:10.769010 master-0 kubenswrapper[29097]: I0312 18:47:10.731612 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22939625-8570-4e99-9070-5031a539e183-operator-scripts\") pod \"ironic-0e3d-account-create-update-pb9cg\" (UID: \"22939625-8570-4e99-9070-5031a539e183\") " pod="openstack/ironic-0e3d-account-create-update-pb9cg" Mar 12 18:47:10.769010 master-0 kubenswrapper[29097]: I0312 18:47:10.731679 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-config-data\") pod \"cinder-fa62f-db-sync-xn4dx\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " pod="openstack/cinder-fa62f-db-sync-xn4dx" Mar 12 18:47:10.769010 master-0 kubenswrapper[29097]: I0312 18:47:10.754779 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-4v6wn" Mar 12 18:47:10.769010 master-0 kubenswrapper[29097]: I0312 18:47:10.755312 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4v7d\" (UniqueName: \"kubernetes.io/projected/22939625-8570-4e99-9070-5031a539e183-kube-api-access-b4v7d\") pod \"ironic-0e3d-account-create-update-pb9cg\" (UID: \"22939625-8570-4e99-9070-5031a539e183\") " pod="openstack/ironic-0e3d-account-create-update-pb9cg" Mar 12 18:47:10.769010 master-0 kubenswrapper[29097]: I0312 18:47:10.755410 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-db-sync-config-data\") pod \"cinder-fa62f-db-sync-xn4dx\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " pod="openstack/cinder-fa62f-db-sync-xn4dx" Mar 12 18:47:10.769010 master-0 kubenswrapper[29097]: I0312 18:47:10.755446 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnfhw\" (UniqueName: 
\"kubernetes.io/projected/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-kube-api-access-xnfhw\") pod \"cinder-fa62f-db-sync-xn4dx\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " pod="openstack/cinder-fa62f-db-sync-xn4dx" Mar 12 18:47:10.769010 master-0 kubenswrapper[29097]: I0312 18:47:10.755524 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-scripts\") pod \"cinder-fa62f-db-sync-xn4dx\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " pod="openstack/cinder-fa62f-db-sync-xn4dx" Mar 12 18:47:10.769010 master-0 kubenswrapper[29097]: I0312 18:47:10.765714 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ptnfn"] Mar 12 18:47:10.815610 master-0 kubenswrapper[29097]: I0312 18:47:10.808359 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8695667fff-k8fx9"] Mar 12 18:47:10.815610 master-0 kubenswrapper[29097]: I0312 18:47:10.809105 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8695667fff-k8fx9" Mar 12 18:47:10.837590 master-0 kubenswrapper[29097]: I0312 18:47:10.826615 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-s5w8c"] Mar 12 18:47:10.837590 master-0 kubenswrapper[29097]: I0312 18:47:10.828033 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-s5w8c" Mar 12 18:47:10.837590 master-0 kubenswrapper[29097]: I0312 18:47:10.830986 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 12 18:47:10.837590 master-0 kubenswrapper[29097]: I0312 18:47:10.831140 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 12 18:47:10.841066 master-0 kubenswrapper[29097]: I0312 18:47:10.841018 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-s5w8c"] Mar 12 18:47:10.852441 master-0 kubenswrapper[29097]: I0312 18:47:10.852407 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56c5578c7c-zjbch"] Mar 12 18:47:10.854532 master-0 kubenswrapper[29097]: I0312 18:47:10.854475 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c5578c7c-zjbch" Mar 12 18:47:10.871582 master-0 kubenswrapper[29097]: I0312 18:47:10.857658 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1-config\") pod \"neutron-db-sync-ptnfn\" (UID: \"1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1\") " pod="openstack/neutron-db-sync-ptnfn" Mar 12 18:47:10.871582 master-0 kubenswrapper[29097]: I0312 18:47:10.857737 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-etc-machine-id\") pod \"cinder-fa62f-db-sync-xn4dx\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " pod="openstack/cinder-fa62f-db-sync-xn4dx" Mar 12 18:47:10.871582 master-0 kubenswrapper[29097]: I0312 18:47:10.857756 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-combined-ca-bundle\") pod \"cinder-fa62f-db-sync-xn4dx\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " pod="openstack/cinder-fa62f-db-sync-xn4dx" Mar 12 18:47:10.871582 master-0 kubenswrapper[29097]: I0312 18:47:10.857798 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1-combined-ca-bundle\") pod \"neutron-db-sync-ptnfn\" (UID: \"1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1\") " pod="openstack/neutron-db-sync-ptnfn" Mar 12 18:47:10.871582 master-0 kubenswrapper[29097]: I0312 18:47:10.857835 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22939625-8570-4e99-9070-5031a539e183-operator-scripts\") pod \"ironic-0e3d-account-create-update-pb9cg\" (UID: \"22939625-8570-4e99-9070-5031a539e183\") " pod="openstack/ironic-0e3d-account-create-update-pb9cg" Mar 12 18:47:10.871582 master-0 kubenswrapper[29097]: I0312 18:47:10.857893 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-config-data\") pod \"cinder-fa62f-db-sync-xn4dx\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " pod="openstack/cinder-fa62f-db-sync-xn4dx" Mar 12 18:47:10.871582 master-0 kubenswrapper[29097]: I0312 18:47:10.857926 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wvlf\" (UniqueName: \"kubernetes.io/projected/1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1-kube-api-access-4wvlf\") pod \"neutron-db-sync-ptnfn\" (UID: \"1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1\") " pod="openstack/neutron-db-sync-ptnfn" Mar 12 18:47:10.871582 master-0 kubenswrapper[29097]: I0312 18:47:10.857949 29097 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-b4v7d\" (UniqueName: \"kubernetes.io/projected/22939625-8570-4e99-9070-5031a539e183-kube-api-access-b4v7d\") pod \"ironic-0e3d-account-create-update-pb9cg\" (UID: \"22939625-8570-4e99-9070-5031a539e183\") " pod="openstack/ironic-0e3d-account-create-update-pb9cg" Mar 12 18:47:10.871582 master-0 kubenswrapper[29097]: I0312 18:47:10.858156 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-db-sync-config-data\") pod \"cinder-fa62f-db-sync-xn4dx\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " pod="openstack/cinder-fa62f-db-sync-xn4dx" Mar 12 18:47:10.871582 master-0 kubenswrapper[29097]: I0312 18:47:10.858220 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnfhw\" (UniqueName: \"kubernetes.io/projected/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-kube-api-access-xnfhw\") pod \"cinder-fa62f-db-sync-xn4dx\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " pod="openstack/cinder-fa62f-db-sync-xn4dx" Mar 12 18:47:10.871582 master-0 kubenswrapper[29097]: I0312 18:47:10.858323 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-scripts\") pod \"cinder-fa62f-db-sync-xn4dx\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " pod="openstack/cinder-fa62f-db-sync-xn4dx" Mar 12 18:47:10.871582 master-0 kubenswrapper[29097]: I0312 18:47:10.863208 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-etc-machine-id\") pod \"cinder-fa62f-db-sync-xn4dx\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " pod="openstack/cinder-fa62f-db-sync-xn4dx" Mar 12 18:47:10.871582 master-0 kubenswrapper[29097]: I0312 18:47:10.863212 
29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-combined-ca-bundle\") pod \"cinder-fa62f-db-sync-xn4dx\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " pod="openstack/cinder-fa62f-db-sync-xn4dx" Mar 12 18:47:10.871582 master-0 kubenswrapper[29097]: I0312 18:47:10.863911 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22939625-8570-4e99-9070-5031a539e183-operator-scripts\") pod \"ironic-0e3d-account-create-update-pb9cg\" (UID: \"22939625-8570-4e99-9070-5031a539e183\") " pod="openstack/ironic-0e3d-account-create-update-pb9cg" Mar 12 18:47:10.875780 master-0 kubenswrapper[29097]: I0312 18:47:10.874487 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-scripts\") pod \"cinder-fa62f-db-sync-xn4dx\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " pod="openstack/cinder-fa62f-db-sync-xn4dx" Mar 12 18:47:10.884742 master-0 kubenswrapper[29097]: I0312 18:47:10.876350 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-db-sync-config-data\") pod \"cinder-fa62f-db-sync-xn4dx\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " pod="openstack/cinder-fa62f-db-sync-xn4dx" Mar 12 18:47:10.884742 master-0 kubenswrapper[29097]: I0312 18:47:10.879622 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56c5578c7c-zjbch"] Mar 12 18:47:10.886238 master-0 kubenswrapper[29097]: I0312 18:47:10.886212 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-config-data\") pod \"cinder-fa62f-db-sync-xn4dx\" (UID: 
\"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " pod="openstack/cinder-fa62f-db-sync-xn4dx"
Mar 12 18:47:10.886990 master-0 kubenswrapper[29097]: I0312 18:47:10.886946 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4v7d\" (UniqueName: \"kubernetes.io/projected/22939625-8570-4e99-9070-5031a539e183-kube-api-access-b4v7d\") pod \"ironic-0e3d-account-create-update-pb9cg\" (UID: \"22939625-8570-4e99-9070-5031a539e183\") " pod="openstack/ironic-0e3d-account-create-update-pb9cg"
Mar 12 18:47:10.901165 master-0 kubenswrapper[29097]: I0312 18:47:10.889806 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnfhw\" (UniqueName: \"kubernetes.io/projected/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-kube-api-access-xnfhw\") pod \"cinder-fa62f-db-sync-xn4dx\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " pod="openstack/cinder-fa62f-db-sync-xn4dx"
Mar 12 18:47:10.930769 master-0 kubenswrapper[29097]: I0312 18:47:10.928634 29097 generic.go:334] "Generic (PLEG): container finished" podID="7f5d0599-2540-4927-aceb-7cac7e4fb7c3" containerID="555150304c91c37d70a3b1552184b8e3f15ffc2bb2eff22068a4bbe623834939" exitCode=0
Mar 12 18:47:10.930769 master-0 kubenswrapper[29097]: I0312 18:47:10.928665 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5995bddff5-9l72c" event={"ID":"7f5d0599-2540-4927-aceb-7cac7e4fb7c3","Type":"ContainerDied","Data":"555150304c91c37d70a3b1552184b8e3f15ffc2bb2eff22068a4bbe623834939"}
Mar 12 18:47:10.963718 master-0 kubenswrapper[29097]: I0312 18:47:10.961167 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-ovsdbserver-nb\") pod \"dnsmasq-dns-56c5578c7c-zjbch\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " pod="openstack/dnsmasq-dns-56c5578c7c-zjbch"
Mar 12 18:47:10.963718 master-0 kubenswrapper[29097]: I0312 18:47:10.961324 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1-config\") pod \"neutron-db-sync-ptnfn\" (UID: \"1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1\") " pod="openstack/neutron-db-sync-ptnfn"
Mar 12 18:47:10.963718 master-0 kubenswrapper[29097]: I0312 18:47:10.961437 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1-combined-ca-bundle\") pod \"neutron-db-sync-ptnfn\" (UID: \"1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1\") " pod="openstack/neutron-db-sync-ptnfn"
Mar 12 18:47:10.963718 master-0 kubenswrapper[29097]: I0312 18:47:10.961468 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-dns-swift-storage-0\") pod \"dnsmasq-dns-56c5578c7c-zjbch\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " pod="openstack/dnsmasq-dns-56c5578c7c-zjbch"
Mar 12 18:47:10.963718 master-0 kubenswrapper[29097]: I0312 18:47:10.961498 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw4gh\" (UniqueName: \"kubernetes.io/projected/73e3cd3a-c873-4b0f-870d-26ba00b0a910-kube-api-access-dw4gh\") pod \"dnsmasq-dns-56c5578c7c-zjbch\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " pod="openstack/dnsmasq-dns-56c5578c7c-zjbch"
Mar 12 18:47:10.963718 master-0 kubenswrapper[29097]: I0312 18:47:10.961545 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-config\") pod \"dnsmasq-dns-56c5578c7c-zjbch\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " pod="openstack/dnsmasq-dns-56c5578c7c-zjbch"
Mar 12 18:47:10.963718 master-0 kubenswrapper[29097]: I0312 18:47:10.961568 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6f8a81-a597-4c6d-ae77-b60b36190af6-combined-ca-bundle\") pod \"placement-db-sync-s5w8c\" (UID: \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\") " pod="openstack/placement-db-sync-s5w8c"
Mar 12 18:47:10.963718 master-0 kubenswrapper[29097]: I0312 18:47:10.961643 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-ovsdbserver-sb\") pod \"dnsmasq-dns-56c5578c7c-zjbch\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " pod="openstack/dnsmasq-dns-56c5578c7c-zjbch"
Mar 12 18:47:10.963718 master-0 kubenswrapper[29097]: I0312 18:47:10.961715 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wvlf\" (UniqueName: \"kubernetes.io/projected/1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1-kube-api-access-4wvlf\") pod \"neutron-db-sync-ptnfn\" (UID: \"1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1\") " pod="openstack/neutron-db-sync-ptnfn"
Mar 12 18:47:10.963718 master-0 kubenswrapper[29097]: I0312 18:47:10.961793 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae6f8a81-a597-4c6d-ae77-b60b36190af6-config-data\") pod \"placement-db-sync-s5w8c\" (UID: \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\") " pod="openstack/placement-db-sync-s5w8c"
Mar 12 18:47:10.963718 master-0 kubenswrapper[29097]: I0312 18:47:10.962795 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-dns-svc\") pod \"dnsmasq-dns-56c5578c7c-zjbch\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " pod="openstack/dnsmasq-dns-56c5578c7c-zjbch"
Mar 12 18:47:10.963718 master-0 kubenswrapper[29097]: I0312 18:47:10.962887 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae6f8a81-a597-4c6d-ae77-b60b36190af6-logs\") pod \"placement-db-sync-s5w8c\" (UID: \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\") " pod="openstack/placement-db-sync-s5w8c"
Mar 12 18:47:10.963718 master-0 kubenswrapper[29097]: I0312 18:47:10.962940 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc85z\" (UniqueName: \"kubernetes.io/projected/ae6f8a81-a597-4c6d-ae77-b60b36190af6-kube-api-access-lc85z\") pod \"placement-db-sync-s5w8c\" (UID: \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\") " pod="openstack/placement-db-sync-s5w8c"
Mar 12 18:47:10.963718 master-0 kubenswrapper[29097]: I0312 18:47:10.962970 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae6f8a81-a597-4c6d-ae77-b60b36190af6-scripts\") pod \"placement-db-sync-s5w8c\" (UID: \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\") " pod="openstack/placement-db-sync-s5w8c"
Mar 12 18:47:10.974628 master-0 kubenswrapper[29097]: I0312 18:47:10.971678 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1-combined-ca-bundle\") pod \"neutron-db-sync-ptnfn\" (UID: \"1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1\") " pod="openstack/neutron-db-sync-ptnfn"
Mar 12 18:47:10.999549 master-0 kubenswrapper[29097]: I0312 18:47:10.995870 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1-config\") pod \"neutron-db-sync-ptnfn\" (UID: \"1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1\") " pod="openstack/neutron-db-sync-ptnfn"
Mar 12 18:47:11.008926 master-0 kubenswrapper[29097]: I0312 18:47:11.004898 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wvlf\" (UniqueName: \"kubernetes.io/projected/1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1-kube-api-access-4wvlf\") pod \"neutron-db-sync-ptnfn\" (UID: \"1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1\") " pod="openstack/neutron-db-sync-ptnfn"
Mar 12 18:47:11.068787 master-0 kubenswrapper[29097]: I0312 18:47:11.068728 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-dns-swift-storage-0\") pod \"dnsmasq-dns-56c5578c7c-zjbch\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " pod="openstack/dnsmasq-dns-56c5578c7c-zjbch"
Mar 12 18:47:11.068787 master-0 kubenswrapper[29097]: I0312 18:47:11.068779 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw4gh\" (UniqueName: \"kubernetes.io/projected/73e3cd3a-c873-4b0f-870d-26ba00b0a910-kube-api-access-dw4gh\") pod \"dnsmasq-dns-56c5578c7c-zjbch\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " pod="openstack/dnsmasq-dns-56c5578c7c-zjbch"
Mar 12 18:47:11.068934 master-0 kubenswrapper[29097]: I0312 18:47:11.068807 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-config\") pod \"dnsmasq-dns-56c5578c7c-zjbch\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " pod="openstack/dnsmasq-dns-56c5578c7c-zjbch"
Mar 12 18:47:11.068934 master-0 kubenswrapper[29097]: I0312 18:47:11.068828 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6f8a81-a597-4c6d-ae77-b60b36190af6-combined-ca-bundle\") pod \"placement-db-sync-s5w8c\" (UID: \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\") " pod="openstack/placement-db-sync-s5w8c"
Mar 12 18:47:11.068934 master-0 kubenswrapper[29097]: I0312 18:47:11.068860 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-ovsdbserver-sb\") pod \"dnsmasq-dns-56c5578c7c-zjbch\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " pod="openstack/dnsmasq-dns-56c5578c7c-zjbch"
Mar 12 18:47:11.068934 master-0 kubenswrapper[29097]: I0312 18:47:11.068909 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae6f8a81-a597-4c6d-ae77-b60b36190af6-config-data\") pod \"placement-db-sync-s5w8c\" (UID: \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\") " pod="openstack/placement-db-sync-s5w8c"
Mar 12 18:47:11.069064 master-0 kubenswrapper[29097]: I0312 18:47:11.068942 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-dns-svc\") pod \"dnsmasq-dns-56c5578c7c-zjbch\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " pod="openstack/dnsmasq-dns-56c5578c7c-zjbch"
Mar 12 18:47:11.069064 master-0 kubenswrapper[29097]: I0312 18:47:11.068966 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae6f8a81-a597-4c6d-ae77-b60b36190af6-logs\") pod \"placement-db-sync-s5w8c\" (UID: \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\") " pod="openstack/placement-db-sync-s5w8c"
Mar 12 18:47:11.069064 master-0 kubenswrapper[29097]: I0312 18:47:11.068993 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc85z\" (UniqueName: \"kubernetes.io/projected/ae6f8a81-a597-4c6d-ae77-b60b36190af6-kube-api-access-lc85z\") pod \"placement-db-sync-s5w8c\" (UID: \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\") " pod="openstack/placement-db-sync-s5w8c"
Mar 12 18:47:11.069064 master-0 kubenswrapper[29097]: I0312 18:47:11.069010 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae6f8a81-a597-4c6d-ae77-b60b36190af6-scripts\") pod \"placement-db-sync-s5w8c\" (UID: \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\") " pod="openstack/placement-db-sync-s5w8c"
Mar 12 18:47:11.069064 master-0 kubenswrapper[29097]: I0312 18:47:11.069050 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-ovsdbserver-nb\") pod \"dnsmasq-dns-56c5578c7c-zjbch\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " pod="openstack/dnsmasq-dns-56c5578c7c-zjbch"
Mar 12 18:47:11.070505 master-0 kubenswrapper[29097]: I0312 18:47:11.069908 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-ovsdbserver-nb\") pod \"dnsmasq-dns-56c5578c7c-zjbch\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " pod="openstack/dnsmasq-dns-56c5578c7c-zjbch"
Mar 12 18:47:11.070505 master-0 kubenswrapper[29097]: I0312 18:47:11.070489 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-dns-swift-storage-0\") pod \"dnsmasq-dns-56c5578c7c-zjbch\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " pod="openstack/dnsmasq-dns-56c5578c7c-zjbch"
Mar 12 18:47:11.071740 master-0 kubenswrapper[29097]: I0312 18:47:11.071691 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae6f8a81-a597-4c6d-ae77-b60b36190af6-logs\") pod \"placement-db-sync-s5w8c\" (UID: \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\") " pod="openstack/placement-db-sync-s5w8c"
Mar 12 18:47:11.072579 master-0 kubenswrapper[29097]: I0312 18:47:11.072545 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-dns-svc\") pod \"dnsmasq-dns-56c5578c7c-zjbch\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " pod="openstack/dnsmasq-dns-56c5578c7c-zjbch"
Mar 12 18:47:11.077468 master-0 kubenswrapper[29097]: I0312 18:47:11.076738 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-config\") pod \"dnsmasq-dns-56c5578c7c-zjbch\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " pod="openstack/dnsmasq-dns-56c5578c7c-zjbch"
Mar 12 18:47:11.078253 master-0 kubenswrapper[29097]: I0312 18:47:11.078169 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-ovsdbserver-sb\") pod \"dnsmasq-dns-56c5578c7c-zjbch\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " pod="openstack/dnsmasq-dns-56c5578c7c-zjbch"
Mar 12 18:47:11.085950 master-0 kubenswrapper[29097]: I0312 18:47:11.085706 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae6f8a81-a597-4c6d-ae77-b60b36190af6-scripts\") pod \"placement-db-sync-s5w8c\" (UID: \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\") " pod="openstack/placement-db-sync-s5w8c"
Mar 12 18:47:11.087749 master-0 kubenswrapper[29097]: I0312 18:47:11.086476 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6f8a81-a597-4c6d-ae77-b60b36190af6-combined-ca-bundle\") pod \"placement-db-sync-s5w8c\" (UID: \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\") " pod="openstack/placement-db-sync-s5w8c"
Mar 12 18:47:11.087749 master-0 kubenswrapper[29097]: I0312 18:47:11.087719 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fa62f-db-sync-xn4dx"
Mar 12 18:47:11.107976 master-0 kubenswrapper[29097]: I0312 18:47:11.107924 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw4gh\" (UniqueName: \"kubernetes.io/projected/73e3cd3a-c873-4b0f-870d-26ba00b0a910-kube-api-access-dw4gh\") pod \"dnsmasq-dns-56c5578c7c-zjbch\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " pod="openstack/dnsmasq-dns-56c5578c7c-zjbch"
Mar 12 18:47:11.111375 master-0 kubenswrapper[29097]: I0312 18:47:11.111089 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-0e3d-account-create-update-pb9cg"
Mar 12 18:47:11.111535 master-0 kubenswrapper[29097]: I0312 18:47:11.111423 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae6f8a81-a597-4c6d-ae77-b60b36190af6-config-data\") pod \"placement-db-sync-s5w8c\" (UID: \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\") " pod="openstack/placement-db-sync-s5w8c"
Mar 12 18:47:11.113780 master-0 kubenswrapper[29097]: I0312 18:47:11.113747 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc85z\" (UniqueName: \"kubernetes.io/projected/ae6f8a81-a597-4c6d-ae77-b60b36190af6-kube-api-access-lc85z\") pod \"placement-db-sync-s5w8c\" (UID: \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\") " pod="openstack/placement-db-sync-s5w8c"
Mar 12 18:47:11.143837 master-0 kubenswrapper[29097]: I0312 18:47:11.143727 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ptnfn"
Mar 12 18:47:11.170278 master-0 kubenswrapper[29097]: I0312 18:47:11.170228 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-s5w8c"
Mar 12 18:47:11.221028 master-0 kubenswrapper[29097]: I0312 18:47:11.220798 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c5578c7c-zjbch"
Mar 12 18:47:11.364886 master-0 kubenswrapper[29097]: I0312 18:47:11.364361 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pxk9z"]
Mar 12 18:47:11.666864 master-0 kubenswrapper[29097]: I0312 18:47:11.666627 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8695667fff-k8fx9"]
Mar 12 18:47:11.861694 master-0 kubenswrapper[29097]: I0312 18:47:11.860361 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5995bddff5-9l72c"
Mar 12 18:47:11.945270 master-0 kubenswrapper[29097]: I0312 18:47:11.945188 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5995bddff5-9l72c"
Mar 12 18:47:11.945270 master-0 kubenswrapper[29097]: I0312 18:47:11.945202 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5995bddff5-9l72c" event={"ID":"7f5d0599-2540-4927-aceb-7cac7e4fb7c3","Type":"ContainerDied","Data":"d0dc8cf957189be028f21d6b3e9c20c395ff389e6ae3b4da4ab08e62cae7ef95"}
Mar 12 18:47:11.945270 master-0 kubenswrapper[29097]: I0312 18:47:11.945258 29097 scope.go:117] "RemoveContainer" containerID="555150304c91c37d70a3b1552184b8e3f15ffc2bb2eff22068a4bbe623834939"
Mar 12 18:47:11.947598 master-0 kubenswrapper[29097]: I0312 18:47:11.947504 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8695667fff-k8fx9" event={"ID":"58ba054c-0e16-46e8-b7e5-861db4f81fa3","Type":"ContainerStarted","Data":"1c749e67601f0b1079d48af77ba06c1f53a42ab3845c87324c5ca3bdcc08cffa"}
Mar 12 18:47:11.950067 master-0 kubenswrapper[29097]: I0312 18:47:11.949553 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pxk9z" event={"ID":"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0","Type":"ContainerStarted","Data":"a33e03551302f73072c8ce86712dda4172045d05cc6f3eb0a861ecda2da6243f"}
Mar 12 18:47:11.979925 master-0 kubenswrapper[29097]: I0312 18:47:11.979865 29097 scope.go:117] "RemoveContainer" containerID="055f4329332c182e0ec61e3825ba4545f5ec87a983813619f833ac4f462a188e"
Mar 12 18:47:12.012414 master-0 kubenswrapper[29097]: I0312 18:47:12.012356 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-config\") pod \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") "
Mar 12 18:47:12.012521 master-0 kubenswrapper[29097]: I0312 18:47:12.012460 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gthf\" (UniqueName: \"kubernetes.io/projected/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-kube-api-access-4gthf\") pod \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") "
Mar 12 18:47:12.012521 master-0 kubenswrapper[29097]: I0312 18:47:12.012483 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-ovsdbserver-nb\") pod \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") "
Mar 12 18:47:12.012521 master-0 kubenswrapper[29097]: I0312 18:47:12.012501 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-dns-svc\") pod \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") "
Mar 12 18:47:12.012625 master-0 kubenswrapper[29097]: I0312 18:47:12.012535 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-dns-swift-storage-0\") pod \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") "
Mar 12 18:47:12.012760 master-0 kubenswrapper[29097]: I0312 18:47:12.012740 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-ovsdbserver-sb\") pod \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\" (UID: \"7f5d0599-2540-4927-aceb-7cac7e4fb7c3\") "
Mar 12 18:47:12.038124 master-0 kubenswrapper[29097]: I0312 18:47:12.037329 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-kube-api-access-4gthf" (OuterVolumeSpecName: "kube-api-access-4gthf") pod "7f5d0599-2540-4927-aceb-7cac7e4fb7c3" (UID: "7f5d0599-2540-4927-aceb-7cac7e4fb7c3"). InnerVolumeSpecName "kube-api-access-4gthf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:47:12.074699 master-0 kubenswrapper[29097]: I0312 18:47:12.073932 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7f5d0599-2540-4927-aceb-7cac7e4fb7c3" (UID: "7f5d0599-2540-4927-aceb-7cac7e4fb7c3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:47:12.086825 master-0 kubenswrapper[29097]: I0312 18:47:12.086773 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7f5d0599-2540-4927-aceb-7cac7e4fb7c3" (UID: "7f5d0599-2540-4927-aceb-7cac7e4fb7c3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:47:12.088308 master-0 kubenswrapper[29097]: I0312 18:47:12.088268 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-config" (OuterVolumeSpecName: "config") pod "7f5d0599-2540-4927-aceb-7cac7e4fb7c3" (UID: "7f5d0599-2540-4927-aceb-7cac7e4fb7c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:47:12.090604 master-0 kubenswrapper[29097]: I0312 18:47:12.090543 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7f5d0599-2540-4927-aceb-7cac7e4fb7c3" (UID: "7f5d0599-2540-4927-aceb-7cac7e4fb7c3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:47:12.091306 master-0 kubenswrapper[29097]: I0312 18:47:12.091259 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f5d0599-2540-4927-aceb-7cac7e4fb7c3" (UID: "7f5d0599-2540-4927-aceb-7cac7e4fb7c3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:47:12.115103 master-0 kubenswrapper[29097]: I0312 18:47:12.115053 29097 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 12 18:47:12.115103 master-0 kubenswrapper[29097]: I0312 18:47:12.115091 29097 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-config\") on node \"master-0\" DevicePath \"\""
Mar 12 18:47:12.115103 master-0 kubenswrapper[29097]: I0312 18:47:12.115105 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gthf\" (UniqueName: \"kubernetes.io/projected/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-kube-api-access-4gthf\") on node \"master-0\" DevicePath \"\""
Mar 12 18:47:12.115103 master-0 kubenswrapper[29097]: I0312 18:47:12.115117 29097 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 12 18:47:12.115103 master-0 kubenswrapper[29097]: I0312 18:47:12.115130 29097 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 12 18:47:12.115483 master-0 kubenswrapper[29097]: I0312 18:47:12.115138 29097 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f5d0599-2540-4927-aceb-7cac7e4fb7c3-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Mar 12 18:47:12.396759 master-0 kubenswrapper[29097]: I0312 18:47:12.396707 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-4v6wn"]
Mar 12 18:47:12.406175 master-0 kubenswrapper[29097]: I0312 18:47:12.406121 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-16afb-default-external-api-0"]
Mar 12 18:47:12.406600 master-0 kubenswrapper[29097]: E0312 18:47:12.406566 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5d0599-2540-4927-aceb-7cac7e4fb7c3" containerName="init"
Mar 12 18:47:12.406600 master-0 kubenswrapper[29097]: I0312 18:47:12.406584 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5d0599-2540-4927-aceb-7cac7e4fb7c3" containerName="init"
Mar 12 18:47:12.406689 master-0 kubenswrapper[29097]: E0312 18:47:12.406637 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5d0599-2540-4927-aceb-7cac7e4fb7c3" containerName="dnsmasq-dns"
Mar 12 18:47:12.406689 master-0 kubenswrapper[29097]: I0312 18:47:12.406645 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5d0599-2540-4927-aceb-7cac7e4fb7c3" containerName="dnsmasq-dns"
Mar 12 18:47:12.406871 master-0 kubenswrapper[29097]: I0312 18:47:12.406852 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f5d0599-2540-4927-aceb-7cac7e4fb7c3" containerName="dnsmasq-dns"
Mar 12 18:47:12.407910 master-0 kubenswrapper[29097]: I0312 18:47:12.407880 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:12.409950 master-0 kubenswrapper[29097]: I0312 18:47:12.409909 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Mar 12 18:47:12.410114 master-0 kubenswrapper[29097]: I0312 18:47:12.410096 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-16afb-default-external-config-data"
Mar 12 18:47:12.410162 master-0 kubenswrapper[29097]: I0312 18:47:12.410096 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Mar 12 18:47:12.798388 master-0 kubenswrapper[29097]: W0312 18:47:12.798297 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae6f8a81_a597_4c6d_ae77_b60b36190af6.slice/crio-ce3cefb9cc08074338d1fd90e809169d3c44bc562d18b6a72f93f94b78a04843 WatchSource:0}: Error finding container ce3cefb9cc08074338d1fd90e809169d3c44bc562d18b6a72f93f94b78a04843: Status 404 returned error can't find the container with id ce3cefb9cc08074338d1fd90e809169d3c44bc562d18b6a72f93f94b78a04843
Mar 12 18:47:12.805096 master-0 kubenswrapper[29097]: I0312 18:47:12.805036 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-0e3d-account-create-update-pb9cg"]
Mar 12 18:47:12.833031 master-0 kubenswrapper[29097]: I0312 18:47:12.832606 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-16afb-default-external-api-0"]
Mar 12 18:47:12.868786 master-0 kubenswrapper[29097]: I0312 18:47:12.868736 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ptnfn"]
Mar 12 18:47:12.881757 master-0 kubenswrapper[29097]: I0312 18:47:12.881363 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-s5w8c"]
Mar 12 18:47:12.892419 master-0 kubenswrapper[29097]: I0312 18:47:12.892346 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fa62f-db-sync-xn4dx"]
Mar 12 18:47:12.914301 master-0 kubenswrapper[29097]: I0312 18:47:12.914208 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56c5578c7c-zjbch"]
Mar 12 18:47:12.962731 master-0 kubenswrapper[29097]: I0312 18:47:12.962655 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-db-sync-xn4dx" event={"ID":"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36","Type":"ContainerStarted","Data":"111769e3cb10fdb21c0d1ed8552e1c8c26831b7e7d381203afd7c6d2bc4321c1"}
Mar 12 18:47:12.964328 master-0 kubenswrapper[29097]: I0312 18:47:12.964252 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pxk9z" event={"ID":"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0","Type":"ContainerStarted","Data":"6408ee724587b198615016f317bd50a06e48a50da1f318412c9cff6ffcf4f26b"}
Mar 12 18:47:12.965871 master-0 kubenswrapper[29097]: I0312 18:47:12.965820 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ptnfn" event={"ID":"1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1","Type":"ContainerStarted","Data":"f548747709560e7576bb3c4efba89c954af8d216e65043165428fef1ee6cd2e6"}
Mar 12 18:47:12.967356 master-0 kubenswrapper[29097]: I0312 18:47:12.967309 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c5578c7c-zjbch" event={"ID":"73e3cd3a-c873-4b0f-870d-26ba00b0a910","Type":"ContainerStarted","Data":"25a2631cc76c36bd5968deb2f8daac913bc3fe80841d42f526f5b96f4948900d"}
Mar 12 18:47:12.969584 master-0 kubenswrapper[29097]: I0312 18:47:12.969489 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-0e3d-account-create-update-pb9cg" event={"ID":"22939625-8570-4e99-9070-5031a539e183","Type":"ContainerStarted","Data":"fb8a425557e39c52dcde215fc0b56507d2c5d84ad564ea54d0f672a4887e5660"}
Mar 12 18:47:12.970823 master-0 kubenswrapper[29097]: I0312 18:47:12.970752 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-4v6wn" event={"ID":"784b2b8e-d340-4f65-8abb-ad196b08ed6f","Type":"ContainerStarted","Data":"eba4b16243809e5c889945613f52735c30dd95bfd7a575ecbc9ea6433c5b8602"}
Mar 12 18:47:12.970823 master-0 kubenswrapper[29097]: I0312 18:47:12.970782 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-4v6wn" event={"ID":"784b2b8e-d340-4f65-8abb-ad196b08ed6f","Type":"ContainerStarted","Data":"457eeeb734f40057dc9177f9d65b919e70754c92cad5bd1c34caeb21a26f64b4"}
Mar 12 18:47:12.975747 master-0 kubenswrapper[29097]: I0312 18:47:12.975422 29097 generic.go:334] "Generic (PLEG): container finished" podID="58ba054c-0e16-46e8-b7e5-861db4f81fa3" containerID="e14904e6c03c33d9fb86e635d3cd05240cf57e5acda4f719d8603dc73e8ef35f" exitCode=0
Mar 12 18:47:12.975747 master-0 kubenswrapper[29097]: I0312 18:47:12.975474 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8695667fff-k8fx9" event={"ID":"58ba054c-0e16-46e8-b7e5-861db4f81fa3","Type":"ContainerDied","Data":"e14904e6c03c33d9fb86e635d3cd05240cf57e5acda4f719d8603dc73e8ef35f"}
Mar 12 18:47:12.977824 master-0 kubenswrapper[29097]: I0312 18:47:12.977795 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s5w8c" event={"ID":"ae6f8a81-a597-4c6d-ae77-b60b36190af6","Type":"ContainerStarted","Data":"ce3cefb9cc08074338d1fd90e809169d3c44bc562d18b6a72f93f94b78a04843"}
Mar 12 18:47:13.134163 master-0 kubenswrapper[29097]: I0312 18:47:13.134100 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr5hq\" (UniqueName: \"kubernetes.io/projected/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-kube-api-access-jr5hq\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:13.134163 master-0 kubenswrapper[29097]: I0312 18:47:13.134164 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-logs\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:13.134451 master-0 kubenswrapper[29097]: I0312 18:47:13.134192 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-combined-ca-bundle\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:13.134451 master-0 kubenswrapper[29097]: I0312 18:47:13.134267 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-httpd-run\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:13.134451 master-0 kubenswrapper[29097]: I0312 18:47:13.134303 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-scripts\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:13.134451 master-0 kubenswrapper[29097]: I0312 18:47:13.134339 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ec8aab56-f74f-48cf-9ada-d06acc37cf58\" (UniqueName: \"kubernetes.io/csi/topolvm.io^310fb28b-3797-432d-a3e8-4c4039b4350a\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:13.134660 master-0 kubenswrapper[29097]: I0312 18:47:13.134453 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-config-data\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:13.134660 master-0 kubenswrapper[29097]: I0312 18:47:13.134533 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-public-tls-certs\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:13.237380 master-0 kubenswrapper[29097]: I0312 18:47:13.236705 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-config-data\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:13.237380 master-0 kubenswrapper[29097]: I0312 18:47:13.236852 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-public-tls-certs\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:13.237380 master-0 kubenswrapper[29097]: I0312 18:47:13.237070 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr5hq\" (UniqueName: \"kubernetes.io/projected/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-kube-api-access-jr5hq\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:13.237380 master-0 kubenswrapper[29097]: I0312 18:47:13.237147 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-logs\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:13.237380 master-0 kubenswrapper[29097]: I0312 18:47:13.237170 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-combined-ca-bundle\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:13.237380 master-0 kubenswrapper[29097]: I0312 18:47:13.237316 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-httpd-run\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:13.238592 master-0 kubenswrapper[29097]: I0312 18:47:13.237802 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-scripts\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:13.238592 master-0 kubenswrapper[29097]: I0312 18:47:13.237818 29097 operation_generator.go:637] "MountVolume.SetUp succeeded
for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-httpd-run\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:47:13.238592 master-0 kubenswrapper[29097]: I0312 18:47:13.237892 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-logs\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:47:13.240392 master-0 kubenswrapper[29097]: I0312 18:47:13.240328 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-public-tls-certs\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:47:13.242411 master-0 kubenswrapper[29097]: I0312 18:47:13.242368 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-config-data\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:47:13.242826 master-0 kubenswrapper[29097]: I0312 18:47:13.242788 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-scripts\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:47:13.249833 master-0 kubenswrapper[29097]: I0312 18:47:13.249781 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-combined-ca-bundle\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:47:13.441259 master-0 kubenswrapper[29097]: I0312 18:47:13.441193 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ec8aab56-f74f-48cf-9ada-d06acc37cf58\" (UniqueName: \"kubernetes.io/csi/topolvm.io^310fb28b-3797-432d-a3e8-4c4039b4350a\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:47:13.446530 master-0 kubenswrapper[29097]: I0312 18:47:13.443366 29097 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 18:47:13.446530 master-0 kubenswrapper[29097]: I0312 18:47:13.443399 29097 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ec8aab56-f74f-48cf-9ada-d06acc37cf58\" (UniqueName: \"kubernetes.io/csi/topolvm.io^310fb28b-3797-432d-a3e8-4c4039b4350a\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/a675104d86a4ce743943f7962ef4d34dd002b87ad3cb26bbb0067dde16060ad0/globalmount\"" pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:47:13.473172 master-0 kubenswrapper[29097]: I0312 18:47:13.472726 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8695667fff-k8fx9" Mar 12 18:47:13.657657 master-0 kubenswrapper[29097]: I0312 18:47:13.653411 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-ovsdbserver-nb\") pod \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " Mar 12 18:47:13.657657 master-0 kubenswrapper[29097]: I0312 18:47:13.653586 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-dns-svc\") pod \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " Mar 12 18:47:13.657657 master-0 kubenswrapper[29097]: I0312 18:47:13.653620 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-dns-swift-storage-0\") pod \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " Mar 12 18:47:13.657657 master-0 kubenswrapper[29097]: I0312 18:47:13.653691 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-config\") pod \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " Mar 12 18:47:13.657657 master-0 kubenswrapper[29097]: I0312 18:47:13.653892 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkgph\" (UniqueName: \"kubernetes.io/projected/58ba054c-0e16-46e8-b7e5-861db4f81fa3-kube-api-access-hkgph\") pod \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " Mar 12 18:47:13.657657 master-0 kubenswrapper[29097]: I0312 18:47:13.653938 29097 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-ovsdbserver-sb\") pod \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\" (UID: \"58ba054c-0e16-46e8-b7e5-861db4f81fa3\") " Mar 12 18:47:13.657657 master-0 kubenswrapper[29097]: I0312 18:47:13.657168 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5995bddff5-9l72c"] Mar 12 18:47:13.684292 master-0 kubenswrapper[29097]: I0312 18:47:13.684217 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58ba054c-0e16-46e8-b7e5-861db4f81fa3-kube-api-access-hkgph" (OuterVolumeSpecName: "kube-api-access-hkgph") pod "58ba054c-0e16-46e8-b7e5-861db4f81fa3" (UID: "58ba054c-0e16-46e8-b7e5-861db4f81fa3"). InnerVolumeSpecName "kube-api-access-hkgph". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:47:13.698913 master-0 kubenswrapper[29097]: I0312 18:47:13.691637 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "58ba054c-0e16-46e8-b7e5-861db4f81fa3" (UID: "58ba054c-0e16-46e8-b7e5-861db4f81fa3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:47:13.698913 master-0 kubenswrapper[29097]: I0312 18:47:13.697297 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-config" (OuterVolumeSpecName: "config") pod "58ba054c-0e16-46e8-b7e5-861db4f81fa3" (UID: "58ba054c-0e16-46e8-b7e5-861db4f81fa3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:47:13.703836 master-0 kubenswrapper[29097]: I0312 18:47:13.703778 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "58ba054c-0e16-46e8-b7e5-861db4f81fa3" (UID: "58ba054c-0e16-46e8-b7e5-861db4f81fa3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:47:13.721654 master-0 kubenswrapper[29097]: I0312 18:47:13.720828 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "58ba054c-0e16-46e8-b7e5-861db4f81fa3" (UID: "58ba054c-0e16-46e8-b7e5-861db4f81fa3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:47:13.740632 master-0 kubenswrapper[29097]: I0312 18:47:13.740447 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "58ba054c-0e16-46e8-b7e5-861db4f81fa3" (UID: "58ba054c-0e16-46e8-b7e5-861db4f81fa3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:47:13.745498 master-0 kubenswrapper[29097]: I0312 18:47:13.745458 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr5hq\" (UniqueName: \"kubernetes.io/projected/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-kube-api-access-jr5hq\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:47:13.756950 master-0 kubenswrapper[29097]: I0312 18:47:13.756890 29097 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:13.756950 master-0 kubenswrapper[29097]: I0312 18:47:13.756932 29097 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:13.756950 master-0 kubenswrapper[29097]: I0312 18:47:13.756944 29097 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:13.756950 master-0 kubenswrapper[29097]: I0312 18:47:13.756956 29097 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:13.757217 master-0 kubenswrapper[29097]: I0312 18:47:13.756965 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkgph\" (UniqueName: \"kubernetes.io/projected/58ba054c-0e16-46e8-b7e5-861db4f81fa3-kube-api-access-hkgph\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:13.757217 master-0 kubenswrapper[29097]: I0312 18:47:13.756974 29097 
reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58ba054c-0e16-46e8-b7e5-861db4f81fa3-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:13.794536 master-0 kubenswrapper[29097]: I0312 18:47:13.793858 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5995bddff5-9l72c"] Mar 12 18:47:13.991265 master-0 kubenswrapper[29097]: I0312 18:47:13.991165 29097 generic.go:334] "Generic (PLEG): container finished" podID="73e3cd3a-c873-4b0f-870d-26ba00b0a910" containerID="ef933ccbf34bd1b64a9e361ac9b684403ebc011734f57febfef1626ad1d8ff42" exitCode=0 Mar 12 18:47:13.991265 master-0 kubenswrapper[29097]: I0312 18:47:13.991233 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c5578c7c-zjbch" event={"ID":"73e3cd3a-c873-4b0f-870d-26ba00b0a910","Type":"ContainerDied","Data":"ef933ccbf34bd1b64a9e361ac9b684403ebc011734f57febfef1626ad1d8ff42"} Mar 12 18:47:13.993583 master-0 kubenswrapper[29097]: I0312 18:47:13.993538 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ptnfn" event={"ID":"1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1","Type":"ContainerStarted","Data":"209247b4520b8861a60d0e6248bbe2bda42ad520bd33156189442acf885b604e"} Mar 12 18:47:13.997638 master-0 kubenswrapper[29097]: I0312 18:47:13.996491 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8695667fff-k8fx9" event={"ID":"58ba054c-0e16-46e8-b7e5-861db4f81fa3","Type":"ContainerDied","Data":"1c749e67601f0b1079d48af77ba06c1f53a42ab3845c87324c5ca3bdcc08cffa"} Mar 12 18:47:13.997638 master-0 kubenswrapper[29097]: I0312 18:47:13.996584 29097 scope.go:117] "RemoveContainer" containerID="e14904e6c03c33d9fb86e635d3cd05240cf57e5acda4f719d8603dc73e8ef35f" Mar 12 18:47:13.997638 master-0 kubenswrapper[29097]: I0312 18:47:13.996716 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8695667fff-k8fx9" Mar 12 18:47:14.024447 master-0 kubenswrapper[29097]: I0312 18:47:14.024327 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-0e3d-account-create-update-pb9cg" event={"ID":"22939625-8570-4e99-9070-5031a539e183","Type":"ContainerStarted","Data":"36b861d2ed23ff840df979731158210569fa20b6d3f0b09c9edfeed043adf337"} Mar 12 18:47:14.037459 master-0 kubenswrapper[29097]: I0312 18:47:14.037341 29097 generic.go:334] "Generic (PLEG): container finished" podID="784b2b8e-d340-4f65-8abb-ad196b08ed6f" containerID="eba4b16243809e5c889945613f52735c30dd95bfd7a575ecbc9ea6433c5b8602" exitCode=0 Mar 12 18:47:14.038437 master-0 kubenswrapper[29097]: I0312 18:47:14.038358 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-4v6wn" event={"ID":"784b2b8e-d340-4f65-8abb-ad196b08ed6f","Type":"ContainerDied","Data":"eba4b16243809e5c889945613f52735c30dd95bfd7a575ecbc9ea6433c5b8602"} Mar 12 18:47:14.312579 master-0 kubenswrapper[29097]: I0312 18:47:14.310554 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-0e3d-account-create-update-pb9cg" podStartSLOduration=4.31050626 podStartE2EDuration="4.31050626s" podCreationTimestamp="2026-03-12 18:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:47:14.300022748 +0000 UTC m=+1073.854002845" watchObservedRunningTime="2026-03-12 18:47:14.31050626 +0000 UTC m=+1073.864486367" Mar 12 18:47:14.682373 master-0 kubenswrapper[29097]: I0312 18:47:14.682316 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8695667fff-k8fx9"] Mar 12 18:47:14.733619 master-0 kubenswrapper[29097]: I0312 18:47:14.733568 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f5d0599-2540-4927-aceb-7cac7e4fb7c3" 
path="/var/lib/kubelet/pods/7f5d0599-2540-4927-aceb-7cac7e4fb7c3/volumes" Mar 12 18:47:14.886122 master-0 kubenswrapper[29097]: I0312 18:47:14.883023 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8695667fff-k8fx9"] Mar 12 18:47:15.054282 master-0 kubenswrapper[29097]: I0312 18:47:15.054220 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c5578c7c-zjbch" event={"ID":"73e3cd3a-c873-4b0f-870d-26ba00b0a910","Type":"ContainerStarted","Data":"3a9f18cb4fb84893628b85c0e0c2f3e983b38debc61be907f1207c28a89da2c3"} Mar 12 18:47:15.055218 master-0 kubenswrapper[29097]: I0312 18:47:15.055172 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56c5578c7c-zjbch" Mar 12 18:47:15.060893 master-0 kubenswrapper[29097]: I0312 18:47:15.060846 29097 generic.go:334] "Generic (PLEG): container finished" podID="22939625-8570-4e99-9070-5031a539e183" containerID="36b861d2ed23ff840df979731158210569fa20b6d3f0b09c9edfeed043adf337" exitCode=0 Mar 12 18:47:15.061114 master-0 kubenswrapper[29097]: I0312 18:47:15.060987 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-0e3d-account-create-update-pb9cg" event={"ID":"22939625-8570-4e99-9070-5031a539e183","Type":"ContainerDied","Data":"36b861d2ed23ff840df979731158210569fa20b6d3f0b09c9edfeed043adf337"} Mar 12 18:47:15.115215 master-0 kubenswrapper[29097]: I0312 18:47:15.115167 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ec8aab56-f74f-48cf-9ada-d06acc37cf58\" (UniqueName: \"kubernetes.io/csi/topolvm.io^310fb28b-3797-432d-a3e8-4c4039b4350a\") pod \"glance-16afb-default-external-api-0\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:47:15.753003 master-0 kubenswrapper[29097]: I0312 18:47:15.227673 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:47:15.889697 master-0 kubenswrapper[29097]: I0312 18:47:15.874868 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-pxk9z" podStartSLOduration=5.874830909 podStartE2EDuration="5.874830909s" podCreationTimestamp="2026-03-12 18:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:47:15.857005274 +0000 UTC m=+1075.410985371" watchObservedRunningTime="2026-03-12 18:47:15.874830909 +0000 UTC m=+1075.428811006" Mar 12 18:47:16.685985 master-0 kubenswrapper[29097]: I0312 18:47:16.685664 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-ptnfn" podStartSLOduration=6.685646828 podStartE2EDuration="6.685646828s" podCreationTimestamp="2026-03-12 18:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:47:16.647107987 +0000 UTC m=+1076.201088094" watchObservedRunningTime="2026-03-12 18:47:16.685646828 +0000 UTC m=+1076.239626925" Mar 12 18:47:16.689094 master-0 kubenswrapper[29097]: I0312 18:47:16.689044 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56c5578c7c-zjbch" podStartSLOduration=6.689037323 podStartE2EDuration="6.689037323s" podCreationTimestamp="2026-03-12 18:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:47:16.684647564 +0000 UTC m=+1076.238627661" watchObservedRunningTime="2026-03-12 18:47:16.689037323 +0000 UTC m=+1076.243017420" Mar 12 18:47:16.744882 master-0 kubenswrapper[29097]: I0312 18:47:16.741408 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="58ba054c-0e16-46e8-b7e5-861db4f81fa3" path="/var/lib/kubelet/pods/58ba054c-0e16-46e8-b7e5-861db4f81fa3/volumes" Mar 12 18:47:17.649785 master-0 kubenswrapper[29097]: I0312 18:47:17.649709 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-16afb-default-internal-api-0"] Mar 12 18:47:17.650333 master-0 kubenswrapper[29097]: E0312 18:47:17.650309 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ba054c-0e16-46e8-b7e5-861db4f81fa3" containerName="init" Mar 12 18:47:17.650333 master-0 kubenswrapper[29097]: I0312 18:47:17.650333 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ba054c-0e16-46e8-b7e5-861db4f81fa3" containerName="init" Mar 12 18:47:17.650815 master-0 kubenswrapper[29097]: I0312 18:47:17.650792 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ba054c-0e16-46e8-b7e5-861db4f81fa3" containerName="init" Mar 12 18:47:17.652689 master-0 kubenswrapper[29097]: I0312 18:47:17.652600 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:17.657127 master-0 kubenswrapper[29097]: I0312 18:47:17.657077 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 12 18:47:17.657453 master-0 kubenswrapper[29097]: I0312 18:47:17.657419 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-16afb-default-internal-config-data" Mar 12 18:47:18.127561 master-0 kubenswrapper[29097]: I0312 18:47:18.123921 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-16afb-default-internal-api-0"] Mar 12 18:47:18.234212 master-0 kubenswrapper[29097]: I0312 18:47:18.231787 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-httpd-run\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.234212 master-0 kubenswrapper[29097]: I0312 18:47:18.231873 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-15bfb998-3b5a-4ce6-84e0-d75d1f841c87\" (UniqueName: \"kubernetes.io/csi/topolvm.io^50ae7dd3-ed24-4f03-8717-dcd38b66edcc\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.234212 master-0 kubenswrapper[29097]: I0312 18:47:18.231912 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-combined-ca-bundle\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.234212 
master-0 kubenswrapper[29097]: I0312 18:47:18.231929 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-internal-tls-certs\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.234212 master-0 kubenswrapper[29097]: I0312 18:47:18.231973 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-logs\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.234212 master-0 kubenswrapper[29097]: I0312 18:47:18.232038 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-config-data\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.234212 master-0 kubenswrapper[29097]: I0312 18:47:18.232055 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hkzf\" (UniqueName: \"kubernetes.io/projected/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-kube-api-access-8hkzf\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.234212 master-0 kubenswrapper[29097]: I0312 18:47:18.232080 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-scripts\") pod 
\"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.338637 master-0 kubenswrapper[29097]: I0312 18:47:18.334752 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-15bfb998-3b5a-4ce6-84e0-d75d1f841c87\" (UniqueName: \"kubernetes.io/csi/topolvm.io^50ae7dd3-ed24-4f03-8717-dcd38b66edcc\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.338637 master-0 kubenswrapper[29097]: I0312 18:47:18.334838 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-combined-ca-bundle\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.338637 master-0 kubenswrapper[29097]: I0312 18:47:18.334865 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-internal-tls-certs\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.338637 master-0 kubenswrapper[29097]: I0312 18:47:18.334916 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-logs\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.338637 master-0 kubenswrapper[29097]: I0312 18:47:18.334963 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-config-data\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.338637 master-0 kubenswrapper[29097]: I0312 18:47:18.334979 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hkzf\" (UniqueName: \"kubernetes.io/projected/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-kube-api-access-8hkzf\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.338637 master-0 kubenswrapper[29097]: I0312 18:47:18.334997 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-scripts\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.338637 master-0 kubenswrapper[29097]: I0312 18:47:18.335059 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-httpd-run\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.338637 master-0 kubenswrapper[29097]: I0312 18:47:18.335568 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-httpd-run\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.338637 master-0 kubenswrapper[29097]: I0312 18:47:18.336021 29097 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-logs\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.349563 master-0 kubenswrapper[29097]: I0312 18:47:18.347602 29097 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 18:47:18.349563 master-0 kubenswrapper[29097]: I0312 18:47:18.347656 29097 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-15bfb998-3b5a-4ce6-84e0-d75d1f841c87\" (UniqueName: \"kubernetes.io/csi/topolvm.io^50ae7dd3-ed24-4f03-8717-dcd38b66edcc\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/bdf06cc41b16558e5d4e2346226e79fd70cee97d9259625a849a3aa2d0277459/globalmount\"" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.349563 master-0 kubenswrapper[29097]: I0312 18:47:18.348130 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-config-data\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.350635 master-0 kubenswrapper[29097]: I0312 18:47:18.350489 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-combined-ca-bundle\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.350805 master-0 kubenswrapper[29097]: I0312 18:47:18.350762 29097 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-scripts\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.357962 master-0 kubenswrapper[29097]: I0312 18:47:18.357909 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-internal-tls-certs\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.430068 master-0 kubenswrapper[29097]: I0312 18:47:18.430020 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hkzf\" (UniqueName: \"kubernetes.io/projected/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-kube-api-access-8hkzf\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:18.435240 master-0 kubenswrapper[29097]: I0312 18:47:18.434801 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-4v6wn" Mar 12 18:47:18.438108 master-0 kubenswrapper[29097]: I0312 18:47:18.438074 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-0e3d-account-create-update-pb9cg" Mar 12 18:47:18.538340 master-0 kubenswrapper[29097]: I0312 18:47:18.538282 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/784b2b8e-d340-4f65-8abb-ad196b08ed6f-operator-scripts\") pod \"784b2b8e-d340-4f65-8abb-ad196b08ed6f\" (UID: \"784b2b8e-d340-4f65-8abb-ad196b08ed6f\") " Mar 12 18:47:18.538619 master-0 kubenswrapper[29097]: I0312 18:47:18.538381 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4v7d\" (UniqueName: \"kubernetes.io/projected/22939625-8570-4e99-9070-5031a539e183-kube-api-access-b4v7d\") pod \"22939625-8570-4e99-9070-5031a539e183\" (UID: \"22939625-8570-4e99-9070-5031a539e183\") " Mar 12 18:47:18.538619 master-0 kubenswrapper[29097]: I0312 18:47:18.538530 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22939625-8570-4e99-9070-5031a539e183-operator-scripts\") pod \"22939625-8570-4e99-9070-5031a539e183\" (UID: \"22939625-8570-4e99-9070-5031a539e183\") " Mar 12 18:47:18.538697 master-0 kubenswrapper[29097]: I0312 18:47:18.538624 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cscx\" (UniqueName: \"kubernetes.io/projected/784b2b8e-d340-4f65-8abb-ad196b08ed6f-kube-api-access-5cscx\") pod \"784b2b8e-d340-4f65-8abb-ad196b08ed6f\" (UID: \"784b2b8e-d340-4f65-8abb-ad196b08ed6f\") " Mar 12 18:47:18.538991 master-0 kubenswrapper[29097]: I0312 18:47:18.538926 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/784b2b8e-d340-4f65-8abb-ad196b08ed6f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "784b2b8e-d340-4f65-8abb-ad196b08ed6f" (UID: "784b2b8e-d340-4f65-8abb-ad196b08ed6f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:47:18.539937 master-0 kubenswrapper[29097]: I0312 18:47:18.539884 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22939625-8570-4e99-9070-5031a539e183-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22939625-8570-4e99-9070-5031a539e183" (UID: "22939625-8570-4e99-9070-5031a539e183"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:47:18.540055 master-0 kubenswrapper[29097]: I0312 18:47:18.540030 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/784b2b8e-d340-4f65-8abb-ad196b08ed6f-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:18.542019 master-0 kubenswrapper[29097]: I0312 18:47:18.541983 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/784b2b8e-d340-4f65-8abb-ad196b08ed6f-kube-api-access-5cscx" (OuterVolumeSpecName: "kube-api-access-5cscx") pod "784b2b8e-d340-4f65-8abb-ad196b08ed6f" (UID: "784b2b8e-d340-4f65-8abb-ad196b08ed6f"). InnerVolumeSpecName "kube-api-access-5cscx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:47:18.542977 master-0 kubenswrapper[29097]: I0312 18:47:18.542943 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22939625-8570-4e99-9070-5031a539e183-kube-api-access-b4v7d" (OuterVolumeSpecName: "kube-api-access-b4v7d") pod "22939625-8570-4e99-9070-5031a539e183" (UID: "22939625-8570-4e99-9070-5031a539e183"). InnerVolumeSpecName "kube-api-access-b4v7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:47:18.645759 master-0 kubenswrapper[29097]: I0312 18:47:18.645646 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4v7d\" (UniqueName: \"kubernetes.io/projected/22939625-8570-4e99-9070-5031a539e183-kube-api-access-b4v7d\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:18.646003 master-0 kubenswrapper[29097]: I0312 18:47:18.645990 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22939625-8570-4e99-9070-5031a539e183-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:18.646082 master-0 kubenswrapper[29097]: I0312 18:47:18.646070 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cscx\" (UniqueName: \"kubernetes.io/projected/784b2b8e-d340-4f65-8abb-ad196b08ed6f-kube-api-access-5cscx\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:19.120934 master-0 kubenswrapper[29097]: I0312 18:47:19.120584 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-0e3d-account-create-update-pb9cg" event={"ID":"22939625-8570-4e99-9070-5031a539e183","Type":"ContainerDied","Data":"fb8a425557e39c52dcde215fc0b56507d2c5d84ad564ea54d0f672a4887e5660"} Mar 12 18:47:19.120934 master-0 kubenswrapper[29097]: I0312 18:47:19.120638 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb8a425557e39c52dcde215fc0b56507d2c5d84ad564ea54d0f672a4887e5660" Mar 12 18:47:19.120934 master-0 kubenswrapper[29097]: I0312 18:47:19.120714 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-0e3d-account-create-update-pb9cg" Mar 12 18:47:19.123497 master-0 kubenswrapper[29097]: I0312 18:47:19.123461 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-4v6wn" event={"ID":"784b2b8e-d340-4f65-8abb-ad196b08ed6f","Type":"ContainerDied","Data":"457eeeb734f40057dc9177f9d65b919e70754c92cad5bd1c34caeb21a26f64b4"} Mar 12 18:47:19.123613 master-0 kubenswrapper[29097]: I0312 18:47:19.123558 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="457eeeb734f40057dc9177f9d65b919e70754c92cad5bd1c34caeb21a26f64b4" Mar 12 18:47:19.123613 master-0 kubenswrapper[29097]: I0312 18:47:19.123584 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-4v6wn" Mar 12 18:47:19.126615 master-0 kubenswrapper[29097]: I0312 18:47:19.126580 29097 generic.go:334] "Generic (PLEG): container finished" podID="a8cbcb3d-a29a-40f9-afe8-401b3db17fd0" containerID="6408ee724587b198615016f317bd50a06e48a50da1f318412c9cff6ffcf4f26b" exitCode=0 Mar 12 18:47:19.126743 master-0 kubenswrapper[29097]: I0312 18:47:19.126642 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pxk9z" event={"ID":"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0","Type":"ContainerDied","Data":"6408ee724587b198615016f317bd50a06e48a50da1f318412c9cff6ffcf4f26b"} Mar 12 18:47:19.366520 master-0 kubenswrapper[29097]: I0312 18:47:19.364908 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-16afb-default-external-api-0"] Mar 12 18:47:20.143360 master-0 kubenswrapper[29097]: I0312 18:47:20.143316 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s5w8c" event={"ID":"ae6f8a81-a597-4c6d-ae77-b60b36190af6","Type":"ContainerStarted","Data":"f4c20b4649b8d21c9d9008c6625538aa5acc7cdd6d0e0f75fe04f9905bd67c0c"} Mar 12 18:47:20.145175 master-0 kubenswrapper[29097]: I0312 
18:47:20.145126 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-external-api-0" event={"ID":"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500","Type":"ContainerStarted","Data":"2a280dbc3d340c8a9788c20830e57acb6a9eeaf873b9b86c6b944bfdb4eecdc1"} Mar 12 18:47:20.145175 master-0 kubenswrapper[29097]: I0312 18:47:20.145169 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-external-api-0" event={"ID":"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500","Type":"ContainerStarted","Data":"7780af6d44fca17cf49ea36ce8bfc9c47aaa89af448f35da29e1893330bfe604"} Mar 12 18:47:20.216705 master-0 kubenswrapper[29097]: I0312 18:47:20.216145 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-16afb-default-external-api-0"] Mar 12 18:47:20.291556 master-0 kubenswrapper[29097]: I0312 18:47:20.283646 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-s5w8c" podStartSLOduration=3.8139991220000002 podStartE2EDuration="10.283623178s" podCreationTimestamp="2026-03-12 18:47:10 +0000 UTC" firstStartedPulling="2026-03-12 18:47:12.823454457 +0000 UTC m=+1072.377434554" lastFinishedPulling="2026-03-12 18:47:19.293078513 +0000 UTC m=+1078.847058610" observedRunningTime="2026-03-12 18:47:20.278612093 +0000 UTC m=+1079.832592210" watchObservedRunningTime="2026-03-12 18:47:20.283623178 +0000 UTC m=+1079.837603285" Mar 12 18:47:20.491299 master-0 kubenswrapper[29097]: I0312 18:47:20.489354 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-16afb-default-internal-api-0"] Mar 12 18:47:20.492161 master-0 kubenswrapper[29097]: E0312 18:47:20.492120 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-16afb-default-internal-api-0" podUID="31ae5bb6-311e-4c4f-8dd1-3841f6f821ba" Mar 12 18:47:20.492161 master-0 
kubenswrapper[29097]: I0312 18:47:20.491193 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-15bfb998-3b5a-4ce6-84e0-d75d1f841c87\" (UniqueName: \"kubernetes.io/csi/topolvm.io^50ae7dd3-ed24-4f03-8717-dcd38b66edcc\") pod \"glance-16afb-default-internal-api-0\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:20.772527 master-0 kubenswrapper[29097]: I0312 18:47:20.772470 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pxk9z" Mar 12 18:47:20.855678 master-0 kubenswrapper[29097]: I0312 18:47:20.855421 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-fernet-keys\") pod \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " Mar 12 18:47:20.855678 master-0 kubenswrapper[29097]: I0312 18:47:20.855490 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-scripts\") pod \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " Mar 12 18:47:20.855678 master-0 kubenswrapper[29097]: I0312 18:47:20.855653 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n788\" (UniqueName: \"kubernetes.io/projected/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-kube-api-access-7n788\") pod \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " Mar 12 18:47:20.855873 master-0 kubenswrapper[29097]: I0312 18:47:20.855733 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-credential-keys\") pod 
\"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " Mar 12 18:47:20.855873 master-0 kubenswrapper[29097]: I0312 18:47:20.855776 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-config-data\") pod \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " Mar 12 18:47:20.873562 master-0 kubenswrapper[29097]: I0312 18:47:20.872294 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-scripts" (OuterVolumeSpecName: "scripts") pod "a8cbcb3d-a29a-40f9-afe8-401b3db17fd0" (UID: "a8cbcb3d-a29a-40f9-afe8-401b3db17fd0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:20.879545 master-0 kubenswrapper[29097]: I0312 18:47:20.876259 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-kube-api-access-7n788" (OuterVolumeSpecName: "kube-api-access-7n788") pod "a8cbcb3d-a29a-40f9-afe8-401b3db17fd0" (UID: "a8cbcb3d-a29a-40f9-afe8-401b3db17fd0"). InnerVolumeSpecName "kube-api-access-7n788". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:47:20.879545 master-0 kubenswrapper[29097]: I0312 18:47:20.878482 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a8cbcb3d-a29a-40f9-afe8-401b3db17fd0" (UID: "a8cbcb3d-a29a-40f9-afe8-401b3db17fd0"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:20.879545 master-0 kubenswrapper[29097]: I0312 18:47:20.878893 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-combined-ca-bundle\") pod \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\" (UID: \"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0\") " Mar 12 18:47:20.884045 master-0 kubenswrapper[29097]: I0312 18:47:20.880314 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:20.884045 master-0 kubenswrapper[29097]: I0312 18:47:20.880337 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n788\" (UniqueName: \"kubernetes.io/projected/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-kube-api-access-7n788\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:20.884045 master-0 kubenswrapper[29097]: I0312 18:47:20.880357 29097 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-credential-keys\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:20.884045 master-0 kubenswrapper[29097]: I0312 18:47:20.882872 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a8cbcb3d-a29a-40f9-afe8-401b3db17fd0" (UID: "a8cbcb3d-a29a-40f9-afe8-401b3db17fd0"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:20.919917 master-0 kubenswrapper[29097]: I0312 18:47:20.919863 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-config-data" (OuterVolumeSpecName: "config-data") pod "a8cbcb3d-a29a-40f9-afe8-401b3db17fd0" (UID: "a8cbcb3d-a29a-40f9-afe8-401b3db17fd0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:20.920255 master-0 kubenswrapper[29097]: I0312 18:47:20.920230 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8cbcb3d-a29a-40f9-afe8-401b3db17fd0" (UID: "a8cbcb3d-a29a-40f9-afe8-401b3db17fd0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:20.983979 master-0 kubenswrapper[29097]: I0312 18:47:20.983933 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:20.983979 master-0 kubenswrapper[29097]: I0312 18:47:20.983966 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:20.983979 master-0 kubenswrapper[29097]: I0312 18:47:20.983976 29097 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0-fernet-keys\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:21.155601 master-0 kubenswrapper[29097]: I0312 18:47:21.155552 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pxk9z" 
event={"ID":"a8cbcb3d-a29a-40f9-afe8-401b3db17fd0","Type":"ContainerDied","Data":"a33e03551302f73072c8ce86712dda4172045d05cc6f3eb0a861ecda2da6243f"} Mar 12 18:47:21.155601 master-0 kubenswrapper[29097]: I0312 18:47:21.155596 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a33e03551302f73072c8ce86712dda4172045d05cc6f3eb0a861ecda2da6243f" Mar 12 18:47:21.155841 master-0 kubenswrapper[29097]: I0312 18:47:21.155608 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pxk9z" Mar 12 18:47:21.158938 master-0 kubenswrapper[29097]: I0312 18:47:21.158898 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-external-api-0" event={"ID":"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500","Type":"ContainerStarted","Data":"2d405a5596272e729efd32e738956af18d9076e8841ff7afb10774b8a4545401"} Mar 12 18:47:21.159058 master-0 kubenswrapper[29097]: I0312 18:47:21.158992 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:21.159113 master-0 kubenswrapper[29097]: I0312 18:47:21.159055 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-16afb-default-external-api-0" podUID="fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500" containerName="glance-log" containerID="cri-o://2a280dbc3d340c8a9788c20830e57acb6a9eeaf873b9b86c6b944bfdb4eecdc1" gracePeriod=30 Mar 12 18:47:21.159210 master-0 kubenswrapper[29097]: I0312 18:47:21.159174 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-16afb-default-external-api-0" podUID="fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500" containerName="glance-httpd" containerID="cri-o://2d405a5596272e729efd32e738956af18d9076e8841ff7afb10774b8a4545401" gracePeriod=30 Mar 12 18:47:21.169488 master-0 kubenswrapper[29097]: I0312 18:47:21.169442 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:21.220807 master-0 kubenswrapper[29097]: E0312 18:47:21.220757 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547 Mar 12 18:47:21.221784 master-0 kubenswrapper[29097]: I0312 18:47:21.221750 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56c5578c7c-zjbch" Mar 12 18:47:21.289361 master-0 kubenswrapper[29097]: I0312 18:47:21.289304 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-logs\") pod \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " Mar 12 18:47:21.289633 master-0 kubenswrapper[29097]: I0312 18:47:21.289406 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-config-data\") pod \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " Mar 12 18:47:21.289633 master-0 kubenswrapper[29097]: I0312 18:47:21.289607 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^50ae7dd3-ed24-4f03-8717-dcd38b66edcc\") pod \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " Mar 12 18:47:21.290082 master-0 kubenswrapper[29097]: I0312 18:47:21.289632 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-scripts\") pod \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " Mar 12 18:47:21.290082 master-0 kubenswrapper[29097]: I0312 18:47:21.289704 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-httpd-run\") pod \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " Mar 12 18:47:21.290082 master-0 kubenswrapper[29097]: I0312 18:47:21.289752 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hkzf\" (UniqueName: \"kubernetes.io/projected/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-kube-api-access-8hkzf\") pod \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " Mar 12 18:47:21.290082 master-0 
kubenswrapper[29097]: I0312 18:47:21.289725 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-logs" (OuterVolumeSpecName: "logs") pod "31ae5bb6-311e-4c4f-8dd1-3841f6f821ba" (UID: "31ae5bb6-311e-4c4f-8dd1-3841f6f821ba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:47:21.290082 master-0 kubenswrapper[29097]: I0312 18:47:21.289891 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-combined-ca-bundle\") pod \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " Mar 12 18:47:21.290082 master-0 kubenswrapper[29097]: I0312 18:47:21.289938 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-internal-tls-certs\") pod \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\" (UID: \"31ae5bb6-311e-4c4f-8dd1-3841f6f821ba\") " Mar 12 18:47:21.291089 master-0 kubenswrapper[29097]: I0312 18:47:21.290548 29097 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:21.291972 master-0 kubenswrapper[29097]: I0312 18:47:21.291676 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "31ae5bb6-311e-4c4f-8dd1-3841f6f821ba" (UID: "31ae5bb6-311e-4c4f-8dd1-3841f6f821ba"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:47:21.294366 master-0 kubenswrapper[29097]: I0312 18:47:21.294315 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-scripts" (OuterVolumeSpecName: "scripts") pod "31ae5bb6-311e-4c4f-8dd1-3841f6f821ba" (UID: "31ae5bb6-311e-4c4f-8dd1-3841f6f821ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:21.296506 master-0 kubenswrapper[29097]: I0312 18:47:21.294408 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-config-data" (OuterVolumeSpecName: "config-data") pod "31ae5bb6-311e-4c4f-8dd1-3841f6f821ba" (UID: "31ae5bb6-311e-4c4f-8dd1-3841f6f821ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:21.296506 master-0 kubenswrapper[29097]: I0312 18:47:21.295732 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-kube-api-access-8hkzf" (OuterVolumeSpecName: "kube-api-access-8hkzf") pod "31ae5bb6-311e-4c4f-8dd1-3841f6f821ba" (UID: "31ae5bb6-311e-4c4f-8dd1-3841f6f821ba"). InnerVolumeSpecName "kube-api-access-8hkzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:47:21.296506 master-0 kubenswrapper[29097]: I0312 18:47:21.296051 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31ae5bb6-311e-4c4f-8dd1-3841f6f821ba" (UID: "31ae5bb6-311e-4c4f-8dd1-3841f6f821ba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:21.299383 master-0 kubenswrapper[29097]: I0312 18:47:21.298292 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "31ae5bb6-311e-4c4f-8dd1-3841f6f821ba" (UID: "31ae5bb6-311e-4c4f-8dd1-3841f6f821ba"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:21.315643 master-0 kubenswrapper[29097]: I0312 18:47:21.315602 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^50ae7dd3-ed24-4f03-8717-dcd38b66edcc" (OuterVolumeSpecName: "glance") pod "31ae5bb6-311e-4c4f-8dd1-3841f6f821ba" (UID: "31ae5bb6-311e-4c4f-8dd1-3841f6f821ba"). InnerVolumeSpecName "pvc-15bfb998-3b5a-4ce6-84e0-d75d1f841c87". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 18:47:21.392766 master-0 kubenswrapper[29097]: I0312 18:47:21.392328 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:21.392766 master-0 kubenswrapper[29097]: I0312 18:47:21.392378 29097 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:21.392766 master-0 kubenswrapper[29097]: I0312 18:47:21.392396 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:21.392766 master-0 kubenswrapper[29097]: I0312 18:47:21.392444 29097 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume 
\"pvc-15bfb998-3b5a-4ce6-84e0-d75d1f841c87\" (UniqueName: \"kubernetes.io/csi/topolvm.io^50ae7dd3-ed24-4f03-8717-dcd38b66edcc\") on node \"master-0\" " Mar 12 18:47:21.392766 master-0 kubenswrapper[29097]: I0312 18:47:21.392463 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:21.392766 master-0 kubenswrapper[29097]: I0312 18:47:21.392479 29097 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:21.392766 master-0 kubenswrapper[29097]: I0312 18:47:21.392495 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hkzf\" (UniqueName: \"kubernetes.io/projected/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba-kube-api-access-8hkzf\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:21.412677 master-0 kubenswrapper[29097]: I0312 18:47:21.412512 29097 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 12 18:47:21.412871 master-0 kubenswrapper[29097]: I0312 18:47:21.412722 29097 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-15bfb998-3b5a-4ce6-84e0-d75d1f841c87" (UniqueName: "kubernetes.io/csi/topolvm.io^50ae7dd3-ed24-4f03-8717-dcd38b66edcc") on node "master-0"
Mar 12 18:47:21.494417 master-0 kubenswrapper[29097]: I0312 18:47:21.494368 29097 reconciler_common.go:293] "Volume detached for volume \"pvc-15bfb998-3b5a-4ce6-84e0-d75d1f841c87\" (UniqueName: \"kubernetes.io/csi/topolvm.io^50ae7dd3-ed24-4f03-8717-dcd38b66edcc\") on node \"master-0\" DevicePath \"\""
Mar 12 18:47:21.535760 master-0 kubenswrapper[29097]: I0312 18:47:21.535589 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-16afb-default-external-api-0" podStartSLOduration=11.535549814 podStartE2EDuration="11.535549814s" podCreationTimestamp="2026-03-12 18:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:47:21.522664482 +0000 UTC m=+1081.076644619" watchObservedRunningTime="2026-03-12 18:47:21.535549814 +0000 UTC m=+1081.089529941"
Mar 12 18:47:22.177747 master-0 kubenswrapper[29097]: I0312 18:47:22.177577 29097 generic.go:334] "Generic (PLEG): container finished" podID="fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500" containerID="2d405a5596272e729efd32e738956af18d9076e8841ff7afb10774b8a4545401" exitCode=143
Mar 12 18:47:22.177747 master-0 kubenswrapper[29097]: I0312 18:47:22.177642 29097 generic.go:334] "Generic (PLEG): container finished" podID="fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500" containerID="2a280dbc3d340c8a9788c20830e57acb6a9eeaf873b9b86c6b944bfdb4eecdc1" exitCode=143
Mar 12 18:47:22.177747 master-0 kubenswrapper[29097]: I0312 18:47:22.177653 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-external-api-0" event={"ID":"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500","Type":"ContainerDied","Data":"2d405a5596272e729efd32e738956af18d9076e8841ff7afb10774b8a4545401"}
Mar 12 18:47:22.177747 master-0 kubenswrapper[29097]: I0312 18:47:22.177690 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-external-api-0" event={"ID":"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500","Type":"ContainerDied","Data":"2a280dbc3d340c8a9788c20830e57acb6a9eeaf873b9b86c6b944bfdb4eecdc1"}
Mar 12 18:47:22.177747 master-0 kubenswrapper[29097]: I0312 18:47:22.177712 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:28.328490 master-0 kubenswrapper[29097]: I0312 18:47:28.328424 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76986c7db5-59cjq"]
Mar 12 18:47:28.329142 master-0 kubenswrapper[29097]: I0312 18:47:28.328914 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76986c7db5-59cjq" podUID="aa2283ab-4112-4a09-83e9-0d40cf04e864" containerName="dnsmasq-dns" containerID="cri-o://3f24109038d9b5a61c5e5d31d04b3c8ecf1da95fc9aad9ab64af18a7e69364da" gracePeriod=10
Mar 12 18:47:32.000310 master-0 kubenswrapper[29097]: I0312 18:47:32.000239 29097 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76986c7db5-59cjq" podUID="aa2283ab-4112-4a09-83e9-0d40cf04e864" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.198:5353: connect: connection refused"
Mar 12 18:47:33.452661 master-0 kubenswrapper[29097]: I0312 18:47:33.451045 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-16afb-default-internal-api-0"]
Mar 12 18:47:33.558408 master-0 kubenswrapper[29097]: I0312 18:47:33.558331 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-16afb-default-internal-api-0"]
Mar 12 18:47:33.568903 master-0 kubenswrapper[29097]: I0312 18:47:33.568826 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-pxk9z"]
Mar 12 18:47:33.751204 master-0 kubenswrapper[29097]: I0312 18:47:33.749553 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-pxk9z"]
Mar 12 18:47:33.760965 master-0 kubenswrapper[29097]: I0312 18:47:33.758043 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-mzfh7"]
Mar 12 18:47:33.760965 master-0 kubenswrapper[29097]: E0312 18:47:33.758492 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8cbcb3d-a29a-40f9-afe8-401b3db17fd0" containerName="keystone-bootstrap"
Mar 12 18:47:33.760965 master-0 kubenswrapper[29097]: I0312 18:47:33.758505 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8cbcb3d-a29a-40f9-afe8-401b3db17fd0" containerName="keystone-bootstrap"
Mar 12 18:47:33.760965 master-0 kubenswrapper[29097]: E0312 18:47:33.758561 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="784b2b8e-d340-4f65-8abb-ad196b08ed6f" containerName="mariadb-database-create"
Mar 12 18:47:33.760965 master-0 kubenswrapper[29097]: I0312 18:47:33.758569 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="784b2b8e-d340-4f65-8abb-ad196b08ed6f" containerName="mariadb-database-create"
Mar 12 18:47:33.760965 master-0 kubenswrapper[29097]: E0312 18:47:33.758603 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22939625-8570-4e99-9070-5031a539e183" containerName="mariadb-account-create-update"
Mar 12 18:47:33.760965 master-0 kubenswrapper[29097]: I0312 18:47:33.758611 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="22939625-8570-4e99-9070-5031a539e183" containerName="mariadb-account-create-update"
Mar 12 18:47:33.760965 master-0 kubenswrapper[29097]: I0312 18:47:33.758833 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="784b2b8e-d340-4f65-8abb-ad196b08ed6f" containerName="mariadb-database-create"
Mar 12 18:47:33.760965 master-0 kubenswrapper[29097]: I0312 18:47:33.758868 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="22939625-8570-4e99-9070-5031a539e183" containerName="mariadb-account-create-update"
Mar 12 18:47:33.760965 master-0 kubenswrapper[29097]: I0312 18:47:33.758893 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8cbcb3d-a29a-40f9-afe8-401b3db17fd0" containerName="keystone-bootstrap"
Mar 12 18:47:33.760965 master-0 kubenswrapper[29097]: I0312 18:47:33.759860 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-mzfh7"
Mar 12 18:47:33.763195 master-0 kubenswrapper[29097]: I0312 18:47:33.763115 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Mar 12 18:47:33.763830 master-0 kubenswrapper[29097]: I0312 18:47:33.763369 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts"
Mar 12 18:47:33.763830 master-0 kubenswrapper[29097]: I0312 18:47:33.763374 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data"
Mar 12 18:47:33.831242 master-0 kubenswrapper[29097]: I0312 18:47:33.831172 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-16afb-default-internal-api-0"]
Mar 12 18:47:33.832983 master-0 kubenswrapper[29097]: I0312 18:47:33.832938 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:33.836358 master-0 kubenswrapper[29097]: I0312 18:47:33.836308 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Mar 12 18:47:33.836822 master-0 kubenswrapper[29097]: I0312 18:47:33.836778 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-16afb-default-internal-config-data"
Mar 12 18:47:33.909870 master-0 kubenswrapper[29097]: I0312 18:47:33.909789 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-mzfh7"]
Mar 12 18:47:33.927343 master-0 kubenswrapper[29097]: I0312 18:47:33.927264 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-16afb-default-internal-api-0"]
Mar 12 18:47:34.043619 master-0 kubenswrapper[29097]: I0312 18:47:34.041209 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-t9v2r"]
Mar 12 18:47:34.043619 master-0 kubenswrapper[29097]: I0312 18:47:34.043270 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t9v2r"
Mar 12 18:47:34.053708 master-0 kubenswrapper[29097]: I0312 18:47:34.051359 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 12 18:47:34.053708 master-0 kubenswrapper[29097]: I0312 18:47:34.052246 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 12 18:47:34.053708 master-0 kubenswrapper[29097]: I0312 18:47:34.052364 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 12 18:47:34.335270 master-0 kubenswrapper[29097]: I0312 18:47:34.335139 29097 generic.go:334] "Generic (PLEG): container finished" podID="ae6f8a81-a597-4c6d-ae77-b60b36190af6" containerID="f4c20b4649b8d21c9d9008c6625538aa5acc7cdd6d0e0f75fe04f9905bd67c0c" exitCode=0
Mar 12 18:47:34.335270 master-0 kubenswrapper[29097]: I0312 18:47:34.335202 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s5w8c" event={"ID":"ae6f8a81-a597-4c6d-ae77-b60b36190af6","Type":"ContainerDied","Data":"f4c20b4649b8d21c9d9008c6625538aa5acc7cdd6d0e0f75fe04f9905bd67c0c"}
Mar 12 18:47:34.478695 master-0 kubenswrapper[29097]: I0312 18:47:34.478614 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-t9v2r"]
Mar 12 18:47:34.720356 master-0 kubenswrapper[29097]: I0312 18:47:34.720191 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8tdl\" (UniqueName: \"kubernetes.io/projected/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-kube-api-access-m8tdl\") pod \"ironic-db-sync-mzfh7\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " pod="openstack/ironic-db-sync-mzfh7"
Mar 12 18:47:34.720356 master-0 kubenswrapper[29097]: I0312 18:47:34.720316 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-config-data\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:34.720356 master-0 kubenswrapper[29097]: I0312 18:47:34.720356 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-config-data\") pod \"keystone-bootstrap-t9v2r\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " pod="openstack/keystone-bootstrap-t9v2r"
Mar 12 18:47:34.720624 master-0 kubenswrapper[29097]: I0312 18:47:34.720404 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-scripts\") pod \"keystone-bootstrap-t9v2r\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " pod="openstack/keystone-bootstrap-t9v2r"
Mar 12 18:47:34.720624 master-0 kubenswrapper[29097]: I0312 18:47:34.720432 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-config-data-merged\") pod \"ironic-db-sync-mzfh7\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " pod="openstack/ironic-db-sync-mzfh7"
Mar 12 18:47:34.720624 master-0 kubenswrapper[29097]: I0312 18:47:34.720460 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-fernet-keys\") pod \"keystone-bootstrap-t9v2r\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " pod="openstack/keystone-bootstrap-t9v2r"
Mar 12 18:47:34.720624 master-0 kubenswrapper[29097]: I0312 18:47:34.720486 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-internal-tls-certs\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:34.720624 master-0 kubenswrapper[29097]: I0312 18:47:34.720557 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prwlg\" (UniqueName: \"kubernetes.io/projected/a97f2967-17f2-42cc-91b6-37f26b1a6964-kube-api-access-prwlg\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:34.720624 master-0 kubenswrapper[29097]: I0312 18:47:34.720606 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-combined-ca-bundle\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:34.720811 master-0 kubenswrapper[29097]: I0312 18:47:34.720659 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a97f2967-17f2-42cc-91b6-37f26b1a6964-httpd-run\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:34.720811 master-0 kubenswrapper[29097]: I0312 18:47:34.720698 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-combined-ca-bundle\") pod \"ironic-db-sync-mzfh7\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " pod="openstack/ironic-db-sync-mzfh7"
Mar 12 18:47:34.720811 master-0 kubenswrapper[29097]: I0312 18:47:34.720787 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-etc-podinfo\") pod \"ironic-db-sync-mzfh7\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " pod="openstack/ironic-db-sync-mzfh7"
Mar 12 18:47:34.720900 master-0 kubenswrapper[29097]: I0312 18:47:34.720836 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-15bfb998-3b5a-4ce6-84e0-d75d1f841c87\" (UniqueName: \"kubernetes.io/csi/topolvm.io^50ae7dd3-ed24-4f03-8717-dcd38b66edcc\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:34.720900 master-0 kubenswrapper[29097]: I0312 18:47:34.720889 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-scripts\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:34.720965 master-0 kubenswrapper[29097]: I0312 18:47:34.720919 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-credential-keys\") pod \"keystone-bootstrap-t9v2r\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " pod="openstack/keystone-bootstrap-t9v2r"
Mar 12 18:47:34.720965 master-0 kubenswrapper[29097]: I0312 18:47:34.720945 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2km46\" (UniqueName: \"kubernetes.io/projected/9c46351d-ae56-4f9f-ba28-1389bc23a289-kube-api-access-2km46\") pod \"keystone-bootstrap-t9v2r\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " pod="openstack/keystone-bootstrap-t9v2r"
Mar 12 18:47:34.721036 master-0 kubenswrapper[29097]: I0312 18:47:34.720972 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-combined-ca-bundle\") pod \"keystone-bootstrap-t9v2r\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " pod="openstack/keystone-bootstrap-t9v2r"
Mar 12 18:47:34.721036 master-0 kubenswrapper[29097]: I0312 18:47:34.721014 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-config-data\") pod \"ironic-db-sync-mzfh7\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " pod="openstack/ironic-db-sync-mzfh7"
Mar 12 18:47:34.721101 master-0 kubenswrapper[29097]: I0312 18:47:34.721038 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a97f2967-17f2-42cc-91b6-37f26b1a6964-logs\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:34.721101 master-0 kubenswrapper[29097]: I0312 18:47:34.721061 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-scripts\") pod \"ironic-db-sync-mzfh7\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " pod="openstack/ironic-db-sync-mzfh7"
Mar 12 18:47:34.735965 master-0 kubenswrapper[29097]: I0312 18:47:34.735910 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31ae5bb6-311e-4c4f-8dd1-3841f6f821ba" path="/var/lib/kubelet/pods/31ae5bb6-311e-4c4f-8dd1-3841f6f821ba/volumes"
Mar 12 18:47:34.736427 master-0 kubenswrapper[29097]: I0312 18:47:34.736356 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8cbcb3d-a29a-40f9-afe8-401b3db17fd0" path="/var/lib/kubelet/pods/a8cbcb3d-a29a-40f9-afe8-401b3db17fd0/volumes"
Mar 12 18:47:34.822090 master-0 kubenswrapper[29097]: I0312 18:47:34.822006 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prwlg\" (UniqueName: \"kubernetes.io/projected/a97f2967-17f2-42cc-91b6-37f26b1a6964-kube-api-access-prwlg\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:34.822316 master-0 kubenswrapper[29097]: I0312 18:47:34.822224 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-combined-ca-bundle\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:34.822316 master-0 kubenswrapper[29097]: I0312 18:47:34.822271 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a97f2967-17f2-42cc-91b6-37f26b1a6964-httpd-run\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:34.823390 master-0 kubenswrapper[29097]: I0312 18:47:34.822469 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-combined-ca-bundle\") pod \"ironic-db-sync-mzfh7\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " pod="openstack/ironic-db-sync-mzfh7"
Mar 12 18:47:34.823390 master-0 kubenswrapper[29097]: I0312 18:47:34.822549 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-etc-podinfo\") pod \"ironic-db-sync-mzfh7\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " pod="openstack/ironic-db-sync-mzfh7"
Mar 12 18:47:34.823390 master-0 kubenswrapper[29097]: I0312 18:47:34.822670 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-scripts\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:34.823390 master-0 kubenswrapper[29097]: I0312 18:47:34.822704 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-credential-keys\") pod \"keystone-bootstrap-t9v2r\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " pod="openstack/keystone-bootstrap-t9v2r"
Mar 12 18:47:34.823390 master-0 kubenswrapper[29097]: I0312 18:47:34.822731 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2km46\" (UniqueName: \"kubernetes.io/projected/9c46351d-ae56-4f9f-ba28-1389bc23a289-kube-api-access-2km46\") pod \"keystone-bootstrap-t9v2r\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " pod="openstack/keystone-bootstrap-t9v2r"
Mar 12 18:47:34.823390 master-0 kubenswrapper[29097]: I0312 18:47:34.822756 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-combined-ca-bundle\") pod \"keystone-bootstrap-t9v2r\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " pod="openstack/keystone-bootstrap-t9v2r"
Mar 12 18:47:34.823390 master-0 kubenswrapper[29097]: I0312 18:47:34.822817 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-config-data\") pod \"ironic-db-sync-mzfh7\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " pod="openstack/ironic-db-sync-mzfh7"
Mar 12 18:47:34.823390 master-0 kubenswrapper[29097]: I0312 18:47:34.822834 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a97f2967-17f2-42cc-91b6-37f26b1a6964-logs\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:34.823390 master-0 kubenswrapper[29097]: I0312 18:47:34.822839 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a97f2967-17f2-42cc-91b6-37f26b1a6964-httpd-run\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:34.823390 master-0 kubenswrapper[29097]: I0312 18:47:34.822851 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-scripts\") pod \"ironic-db-sync-mzfh7\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " pod="openstack/ironic-db-sync-mzfh7"
Mar 12 18:47:34.823390 master-0 kubenswrapper[29097]: I0312 18:47:34.822882 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8tdl\" (UniqueName: \"kubernetes.io/projected/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-kube-api-access-m8tdl\") pod \"ironic-db-sync-mzfh7\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " pod="openstack/ironic-db-sync-mzfh7"
Mar 12 18:47:34.823390 master-0 kubenswrapper[29097]: I0312 18:47:34.822938 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-config-data\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:34.823390 master-0 kubenswrapper[29097]: I0312 18:47:34.822975 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-config-data\") pod \"keystone-bootstrap-t9v2r\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " pod="openstack/keystone-bootstrap-t9v2r"
Mar 12 18:47:34.823390 master-0 kubenswrapper[29097]: I0312 18:47:34.823007 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-scripts\") pod \"keystone-bootstrap-t9v2r\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " pod="openstack/keystone-bootstrap-t9v2r"
Mar 12 18:47:34.823390 master-0 kubenswrapper[29097]: I0312 18:47:34.823025 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-config-data-merged\") pod \"ironic-db-sync-mzfh7\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " pod="openstack/ironic-db-sync-mzfh7"
Mar 12 18:47:34.823390 master-0 kubenswrapper[29097]: I0312 18:47:34.823056 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-fernet-keys\") pod \"keystone-bootstrap-t9v2r\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " pod="openstack/keystone-bootstrap-t9v2r"
Mar 12 18:47:34.823390 master-0 kubenswrapper[29097]: I0312 18:47:34.823076 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-internal-tls-certs\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:34.825887 master-0 kubenswrapper[29097]: I0312 18:47:34.825842 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a97f2967-17f2-42cc-91b6-37f26b1a6964-logs\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:34.826116 master-0 kubenswrapper[29097]: I0312 18:47:34.825851 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-config-data-merged\") pod \"ironic-db-sync-mzfh7\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " pod="openstack/ironic-db-sync-mzfh7"
Mar 12 18:47:34.826406 master-0 kubenswrapper[29097]: I0312 18:47:34.826373 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-etc-podinfo\") pod \"ironic-db-sync-mzfh7\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " pod="openstack/ironic-db-sync-mzfh7"
Mar 12 18:47:34.826567 master-0 kubenswrapper[29097]: I0312 18:47:34.826536 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-combined-ca-bundle\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:34.827047 master-0 kubenswrapper[29097]: I0312 18:47:34.827013 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-scripts\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:34.828211 master-0 kubenswrapper[29097]: I0312 18:47:34.827251 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-combined-ca-bundle\") pod \"keystone-bootstrap-t9v2r\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " pod="openstack/keystone-bootstrap-t9v2r"
Mar 12 18:47:34.828211 master-0 kubenswrapper[29097]: I0312 18:47:34.827669 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-scripts\") pod \"ironic-db-sync-mzfh7\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " pod="openstack/ironic-db-sync-mzfh7"
Mar 12 18:47:34.828211 master-0 kubenswrapper[29097]: I0312 18:47:34.827794 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-internal-tls-certs\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:34.828584 master-0 kubenswrapper[29097]: I0312 18:47:34.828395 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-combined-ca-bundle\") pod \"ironic-db-sync-mzfh7\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " pod="openstack/ironic-db-sync-mzfh7"
Mar 12 18:47:34.828824 master-0 kubenswrapper[29097]: I0312 18:47:34.828785 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-config-data\") pod \"ironic-db-sync-mzfh7\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " pod="openstack/ironic-db-sync-mzfh7"
Mar 12 18:47:34.829486 master-0 kubenswrapper[29097]: I0312 18:47:34.829452 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-config-data\") pod \"keystone-bootstrap-t9v2r\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " pod="openstack/keystone-bootstrap-t9v2r"
Mar 12 18:47:34.829567 master-0 kubenswrapper[29097]: I0312 18:47:34.829498 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-scripts\") pod \"keystone-bootstrap-t9v2r\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " pod="openstack/keystone-bootstrap-t9v2r"
Mar 12 18:47:34.829611 master-0 kubenswrapper[29097]: I0312 18:47:34.829538 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-fernet-keys\") pod \"keystone-bootstrap-t9v2r\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " pod="openstack/keystone-bootstrap-t9v2r"
Mar 12 18:47:34.832649 master-0 kubenswrapper[29097]: I0312 18:47:34.832599 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-config-data\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:34.832732 master-0 kubenswrapper[29097]: I0312 18:47:34.832690 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-credential-keys\") pod \"keystone-bootstrap-t9v2r\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " pod="openstack/keystone-bootstrap-t9v2r"
Mar 12 18:47:34.925182 master-0 kubenswrapper[29097]: I0312 18:47:34.925120 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-15bfb998-3b5a-4ce6-84e0-d75d1f841c87\" (UniqueName: \"kubernetes.io/csi/topolvm.io^50ae7dd3-ed24-4f03-8717-dcd38b66edcc\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:34.926879 master-0 kubenswrapper[29097]: I0312 18:47:34.926840 29097 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 12 18:47:34.926959 master-0 kubenswrapper[29097]: I0312 18:47:34.926900 29097 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-15bfb998-3b5a-4ce6-84e0-d75d1f841c87\" (UniqueName: \"kubernetes.io/csi/topolvm.io^50ae7dd3-ed24-4f03-8717-dcd38b66edcc\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/bdf06cc41b16558e5d4e2346226e79fd70cee97d9259625a849a3aa2d0277459/globalmount\"" pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:35.366694 master-0 kubenswrapper[29097]: I0312 18:47:35.360902 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2km46\" (UniqueName: \"kubernetes.io/projected/9c46351d-ae56-4f9f-ba28-1389bc23a289-kube-api-access-2km46\") pod \"keystone-bootstrap-t9v2r\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " pod="openstack/keystone-bootstrap-t9v2r"
Mar 12 18:47:35.376207 master-0 kubenswrapper[29097]: I0312 18:47:35.376120 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8tdl\" (UniqueName: \"kubernetes.io/projected/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-kube-api-access-m8tdl\") pod \"ironic-db-sync-mzfh7\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " pod="openstack/ironic-db-sync-mzfh7"
Mar 12 18:47:35.388280 master-0 kubenswrapper[29097]: I0312 18:47:35.388245 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prwlg\" (UniqueName: \"kubernetes.io/projected/a97f2967-17f2-42cc-91b6-37f26b1a6964-kube-api-access-prwlg\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0"
Mar 12 18:47:35.465612 master-0 kubenswrapper[29097]: I0312 18:47:35.465570 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:35.537001 master-0 kubenswrapper[29097]: I0312 18:47:35.536960 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-logs\") pod \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") "
Mar 12 18:47:35.537683 master-0 kubenswrapper[29097]: I0312 18:47:35.537035 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-public-tls-certs\") pod \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") "
Mar 12 18:47:35.537683 master-0 kubenswrapper[29097]: I0312 18:47:35.537223 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^310fb28b-3797-432d-a3e8-4c4039b4350a\") pod \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") "
Mar 12 18:47:35.537683 master-0 kubenswrapper[29097]: I0312 18:47:35.537344 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr5hq\" (UniqueName: \"kubernetes.io/projected/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-kube-api-access-jr5hq\") pod \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") "
Mar 12 18:47:35.537683 master-0 kubenswrapper[29097]: I0312 18:47:35.537378 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-config-data\") pod \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") "
Mar 12 18:47:35.537683 master-0 kubenswrapper[29097]: I0312 18:47:35.537529 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-scripts\") pod \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") "
Mar 12 18:47:35.537683 master-0 kubenswrapper[29097]: I0312 18:47:35.537555 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-httpd-run\") pod \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") "
Mar 12 18:47:35.537683 master-0 kubenswrapper[29097]: I0312 18:47:35.537648 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-combined-ca-bundle\") pod \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\" (UID: \"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500\") "
Mar 12 18:47:35.539141 master-0 kubenswrapper[29097]: I0312 18:47:35.538772 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-logs" (OuterVolumeSpecName: "logs") pod "fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500" (UID: "fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 18:47:35.539406 master-0 kubenswrapper[29097]: I0312 18:47:35.539313 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500" (UID: "fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 18:47:35.556135 master-0 kubenswrapper[29097]: I0312 18:47:35.556035 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-kube-api-access-jr5hq" (OuterVolumeSpecName: "kube-api-access-jr5hq") pod "fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500" (UID: "fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500"). InnerVolumeSpecName "kube-api-access-jr5hq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:47:35.589656 master-0 kubenswrapper[29097]: I0312 18:47:35.563595 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t9v2r"
Mar 12 18:47:35.589656 master-0 kubenswrapper[29097]: I0312 18:47:35.582249 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-mzfh7"
Mar 12 18:47:35.589656 master-0 kubenswrapper[29097]: I0312 18:47:35.583716 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-scripts" (OuterVolumeSpecName: "scripts") pod "fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500" (UID: "fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:35.608566 master-0 kubenswrapper[29097]: I0312 18:47:35.603334 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500" (UID: "fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:35.642381 master-0 kubenswrapper[29097]: I0312 18:47:35.640624 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:35.642381 master-0 kubenswrapper[29097]: I0312 18:47:35.640663 29097 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:35.642381 master-0 kubenswrapper[29097]: I0312 18:47:35.640674 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:35.642381 master-0 kubenswrapper[29097]: I0312 18:47:35.640684 29097 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:35.642381 master-0 kubenswrapper[29097]: I0312 18:47:35.640696 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr5hq\" (UniqueName: \"kubernetes.io/projected/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-kube-api-access-jr5hq\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:35.645601 master-0 kubenswrapper[29097]: I0312 18:47:35.644500 
29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500" (UID: "fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:35.659547 master-0 kubenswrapper[29097]: I0312 18:47:35.657704 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-config-data" (OuterVolumeSpecName: "config-data") pod "fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500" (UID: "fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:35.734202 master-0 kubenswrapper[29097]: I0312 18:47:35.733817 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-s5w8c" Mar 12 18:47:35.742318 master-0 kubenswrapper[29097]: I0312 18:47:35.742272 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:35.742318 master-0 kubenswrapper[29097]: I0312 18:47:35.742306 29097 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:35.846313 master-0 kubenswrapper[29097]: I0312 18:47:35.846251 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae6f8a81-a597-4c6d-ae77-b60b36190af6-scripts\") pod \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\" (UID: \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\") " Mar 12 18:47:35.846561 master-0 kubenswrapper[29097]: I0312 
18:47:35.846337 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae6f8a81-a597-4c6d-ae77-b60b36190af6-logs\") pod \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\" (UID: \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\") " Mar 12 18:47:35.846561 master-0 kubenswrapper[29097]: I0312 18:47:35.846540 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae6f8a81-a597-4c6d-ae77-b60b36190af6-config-data\") pod \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\" (UID: \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\") " Mar 12 18:47:35.846633 master-0 kubenswrapper[29097]: I0312 18:47:35.846597 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc85z\" (UniqueName: \"kubernetes.io/projected/ae6f8a81-a597-4c6d-ae77-b60b36190af6-kube-api-access-lc85z\") pod \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\" (UID: \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\") " Mar 12 18:47:35.846892 master-0 kubenswrapper[29097]: I0312 18:47:35.846877 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6f8a81-a597-4c6d-ae77-b60b36190af6-combined-ca-bundle\") pod \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\" (UID: \"ae6f8a81-a597-4c6d-ae77-b60b36190af6\") " Mar 12 18:47:35.851438 master-0 kubenswrapper[29097]: I0312 18:47:35.851377 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae6f8a81-a597-4c6d-ae77-b60b36190af6-kube-api-access-lc85z" (OuterVolumeSpecName: "kube-api-access-lc85z") pod "ae6f8a81-a597-4c6d-ae77-b60b36190af6" (UID: "ae6f8a81-a597-4c6d-ae77-b60b36190af6"). InnerVolumeSpecName "kube-api-access-lc85z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:47:35.851558 master-0 kubenswrapper[29097]: I0312 18:47:35.851515 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6f8a81-a597-4c6d-ae77-b60b36190af6-scripts" (OuterVolumeSpecName: "scripts") pod "ae6f8a81-a597-4c6d-ae77-b60b36190af6" (UID: "ae6f8a81-a597-4c6d-ae77-b60b36190af6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:35.852236 master-0 kubenswrapper[29097]: I0312 18:47:35.852190 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae6f8a81-a597-4c6d-ae77-b60b36190af6-logs" (OuterVolumeSpecName: "logs") pod "ae6f8a81-a597-4c6d-ae77-b60b36190af6" (UID: "ae6f8a81-a597-4c6d-ae77-b60b36190af6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:47:35.877453 master-0 kubenswrapper[29097]: I0312 18:47:35.877277 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6f8a81-a597-4c6d-ae77-b60b36190af6-config-data" (OuterVolumeSpecName: "config-data") pod "ae6f8a81-a597-4c6d-ae77-b60b36190af6" (UID: "ae6f8a81-a597-4c6d-ae77-b60b36190af6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:35.891355 master-0 kubenswrapper[29097]: I0312 18:47:35.891276 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6f8a81-a597-4c6d-ae77-b60b36190af6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae6f8a81-a597-4c6d-ae77-b60b36190af6" (UID: "ae6f8a81-a597-4c6d-ae77-b60b36190af6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:35.948504 master-0 kubenswrapper[29097]: I0312 18:47:35.948450 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6f8a81-a597-4c6d-ae77-b60b36190af6-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:35.948504 master-0 kubenswrapper[29097]: I0312 18:47:35.948487 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae6f8a81-a597-4c6d-ae77-b60b36190af6-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:35.948504 master-0 kubenswrapper[29097]: I0312 18:47:35.948498 29097 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae6f8a81-a597-4c6d-ae77-b60b36190af6-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:35.948504 master-0 kubenswrapper[29097]: I0312 18:47:35.948512 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae6f8a81-a597-4c6d-ae77-b60b36190af6-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:35.948806 master-0 kubenswrapper[29097]: I0312 18:47:35.948537 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc85z\" (UniqueName: \"kubernetes.io/projected/ae6f8a81-a597-4c6d-ae77-b60b36190af6-kube-api-access-lc85z\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:36.174362 master-0 kubenswrapper[29097]: I0312 18:47:36.174318 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76986c7db5-59cjq" Mar 12 18:47:36.183116 master-0 kubenswrapper[29097]: I0312 18:47:36.182986 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-15bfb998-3b5a-4ce6-84e0-d75d1f841c87\" (UniqueName: \"kubernetes.io/csi/topolvm.io^50ae7dd3-ed24-4f03-8717-dcd38b66edcc\") pod \"glance-16afb-default-internal-api-0\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:36.189557 master-0 kubenswrapper[29097]: I0312 18:47:36.189498 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^310fb28b-3797-432d-a3e8-4c4039b4350a" (OuterVolumeSpecName: "glance") pod "fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500" (UID: "fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500"). InnerVolumeSpecName "pvc-ec8aab56-f74f-48cf-9ada-d06acc37cf58". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 18:47:36.257251 master-0 kubenswrapper[29097]: I0312 18:47:36.257093 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:36.262479 master-0 kubenswrapper[29097]: I0312 18:47:36.262428 29097 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ec8aab56-f74f-48cf-9ada-d06acc37cf58\" (UniqueName: \"kubernetes.io/csi/topolvm.io^310fb28b-3797-432d-a3e8-4c4039b4350a\") on node \"master-0\" " Mar 12 18:47:36.264370 master-0 kubenswrapper[29097]: I0312 18:47:36.264308 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-mzfh7"] Mar 12 18:47:36.268100 master-0 kubenswrapper[29097]: W0312 18:47:36.267960 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64b3a2fa_455e_45a6_a3b4_9763b68a8faa.slice/crio-f1c909b3dbae7961faba9e5a0d2f7b6da8b9a95ab844ed7f63b95ddc1c204ac6 WatchSource:0}: Error finding container f1c909b3dbae7961faba9e5a0d2f7b6da8b9a95ab844ed7f63b95ddc1c204ac6: Status 404 returned error can't find the container with id f1c909b3dbae7961faba9e5a0d2f7b6da8b9a95ab844ed7f63b95ddc1c204ac6 Mar 12 18:47:36.286640 master-0 kubenswrapper[29097]: I0312 18:47:36.286599 29097 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 12 18:47:36.286884 master-0 kubenswrapper[29097]: I0312 18:47:36.286749 29097 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ec8aab56-f74f-48cf-9ada-d06acc37cf58" (UniqueName: "kubernetes.io/csi/topolvm.io^310fb28b-3797-432d-a3e8-4c4039b4350a") on node "master-0" Mar 12 18:47:36.361107 master-0 kubenswrapper[29097]: I0312 18:47:36.361039 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-mzfh7" event={"ID":"64b3a2fa-455e-45a6-a3b4-9763b68a8faa","Type":"ContainerStarted","Data":"f1c909b3dbae7961faba9e5a0d2f7b6da8b9a95ab844ed7f63b95ddc1c204ac6"} Mar 12 18:47:36.365008 master-0 kubenswrapper[29097]: I0312 18:47:36.363923 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-ovsdbserver-nb\") pod \"aa2283ab-4112-4a09-83e9-0d40cf04e864\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " Mar 12 18:47:36.365008 master-0 kubenswrapper[29097]: I0312 18:47:36.363986 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-config\") pod \"aa2283ab-4112-4a09-83e9-0d40cf04e864\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " Mar 12 18:47:36.365008 master-0 kubenswrapper[29097]: I0312 18:47:36.364100 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-dns-swift-storage-0\") pod \"aa2283ab-4112-4a09-83e9-0d40cf04e864\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " Mar 12 18:47:36.365008 master-0 kubenswrapper[29097]: I0312 18:47:36.364118 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-dns-svc\") pod 
\"aa2283ab-4112-4a09-83e9-0d40cf04e864\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " Mar 12 18:47:36.365008 master-0 kubenswrapper[29097]: I0312 18:47:36.364142 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-ovsdbserver-sb\") pod \"aa2283ab-4112-4a09-83e9-0d40cf04e864\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " Mar 12 18:47:36.365008 master-0 kubenswrapper[29097]: I0312 18:47:36.364227 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vpnq\" (UniqueName: \"kubernetes.io/projected/aa2283ab-4112-4a09-83e9-0d40cf04e864-kube-api-access-7vpnq\") pod \"aa2283ab-4112-4a09-83e9-0d40cf04e864\" (UID: \"aa2283ab-4112-4a09-83e9-0d40cf04e864\") " Mar 12 18:47:36.365008 master-0 kubenswrapper[29097]: I0312 18:47:36.364787 29097 reconciler_common.go:293] "Volume detached for volume \"pvc-ec8aab56-f74f-48cf-9ada-d06acc37cf58\" (UniqueName: \"kubernetes.io/csi/topolvm.io^310fb28b-3797-432d-a3e8-4c4039b4350a\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:36.389794 master-0 kubenswrapper[29097]: I0312 18:47:36.389545 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s5w8c" event={"ID":"ae6f8a81-a597-4c6d-ae77-b60b36190af6","Type":"ContainerDied","Data":"ce3cefb9cc08074338d1fd90e809169d3c44bc562d18b6a72f93f94b78a04843"} Mar 12 18:47:36.389794 master-0 kubenswrapper[29097]: I0312 18:47:36.389619 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce3cefb9cc08074338d1fd90e809169d3c44bc562d18b6a72f93f94b78a04843" Mar 12 18:47:36.389794 master-0 kubenswrapper[29097]: I0312 18:47:36.389743 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-s5w8c" Mar 12 18:47:36.401458 master-0 kubenswrapper[29097]: I0312 18:47:36.401395 29097 generic.go:334] "Generic (PLEG): container finished" podID="aa2283ab-4112-4a09-83e9-0d40cf04e864" containerID="3f24109038d9b5a61c5e5d31d04b3c8ecf1da95fc9aad9ab64af18a7e69364da" exitCode=0 Mar 12 18:47:36.401630 master-0 kubenswrapper[29097]: I0312 18:47:36.401547 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76986c7db5-59cjq" event={"ID":"aa2283ab-4112-4a09-83e9-0d40cf04e864","Type":"ContainerDied","Data":"3f24109038d9b5a61c5e5d31d04b3c8ecf1da95fc9aad9ab64af18a7e69364da"} Mar 12 18:47:36.401630 master-0 kubenswrapper[29097]: I0312 18:47:36.401590 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76986c7db5-59cjq" event={"ID":"aa2283ab-4112-4a09-83e9-0d40cf04e864","Type":"ContainerDied","Data":"c753879d4cbffbfaac00a52f592732ca68924bc4746062a760b1bf6a5616a535"} Mar 12 18:47:36.401700 master-0 kubenswrapper[29097]: I0312 18:47:36.401629 29097 scope.go:117] "RemoveContainer" containerID="3f24109038d9b5a61c5e5d31d04b3c8ecf1da95fc9aad9ab64af18a7e69364da" Mar 12 18:47:36.403940 master-0 kubenswrapper[29097]: I0312 18:47:36.403908 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76986c7db5-59cjq" Mar 12 18:47:36.405644 master-0 kubenswrapper[29097]: I0312 18:47:36.405594 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa2283ab-4112-4a09-83e9-0d40cf04e864-kube-api-access-7vpnq" (OuterVolumeSpecName: "kube-api-access-7vpnq") pod "aa2283ab-4112-4a09-83e9-0d40cf04e864" (UID: "aa2283ab-4112-4a09-83e9-0d40cf04e864"). InnerVolumeSpecName "kube-api-access-7vpnq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:47:36.424549 master-0 kubenswrapper[29097]: I0312 18:47:36.423281 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-external-api-0" event={"ID":"fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500","Type":"ContainerDied","Data":"7780af6d44fca17cf49ea36ce8bfc9c47aaa89af448f35da29e1893330bfe604"} Mar 12 18:47:36.424549 master-0 kubenswrapper[29097]: I0312 18:47:36.423450 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:47:36.448093 master-0 kubenswrapper[29097]: I0312 18:47:36.447645 29097 scope.go:117] "RemoveContainer" containerID="39e78e7380348f2aa48b190ad619b10858bc2b80440895be630cffc3336808d9" Mar 12 18:47:36.463712 master-0 kubenswrapper[29097]: I0312 18:47:36.463641 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "aa2283ab-4112-4a09-83e9-0d40cf04e864" (UID: "aa2283ab-4112-4a09-83e9-0d40cf04e864"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:47:36.467267 master-0 kubenswrapper[29097]: I0312 18:47:36.467236 29097 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:36.474964 master-0 kubenswrapper[29097]: I0312 18:47:36.474911 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vpnq\" (UniqueName: \"kubernetes.io/projected/aa2283ab-4112-4a09-83e9-0d40cf04e864-kube-api-access-7vpnq\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:36.475187 master-0 kubenswrapper[29097]: I0312 18:47:36.467772 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-t9v2r"] Mar 12 18:47:36.490268 master-0 kubenswrapper[29097]: I0312 18:47:36.489774 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aa2283ab-4112-4a09-83e9-0d40cf04e864" (UID: "aa2283ab-4112-4a09-83e9-0d40cf04e864"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:47:36.492184 master-0 kubenswrapper[29097]: W0312 18:47:36.492101 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c46351d_ae56_4f9f_ba28_1389bc23a289.slice/crio-caf76bd8daa29ed5bf3eeaa1750fad9908885630f786a1c0ba6e1f2d69c1817d WatchSource:0}: Error finding container caf76bd8daa29ed5bf3eeaa1750fad9908885630f786a1c0ba6e1f2d69c1817d: Status 404 returned error can't find the container with id caf76bd8daa29ed5bf3eeaa1750fad9908885630f786a1c0ba6e1f2d69c1817d Mar 12 18:47:36.492271 master-0 kubenswrapper[29097]: I0312 18:47:36.492180 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-16afb-default-external-api-0"] Mar 12 18:47:36.522642 master-0 kubenswrapper[29097]: I0312 18:47:36.522536 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aa2283ab-4112-4a09-83e9-0d40cf04e864" (UID: "aa2283ab-4112-4a09-83e9-0d40cf04e864"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:47:36.528899 master-0 kubenswrapper[29097]: I0312 18:47:36.528798 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-16afb-default-external-api-0"] Mar 12 18:47:36.540232 master-0 kubenswrapper[29097]: I0312 18:47:36.540163 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aa2283ab-4112-4a09-83e9-0d40cf04e864" (UID: "aa2283ab-4112-4a09-83e9-0d40cf04e864"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:47:36.541919 master-0 kubenswrapper[29097]: I0312 18:47:36.541875 29097 scope.go:117] "RemoveContainer" containerID="3f24109038d9b5a61c5e5d31d04b3c8ecf1da95fc9aad9ab64af18a7e69364da" Mar 12 18:47:36.543584 master-0 kubenswrapper[29097]: E0312 18:47:36.543552 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f24109038d9b5a61c5e5d31d04b3c8ecf1da95fc9aad9ab64af18a7e69364da\": container with ID starting with 3f24109038d9b5a61c5e5d31d04b3c8ecf1da95fc9aad9ab64af18a7e69364da not found: ID does not exist" containerID="3f24109038d9b5a61c5e5d31d04b3c8ecf1da95fc9aad9ab64af18a7e69364da" Mar 12 18:47:36.543816 master-0 kubenswrapper[29097]: I0312 18:47:36.543787 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f24109038d9b5a61c5e5d31d04b3c8ecf1da95fc9aad9ab64af18a7e69364da"} err="failed to get container status \"3f24109038d9b5a61c5e5d31d04b3c8ecf1da95fc9aad9ab64af18a7e69364da\": rpc error: code = NotFound desc = could not find container \"3f24109038d9b5a61c5e5d31d04b3c8ecf1da95fc9aad9ab64af18a7e69364da\": container with ID starting with 3f24109038d9b5a61c5e5d31d04b3c8ecf1da95fc9aad9ab64af18a7e69364da not found: ID does not exist" Mar 12 18:47:36.545395 master-0 kubenswrapper[29097]: I0312 18:47:36.545363 29097 scope.go:117] "RemoveContainer" containerID="39e78e7380348f2aa48b190ad619b10858bc2b80440895be630cffc3336808d9" Mar 12 18:47:36.552238 master-0 kubenswrapper[29097]: I0312 18:47:36.552162 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-16afb-default-external-api-0"] Mar 12 18:47:36.552582 master-0 kubenswrapper[29097]: E0312 18:47:36.552551 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39e78e7380348f2aa48b190ad619b10858bc2b80440895be630cffc3336808d9\": container with 
ID starting with 39e78e7380348f2aa48b190ad619b10858bc2b80440895be630cffc3336808d9 not found: ID does not exist" containerID="39e78e7380348f2aa48b190ad619b10858bc2b80440895be630cffc3336808d9" Mar 12 18:47:36.552673 master-0 kubenswrapper[29097]: I0312 18:47:36.552589 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39e78e7380348f2aa48b190ad619b10858bc2b80440895be630cffc3336808d9"} err="failed to get container status \"39e78e7380348f2aa48b190ad619b10858bc2b80440895be630cffc3336808d9\": rpc error: code = NotFound desc = could not find container \"39e78e7380348f2aa48b190ad619b10858bc2b80440895be630cffc3336808d9\": container with ID starting with 39e78e7380348f2aa48b190ad619b10858bc2b80440895be630cffc3336808d9 not found: ID does not exist" Mar 12 18:47:36.552673 master-0 kubenswrapper[29097]: I0312 18:47:36.552615 29097 scope.go:117] "RemoveContainer" containerID="2d405a5596272e729efd32e738956af18d9076e8841ff7afb10774b8a4545401" Mar 12 18:47:36.552779 master-0 kubenswrapper[29097]: E0312 18:47:36.552753 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa2283ab-4112-4a09-83e9-0d40cf04e864" containerName="init" Mar 12 18:47:36.552779 master-0 kubenswrapper[29097]: I0312 18:47:36.552777 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa2283ab-4112-4a09-83e9-0d40cf04e864" containerName="init" Mar 12 18:47:36.552850 master-0 kubenswrapper[29097]: E0312 18:47:36.552798 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500" containerName="glance-httpd" Mar 12 18:47:36.552850 master-0 kubenswrapper[29097]: I0312 18:47:36.552806 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500" containerName="glance-httpd" Mar 12 18:47:36.552850 master-0 kubenswrapper[29097]: E0312 18:47:36.552827 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500" 
containerName="glance-log"
Mar 12 18:47:36.552850 master-0 kubenswrapper[29097]: I0312 18:47:36.552833 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500" containerName="glance-log"
Mar 12 18:47:36.552850 master-0 kubenswrapper[29097]: E0312 18:47:36.552850 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6f8a81-a597-4c6d-ae77-b60b36190af6" containerName="placement-db-sync"
Mar 12 18:47:36.552991 master-0 kubenswrapper[29097]: I0312 18:47:36.552858 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6f8a81-a597-4c6d-ae77-b60b36190af6" containerName="placement-db-sync"
Mar 12 18:47:36.552991 master-0 kubenswrapper[29097]: E0312 18:47:36.552881 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa2283ab-4112-4a09-83e9-0d40cf04e864" containerName="dnsmasq-dns"
Mar 12 18:47:36.552991 master-0 kubenswrapper[29097]: I0312 18:47:36.552888 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa2283ab-4112-4a09-83e9-0d40cf04e864" containerName="dnsmasq-dns"
Mar 12 18:47:36.553124 master-0 kubenswrapper[29097]: I0312 18:47:36.553085 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa2283ab-4112-4a09-83e9-0d40cf04e864" containerName="dnsmasq-dns"
Mar 12 18:47:36.553170 master-0 kubenswrapper[29097]: I0312 18:47:36.553129 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500" containerName="glance-httpd"
Mar 12 18:47:36.553170 master-0 kubenswrapper[29097]: I0312 18:47:36.553147 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500" containerName="glance-log"
Mar 12 18:47:36.553170 master-0 kubenswrapper[29097]: I0312 18:47:36.553169 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae6f8a81-a597-4c6d-ae77-b60b36190af6" containerName="placement-db-sync"
Mar 12 18:47:36.554336 master-0 kubenswrapper[29097]: I0312 18:47:36.554309 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.556886 master-0 kubenswrapper[29097]: I0312 18:47:36.556831 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-config" (OuterVolumeSpecName: "config") pod "aa2283ab-4112-4a09-83e9-0d40cf04e864" (UID: "aa2283ab-4112-4a09-83e9-0d40cf04e864"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:47:36.557204 master-0 kubenswrapper[29097]: I0312 18:47:36.557179 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-16afb-default-external-config-data"
Mar 12 18:47:36.557616 master-0 kubenswrapper[29097]: I0312 18:47:36.557597 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Mar 12 18:47:36.565600 master-0 kubenswrapper[29097]: I0312 18:47:36.565547 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-16afb-default-external-api-0"]
Mar 12 18:47:36.579801 master-0 kubenswrapper[29097]: I0312 18:47:36.579747 29097 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 12 18:47:36.579986 master-0 kubenswrapper[29097]: I0312 18:47:36.579814 29097 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-config\") on node \"master-0\" DevicePath \"\""
Mar 12 18:47:36.579986 master-0 kubenswrapper[29097]: I0312 18:47:36.579826 29097 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 12 18:47:36.579986 master-0 kubenswrapper[29097]: I0312 18:47:36.579835 29097 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa2283ab-4112-4a09-83e9-0d40cf04e864-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 12 18:47:36.681753 master-0 kubenswrapper[29097]: I0312 18:47:36.681704 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c5395ae3-4fe1-4ada-85ee-30841c1ad513-httpd-run\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.682018 master-0 kubenswrapper[29097]: I0312 18:47:36.682001 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5395ae3-4fe1-4ada-85ee-30841c1ad513-logs\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.682149 master-0 kubenswrapper[29097]: I0312 18:47:36.682133 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-combined-ca-bundle\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.682223 master-0 kubenswrapper[29097]: I0312 18:47:36.682211 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ec8aab56-f74f-48cf-9ada-d06acc37cf58\" (UniqueName: \"kubernetes.io/csi/topolvm.io^310fb28b-3797-432d-a3e8-4c4039b4350a\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.682331 master-0 kubenswrapper[29097]: I0312 18:47:36.682318 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fbpx\" (UniqueName: \"kubernetes.io/projected/c5395ae3-4fe1-4ada-85ee-30841c1ad513-kube-api-access-4fbpx\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.682465 master-0 kubenswrapper[29097]: I0312 18:47:36.682447 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-scripts\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.682669 master-0 kubenswrapper[29097]: I0312 18:47:36.682649 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-config-data\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.683080 master-0 kubenswrapper[29097]: I0312 18:47:36.682754 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-public-tls-certs\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.687800 master-0 kubenswrapper[29097]: I0312 18:47:36.686983 29097 scope.go:117] "RemoveContainer" containerID="2a280dbc3d340c8a9788c20830e57acb6a9eeaf873b9b86c6b944bfdb4eecdc1"
Mar 12 18:47:36.790684 master-0 kubenswrapper[29097]: I0312 18:47:36.786165 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fbpx\" (UniqueName: \"kubernetes.io/projected/c5395ae3-4fe1-4ada-85ee-30841c1ad513-kube-api-access-4fbpx\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.790684 master-0 kubenswrapper[29097]: I0312 18:47:36.786506 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-scripts\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.790684 master-0 kubenswrapper[29097]: I0312 18:47:36.786641 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-config-data\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.790684 master-0 kubenswrapper[29097]: I0312 18:47:36.786672 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-public-tls-certs\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.790684 master-0 kubenswrapper[29097]: I0312 18:47:36.786772 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c5395ae3-4fe1-4ada-85ee-30841c1ad513-httpd-run\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.790684 master-0 kubenswrapper[29097]: I0312 18:47:36.786829 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5395ae3-4fe1-4ada-85ee-30841c1ad513-logs\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.790684 master-0 kubenswrapper[29097]: I0312 18:47:36.786902 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-combined-ca-bundle\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.790684 master-0 kubenswrapper[29097]: I0312 18:47:36.786921 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ec8aab56-f74f-48cf-9ada-d06acc37cf58\" (UniqueName: \"kubernetes.io/csi/topolvm.io^310fb28b-3797-432d-a3e8-4c4039b4350a\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.790684 master-0 kubenswrapper[29097]: I0312 18:47:36.787337 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c5395ae3-4fe1-4ada-85ee-30841c1ad513-httpd-run\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.790684 master-0 kubenswrapper[29097]: I0312 18:47:36.787427 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5395ae3-4fe1-4ada-85ee-30841c1ad513-logs\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.791888 master-0 kubenswrapper[29097]: I0312 18:47:36.791853 29097 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 12 18:47:36.791955 master-0 kubenswrapper[29097]: I0312 18:47:36.791906 29097 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ec8aab56-f74f-48cf-9ada-d06acc37cf58\" (UniqueName: \"kubernetes.io/csi/topolvm.io^310fb28b-3797-432d-a3e8-4c4039b4350a\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/a675104d86a4ce743943f7962ef4d34dd002b87ad3cb26bbb0067dde16060ad0/globalmount\"" pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.793410 master-0 kubenswrapper[29097]: I0312 18:47:36.792690 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500" path="/var/lib/kubelet/pods/fdd5e66e-1e1a-42c6-a0ba-5ed4c31db500/volumes"
Mar 12 18:47:36.797055 master-0 kubenswrapper[29097]: I0312 18:47:36.796854 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-combined-ca-bundle\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.797618 master-0 kubenswrapper[29097]: I0312 18:47:36.797568 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-public-tls-certs\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.798617 master-0 kubenswrapper[29097]: I0312 18:47:36.798590 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6b99b5d8f4-vzr8p"]
Mar 12 18:47:36.802529 master-0 kubenswrapper[29097]: I0312 18:47:36.802058 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-scripts\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.803254 master-0 kubenswrapper[29097]: I0312 18:47:36.803176 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b99b5d8f4-vzr8p"]
Mar 12 18:47:36.803551 master-0 kubenswrapper[29097]: I0312 18:47:36.803539 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:36.810713 master-0 kubenswrapper[29097]: I0312 18:47:36.807499 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-config-data\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.810713 master-0 kubenswrapper[29097]: I0312 18:47:36.809433 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Mar 12 18:47:36.810713 master-0 kubenswrapper[29097]: I0312 18:47:36.809496 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Mar 12 18:47:36.810713 master-0 kubenswrapper[29097]: I0312 18:47:36.809729 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Mar 12 18:47:36.810713 master-0 kubenswrapper[29097]: I0312 18:47:36.809771 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Mar 12 18:47:36.811967 master-0 kubenswrapper[29097]: I0312 18:47:36.811380 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76986c7db5-59cjq"]
Mar 12 18:47:36.817205 master-0 kubenswrapper[29097]: I0312 18:47:36.817144 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fbpx\" (UniqueName: \"kubernetes.io/projected/c5395ae3-4fe1-4ada-85ee-30841c1ad513-kube-api-access-4fbpx\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:36.890591 master-0 kubenswrapper[29097]: I0312 18:47:36.889002 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76986c7db5-59cjq"]
Mar 12 18:47:36.890591 master-0 kubenswrapper[29097]: I0312 18:47:36.889111 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlqsv\" (UniqueName: \"kubernetes.io/projected/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-kube-api-access-hlqsv\") pod \"placement-6b99b5d8f4-vzr8p\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") " pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:36.890591 master-0 kubenswrapper[29097]: I0312 18:47:36.889256 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-config-data\") pod \"placement-6b99b5d8f4-vzr8p\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") " pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:36.890591 master-0 kubenswrapper[29097]: I0312 18:47:36.889980 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-combined-ca-bundle\") pod \"placement-6b99b5d8f4-vzr8p\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") " pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:36.890881 master-0 kubenswrapper[29097]: I0312 18:47:36.890592 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-public-tls-certs\") pod \"placement-6b99b5d8f4-vzr8p\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") " pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:36.890881 master-0 kubenswrapper[29097]: I0312 18:47:36.890646 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-logs\") pod \"placement-6b99b5d8f4-vzr8p\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") " pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:36.890881 master-0 kubenswrapper[29097]: I0312 18:47:36.890715 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-internal-tls-certs\") pod \"placement-6b99b5d8f4-vzr8p\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") " pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:36.893593 master-0 kubenswrapper[29097]: I0312 18:47:36.891413 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-scripts\") pod \"placement-6b99b5d8f4-vzr8p\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") " pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:36.946304 master-0 kubenswrapper[29097]: I0312 18:47:36.946252 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-16afb-default-internal-api-0"]
Mar 12 18:47:36.992962 master-0 kubenswrapper[29097]: I0312 18:47:36.992895 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-combined-ca-bundle\") pod \"placement-6b99b5d8f4-vzr8p\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") " pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:36.993311 master-0 kubenswrapper[29097]: I0312 18:47:36.993269 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-public-tls-certs\") pod \"placement-6b99b5d8f4-vzr8p\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") " pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:36.993504 master-0 kubenswrapper[29097]: I0312 18:47:36.993483 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-logs\") pod \"placement-6b99b5d8f4-vzr8p\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") " pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:36.993739 master-0 kubenswrapper[29097]: I0312 18:47:36.993715 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-internal-tls-certs\") pod \"placement-6b99b5d8f4-vzr8p\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") " pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:36.994048 master-0 kubenswrapper[29097]: I0312 18:47:36.994019 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-logs\") pod \"placement-6b99b5d8f4-vzr8p\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") " pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:36.994189 master-0 kubenswrapper[29097]: I0312 18:47:36.994167 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-scripts\") pod \"placement-6b99b5d8f4-vzr8p\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") " pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:36.994770 master-0 kubenswrapper[29097]: I0312 18:47:36.994738 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlqsv\" (UniqueName: \"kubernetes.io/projected/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-kube-api-access-hlqsv\") pod \"placement-6b99b5d8f4-vzr8p\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") " pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:36.994921 master-0 kubenswrapper[29097]: I0312 18:47:36.994902 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-config-data\") pod \"placement-6b99b5d8f4-vzr8p\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") " pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:36.997858 master-0 kubenswrapper[29097]: I0312 18:47:36.996218 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-combined-ca-bundle\") pod \"placement-6b99b5d8f4-vzr8p\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") " pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:36.997858 master-0 kubenswrapper[29097]: I0312 18:47:36.996338 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-public-tls-certs\") pod \"placement-6b99b5d8f4-vzr8p\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") " pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:36.998322 master-0 kubenswrapper[29097]: I0312 18:47:36.998303 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-scripts\") pod \"placement-6b99b5d8f4-vzr8p\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") " pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:36.998672 master-0 kubenswrapper[29097]: I0312 18:47:36.998619 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-internal-tls-certs\") pod \"placement-6b99b5d8f4-vzr8p\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") " pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:37.003549 master-0 kubenswrapper[29097]: I0312 18:47:37.000670 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-config-data\") pod \"placement-6b99b5d8f4-vzr8p\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") " pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:37.016081 master-0 kubenswrapper[29097]: I0312 18:47:37.016043 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlqsv\" (UniqueName: \"kubernetes.io/projected/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-kube-api-access-hlqsv\") pod \"placement-6b99b5d8f4-vzr8p\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") " pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:37.076983 master-0 kubenswrapper[29097]: E0312 18:47:37.076920 29097 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa2283ab_4112_4a09_83e9_0d40cf04e864.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa2283ab_4112_4a09_83e9_0d40cf04e864.slice/crio-c753879d4cbffbfaac00a52f592732ca68924bc4746062a760b1bf6a5616a535\": RecentStats: unable to find data in memory cache]"
Mar 12 18:47:37.192298 master-0 kubenswrapper[29097]: I0312 18:47:37.192230 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:37.445246 master-0 kubenswrapper[29097]: I0312 18:47:37.445188 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t9v2r" event={"ID":"9c46351d-ae56-4f9f-ba28-1389bc23a289","Type":"ContainerStarted","Data":"f47226072b7be0f3299ec26189e52658c1414a426eba48c157677885ea5eceec"}
Mar 12 18:47:37.445246 master-0 kubenswrapper[29097]: I0312 18:47:37.445241 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t9v2r" event={"ID":"9c46351d-ae56-4f9f-ba28-1389bc23a289","Type":"ContainerStarted","Data":"caf76bd8daa29ed5bf3eeaa1750fad9908885630f786a1c0ba6e1f2d69c1817d"}
Mar 12 18:47:37.456738 master-0 kubenswrapper[29097]: I0312 18:47:37.456653 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-db-sync-xn4dx" event={"ID":"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36","Type":"ContainerStarted","Data":"4c19fada3cc7083d5e22f843b7d5c27c91ec9229d266d2163b09fcf6794d3ef8"}
Mar 12 18:47:37.464941 master-0 kubenswrapper[29097]: I0312 18:47:37.464887 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-internal-api-0" event={"ID":"a97f2967-17f2-42cc-91b6-37f26b1a6964","Type":"ContainerStarted","Data":"123bb1dcae921cb17eb59b0cf5a71f66535bfa8753b6bb5c6a22eef7dc466288"}
Mar 12 18:47:37.510551 master-0 kubenswrapper[29097]: I0312 18:47:37.506287 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-t9v2r" podStartSLOduration=4.506262871 podStartE2EDuration="4.506262871s" podCreationTimestamp="2026-03-12 18:47:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:47:37.477878432 +0000 UTC m=+1097.031858529" watchObservedRunningTime="2026-03-12 18:47:37.506262871 +0000 UTC m=+1097.060242968"
Mar 12 18:47:37.510551 master-0 kubenswrapper[29097]: I0312 18:47:37.506912 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-fa62f-db-sync-xn4dx" podStartSLOduration=4.591345166 podStartE2EDuration="27.506906907s" podCreationTimestamp="2026-03-12 18:47:10 +0000 UTC" firstStartedPulling="2026-03-12 18:47:12.84723051 +0000 UTC m=+1072.401210607" lastFinishedPulling="2026-03-12 18:47:35.762792251 +0000 UTC m=+1095.316772348" observedRunningTime="2026-03-12 18:47:37.500610169 +0000 UTC m=+1097.054590286" watchObservedRunningTime="2026-03-12 18:47:37.506906907 +0000 UTC m=+1097.060887004"
Mar 12 18:47:37.687172 master-0 kubenswrapper[29097]: I0312 18:47:37.678660 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b99b5d8f4-vzr8p"]
Mar 12 18:47:38.020065 master-0 kubenswrapper[29097]: I0312 18:47:38.014280 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ec8aab56-f74f-48cf-9ada-d06acc37cf58\" (UniqueName: \"kubernetes.io/csi/topolvm.io^310fb28b-3797-432d-a3e8-4c4039b4350a\") pod \"glance-16afb-default-external-api-0\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:38.110497 master-0 kubenswrapper[29097]: I0312 18:47:38.110412 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:47:38.492601 master-0 kubenswrapper[29097]: I0312 18:47:38.489670 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b99b5d8f4-vzr8p" event={"ID":"2a88a474-9ce7-41df-9e1b-b1d78fe4c476","Type":"ContainerStarted","Data":"d4e28b6422d75d9d0d2d71ea5a2c1bdb041052b43c3dee32cd9728e1a9f073e1"}
Mar 12 18:47:38.492601 master-0 kubenswrapper[29097]: I0312 18:47:38.489723 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b99b5d8f4-vzr8p" event={"ID":"2a88a474-9ce7-41df-9e1b-b1d78fe4c476","Type":"ContainerStarted","Data":"c52fc3dc2c25aed0273af45409d12c36842ca54255b24b4af72e10290786131a"}
Mar 12 18:47:38.492601 master-0 kubenswrapper[29097]: I0312 18:47:38.489732 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b99b5d8f4-vzr8p" event={"ID":"2a88a474-9ce7-41df-9e1b-b1d78fe4c476","Type":"ContainerStarted","Data":"e3f0c0762369d201942b1271c0083c0ab314c0806695dd6f381c0d7433bc04f1"}
Mar 12 18:47:38.492601 master-0 kubenswrapper[29097]: I0312 18:47:38.490940 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:38.492601 master-0 kubenswrapper[29097]: I0312 18:47:38.490967 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:47:38.499564 master-0 kubenswrapper[29097]: I0312 18:47:38.498379 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-internal-api-0" event={"ID":"a97f2967-17f2-42cc-91b6-37f26b1a6964","Type":"ContainerStarted","Data":"8c43ebb930a5dde6a879f0f926d002f1286c3d2bef69ff7801797e8735bb677b"}
Mar 12 18:47:38.499564 master-0 kubenswrapper[29097]: I0312 18:47:38.498434 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-internal-api-0" event={"ID":"a97f2967-17f2-42cc-91b6-37f26b1a6964","Type":"ContainerStarted","Data":"ae5de7d9bd60ff489b22e6c977fb1766db39f08cfc50c3c13d856d5ea7934c54"}
Mar 12 18:47:38.535509 master-0 kubenswrapper[29097]: I0312 18:47:38.535426 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6b99b5d8f4-vzr8p" podStartSLOduration=2.535407668 podStartE2EDuration="2.535407668s" podCreationTimestamp="2026-03-12 18:47:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:47:38.512906826 +0000 UTC m=+1098.066886943" watchObservedRunningTime="2026-03-12 18:47:38.535407668 +0000 UTC m=+1098.089387765"
Mar 12 18:47:38.552035 master-0 kubenswrapper[29097]: I0312 18:47:38.551929 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-16afb-default-internal-api-0" podStartSLOduration=5.551910689 podStartE2EDuration="5.551910689s" podCreationTimestamp="2026-03-12 18:47:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:47:38.543685184 +0000 UTC m=+1098.097665281" watchObservedRunningTime="2026-03-12 18:47:38.551910689 +0000 UTC m=+1098.105890796"
Mar 12 18:47:38.762854 master-0 kubenswrapper[29097]: I0312 18:47:38.758175 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa2283ab-4112-4a09-83e9-0d40cf04e864" path="/var/lib/kubelet/pods/aa2283ab-4112-4a09-83e9-0d40cf04e864/volumes"
Mar 12 18:47:38.762854 master-0 kubenswrapper[29097]: I0312 18:47:38.759405 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-16afb-default-external-api-0"]
Mar 12 18:47:38.815006 master-0 kubenswrapper[29097]: E0312 18:47:38.814937 29097 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading blob sha256:26237e99a94e503c86c2e06bca4ac1be4a64af76f7952ad111abaf0c4aad685b: Digest did not match, expected sha256:26237e99a94e503c86c2e06bca4ac1be4a64af76f7952ad111abaf0c4aad685b, got sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" image="quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:0312c8ff4b98bfc1e0c9bb717adb3247305749e34533eff91099c88ed9a1ed7f"
Mar 12 18:47:38.815254 master-0 kubenswrapper[29097]: E0312 18:47:38.815182 29097 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:0312c8ff4b98bfc1e0c9bb717adb3247305749e34533eff91099c88ed9a1ed7f,Command:[/bin/bash],Args:[-c /usr/local/bin/container-scripts/init.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IronicPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:IronicPassword,Optional:nil,},},},EnvVar{Name:PodName,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:PodNamespace,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:ProvisionNetwork,Value:,ValueFrom:nil,},EnvVar{Name:DatabaseHost,Value:openstack.openstack.svc,ValueFrom:nil,},EnvVar{Name:DatabaseName,Value:ironic,ValueFrom:nil,},EnvVar{Name:DeployHTTPURL,Value:,ValueFrom:nil,},EnvVar{Name:IngressDomain,Value:,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-merged,ReadOnly:false,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-podinfo,ReadOnly:false,MountPath:/etc/podinfo,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m8tdl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-db-sync-mzfh7_openstack(64b3a2fa-455e-45a6-a3b4-9763b68a8faa): ErrImagePull: reading blob sha256:26237e99a94e503c86c2e06bca4ac1be4a64af76f7952ad111abaf0c4aad685b: Digest did not match, expected sha256:26237e99a94e503c86c2e06bca4ac1be4a64af76f7952ad111abaf0c4aad685b, got sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" logger="UnhandledError"
Mar 12 18:47:38.816502 master-0 kubenswrapper[29097]: E0312 18:47:38.816432 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"reading blob sha256:26237e99a94e503c86c2e06bca4ac1be4a64af76f7952ad111abaf0c4aad685b: Digest did not match, expected sha256:26237e99a94e503c86c2e06bca4ac1be4a64af76f7952ad111abaf0c4aad685b, got sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\"" pod="openstack/ironic-db-sync-mzfh7" podUID="64b3a2fa-455e-45a6-a3b4-9763b68a8faa"
Mar 12 18:47:39.512636 master-0 kubenswrapper[29097]: I0312 18:47:39.511858 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-external-api-0" event={"ID":"c5395ae3-4fe1-4ada-85ee-30841c1ad513","Type":"ContainerStarted","Data":"cc2f30d353ab6bd817917a3b359d2e1b91746e6c161b42424dab9cef450ca08e"}
Mar 12 18:47:39.512636 master-0 kubenswrapper[29097]: I0312 18:47:39.511923 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-external-api-0" event={"ID":"c5395ae3-4fe1-4ada-85ee-30841c1ad513","Type":"ContainerStarted","Data":"38f910c3a707c752a06e1f75b31c52b08092efae711d5ccb0f4fee701040ce09"}
Mar 12 18:47:39.514749 master-0 kubenswrapper[29097]: E0312 18:47:39.514704 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:0312c8ff4b98bfc1e0c9bb717adb3247305749e34533eff91099c88ed9a1ed7f\\\"\"" pod="openstack/ironic-db-sync-mzfh7" podUID="64b3a2fa-455e-45a6-a3b4-9763b68a8faa"
Mar 12 18:47:40.526672 master-0 kubenswrapper[29097]: I0312 18:47:40.526633 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-external-api-0" event={"ID":"c5395ae3-4fe1-4ada-85ee-30841c1ad513","Type":"ContainerStarted","Data":"f762a2f6987698bc02d141b61a07e1fd7ad2686796e1e73a12727367dc957574"}
Mar 12 18:47:40.564784 master-0 kubenswrapper[29097]: I0312 18:47:40.564694 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-16afb-default-external-api-0" podStartSLOduration=4.5646717169999995 podStartE2EDuration="4.564671717s" podCreationTimestamp="2026-03-12 18:47:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:47:40.555730074 +0000 UTC m=+1100.109710191" watchObservedRunningTime="2026-03-12 18:47:40.564671717 +0000 UTC m=+1100.118651814"
Mar 12 18:47:41.537407 master-0 kubenswrapper[29097]: I0312 18:47:41.537339 29097 generic.go:334] "Generic (PLEG): container finished" podID="9c46351d-ae56-4f9f-ba28-1389bc23a289" containerID="f47226072b7be0f3299ec26189e52658c1414a426eba48c157677885ea5eceec" exitCode=0
Mar 12 18:47:41.537926 master-0 kubenswrapper[29097]: I0312 18:47:41.537645 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t9v2r" event={"ID":"9c46351d-ae56-4f9f-ba28-1389bc23a289","Type":"ContainerDied","Data":"f47226072b7be0f3299ec26189e52658c1414a426eba48c157677885ea5eceec"}
Mar 12 18:47:42.956020 master-0 kubenswrapper[29097]: I0312 18:47:42.955944 29097 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-bootstrap-t9v2r" Mar 12 18:47:43.042944 master-0 kubenswrapper[29097]: I0312 18:47:43.042868 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-scripts\") pod \"9c46351d-ae56-4f9f-ba28-1389bc23a289\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " Mar 12 18:47:43.042944 master-0 kubenswrapper[29097]: I0312 18:47:43.042931 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-combined-ca-bundle\") pod \"9c46351d-ae56-4f9f-ba28-1389bc23a289\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " Mar 12 18:47:43.043212 master-0 kubenswrapper[29097]: I0312 18:47:43.043054 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-fernet-keys\") pod \"9c46351d-ae56-4f9f-ba28-1389bc23a289\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " Mar 12 18:47:43.043212 master-0 kubenswrapper[29097]: I0312 18:47:43.043164 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-credential-keys\") pod \"9c46351d-ae56-4f9f-ba28-1389bc23a289\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " Mar 12 18:47:43.043310 master-0 kubenswrapper[29097]: I0312 18:47:43.043247 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-config-data\") pod \"9c46351d-ae56-4f9f-ba28-1389bc23a289\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " Mar 12 18:47:43.043366 master-0 kubenswrapper[29097]: I0312 18:47:43.043312 29097 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-2km46\" (UniqueName: \"kubernetes.io/projected/9c46351d-ae56-4f9f-ba28-1389bc23a289-kube-api-access-2km46\") pod \"9c46351d-ae56-4f9f-ba28-1389bc23a289\" (UID: \"9c46351d-ae56-4f9f-ba28-1389bc23a289\") " Mar 12 18:47:43.052577 master-0 kubenswrapper[29097]: I0312 18:47:43.049124 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-scripts" (OuterVolumeSpecName: "scripts") pod "9c46351d-ae56-4f9f-ba28-1389bc23a289" (UID: "9c46351d-ae56-4f9f-ba28-1389bc23a289"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:43.052577 master-0 kubenswrapper[29097]: I0312 18:47:43.049159 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9c46351d-ae56-4f9f-ba28-1389bc23a289" (UID: "9c46351d-ae56-4f9f-ba28-1389bc23a289"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:43.052577 master-0 kubenswrapper[29097]: I0312 18:47:43.049183 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9c46351d-ae56-4f9f-ba28-1389bc23a289" (UID: "9c46351d-ae56-4f9f-ba28-1389bc23a289"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:43.066679 master-0 kubenswrapper[29097]: I0312 18:47:43.066597 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c46351d-ae56-4f9f-ba28-1389bc23a289-kube-api-access-2km46" (OuterVolumeSpecName: "kube-api-access-2km46") pod "9c46351d-ae56-4f9f-ba28-1389bc23a289" (UID: "9c46351d-ae56-4f9f-ba28-1389bc23a289"). 
InnerVolumeSpecName "kube-api-access-2km46". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:47:43.067933 master-0 kubenswrapper[29097]: I0312 18:47:43.067891 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c46351d-ae56-4f9f-ba28-1389bc23a289" (UID: "9c46351d-ae56-4f9f-ba28-1389bc23a289"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:43.069737 master-0 kubenswrapper[29097]: I0312 18:47:43.069701 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-config-data" (OuterVolumeSpecName: "config-data") pod "9c46351d-ae56-4f9f-ba28-1389bc23a289" (UID: "9c46351d-ae56-4f9f-ba28-1389bc23a289"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:43.146642 master-0 kubenswrapper[29097]: I0312 18:47:43.146087 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:43.146642 master-0 kubenswrapper[29097]: I0312 18:47:43.146142 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2km46\" (UniqueName: \"kubernetes.io/projected/9c46351d-ae56-4f9f-ba28-1389bc23a289-kube-api-access-2km46\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:43.146642 master-0 kubenswrapper[29097]: I0312 18:47:43.146156 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:43.146642 master-0 kubenswrapper[29097]: I0312 18:47:43.146167 29097 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:43.146642 master-0 kubenswrapper[29097]: I0312 18:47:43.146178 29097 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-fernet-keys\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:43.146642 master-0 kubenswrapper[29097]: I0312 18:47:43.146189 29097 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9c46351d-ae56-4f9f-ba28-1389bc23a289-credential-keys\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:43.563704 master-0 kubenswrapper[29097]: I0312 18:47:43.563631 29097 generic.go:334] "Generic (PLEG): container finished" podID="1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1" containerID="209247b4520b8861a60d0e6248bbe2bda42ad520bd33156189442acf885b604e" exitCode=0 Mar 12 18:47:43.563993 master-0 kubenswrapper[29097]: I0312 18:47:43.563745 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ptnfn" event={"ID":"1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1","Type":"ContainerDied","Data":"209247b4520b8861a60d0e6248bbe2bda42ad520bd33156189442acf885b604e"} Mar 12 18:47:43.566096 master-0 kubenswrapper[29097]: I0312 18:47:43.566033 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t9v2r" event={"ID":"9c46351d-ae56-4f9f-ba28-1389bc23a289","Type":"ContainerDied","Data":"caf76bd8daa29ed5bf3eeaa1750fad9908885630f786a1c0ba6e1f2d69c1817d"} Mar 12 18:47:43.566231 master-0 kubenswrapper[29097]: I0312 18:47:43.566089 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caf76bd8daa29ed5bf3eeaa1750fad9908885630f786a1c0ba6e1f2d69c1817d" Mar 12 18:47:43.566231 master-0 kubenswrapper[29097]: I0312 18:47:43.566122 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-t9v2r" Mar 12 18:47:43.705542 master-0 kubenswrapper[29097]: I0312 18:47:43.702495 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-798795c956-754f2"] Mar 12 18:47:43.705542 master-0 kubenswrapper[29097]: E0312 18:47:43.705155 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c46351d-ae56-4f9f-ba28-1389bc23a289" containerName="keystone-bootstrap" Mar 12 18:47:43.705542 master-0 kubenswrapper[29097]: I0312 18:47:43.705177 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c46351d-ae56-4f9f-ba28-1389bc23a289" containerName="keystone-bootstrap" Mar 12 18:47:43.705542 master-0 kubenswrapper[29097]: I0312 18:47:43.705392 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c46351d-ae56-4f9f-ba28-1389bc23a289" containerName="keystone-bootstrap" Mar 12 18:47:43.713551 master-0 kubenswrapper[29097]: I0312 18:47:43.706151 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.713551 master-0 kubenswrapper[29097]: I0312 18:47:43.711805 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 12 18:47:43.713551 master-0 kubenswrapper[29097]: I0312 18:47:43.712037 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 12 18:47:43.713551 master-0 kubenswrapper[29097]: I0312 18:47:43.712154 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 12 18:47:43.713551 master-0 kubenswrapper[29097]: I0312 18:47:43.712249 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 12 18:47:43.713551 master-0 kubenswrapper[29097]: I0312 18:47:43.712391 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 12 18:47:43.725167 master-0 kubenswrapper[29097]: I0312 18:47:43.723357 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-798795c956-754f2"] Mar 12 18:47:43.775550 master-0 kubenswrapper[29097]: I0312 18:47:43.760279 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-credential-keys\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.775550 master-0 kubenswrapper[29097]: I0312 18:47:43.760357 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5nhb\" (UniqueName: \"kubernetes.io/projected/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-kube-api-access-c5nhb\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.775550 
master-0 kubenswrapper[29097]: I0312 18:47:43.760437 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-scripts\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.775550 master-0 kubenswrapper[29097]: I0312 18:47:43.760452 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-internal-tls-certs\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.775550 master-0 kubenswrapper[29097]: I0312 18:47:43.760472 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-public-tls-certs\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.775550 master-0 kubenswrapper[29097]: I0312 18:47:43.760490 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-combined-ca-bundle\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.775550 master-0 kubenswrapper[29097]: I0312 18:47:43.760560 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-config-data\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " 
pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.775550 master-0 kubenswrapper[29097]: I0312 18:47:43.760653 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-fernet-keys\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.862115 master-0 kubenswrapper[29097]: I0312 18:47:43.861990 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-fernet-keys\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.862115 master-0 kubenswrapper[29097]: I0312 18:47:43.862075 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-credential-keys\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.862115 master-0 kubenswrapper[29097]: I0312 18:47:43.862109 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nhb\" (UniqueName: \"kubernetes.io/projected/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-kube-api-access-c5nhb\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.862408 master-0 kubenswrapper[29097]: I0312 18:47:43.862165 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-scripts\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " 
pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.862408 master-0 kubenswrapper[29097]: I0312 18:47:43.862190 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-internal-tls-certs\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.862408 master-0 kubenswrapper[29097]: I0312 18:47:43.862211 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-public-tls-certs\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.862408 master-0 kubenswrapper[29097]: I0312 18:47:43.862232 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-combined-ca-bundle\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.862408 master-0 kubenswrapper[29097]: I0312 18:47:43.862273 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-config-data\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.865925 master-0 kubenswrapper[29097]: I0312 18:47:43.865756 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-config-data\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " 
pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.867098 master-0 kubenswrapper[29097]: I0312 18:47:43.867064 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-scripts\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.867974 master-0 kubenswrapper[29097]: I0312 18:47:43.867931 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-combined-ca-bundle\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.868906 master-0 kubenswrapper[29097]: I0312 18:47:43.868846 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-public-tls-certs\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.871789 master-0 kubenswrapper[29097]: I0312 18:47:43.871747 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-internal-tls-certs\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.871934 master-0 kubenswrapper[29097]: I0312 18:47:43.871905 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-fernet-keys\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.875661 
master-0 kubenswrapper[29097]: I0312 18:47:43.875530 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-credential-keys\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:43.886481 master-0 kubenswrapper[29097]: I0312 18:47:43.885849 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5nhb\" (UniqueName: \"kubernetes.io/projected/e8872d55-c0fd-45fd-9060-2f29e85e8f5d-kube-api-access-c5nhb\") pod \"keystone-798795c956-754f2\" (UID: \"e8872d55-c0fd-45fd-9060-2f29e85e8f5d\") " pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:44.042695 master-0 kubenswrapper[29097]: I0312 18:47:44.042640 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:44.497395 master-0 kubenswrapper[29097]: I0312 18:47:44.497331 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-798795c956-754f2"] Mar 12 18:47:44.527816 master-0 kubenswrapper[29097]: W0312 18:47:44.527758 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8872d55_c0fd_45fd_9060_2f29e85e8f5d.slice/crio-0aaaf8944ebd6e3a462bbe9a0d36c517cda4380770cc2d753ae6ce339b21047b WatchSource:0}: Error finding container 0aaaf8944ebd6e3a462bbe9a0d36c517cda4380770cc2d753ae6ce339b21047b: Status 404 returned error can't find the container with id 0aaaf8944ebd6e3a462bbe9a0d36c517cda4380770cc2d753ae6ce339b21047b Mar 12 18:47:44.579669 master-0 kubenswrapper[29097]: I0312 18:47:44.579612 29097 generic.go:334] "Generic (PLEG): container finished" podID="d7af9e78-07ea-42c2-8d0a-d73fe46d8d36" containerID="4c19fada3cc7083d5e22f843b7d5c27c91ec9229d266d2163b09fcf6794d3ef8" exitCode=0 Mar 12 18:47:44.579882 master-0 
kubenswrapper[29097]: I0312 18:47:44.579668 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-db-sync-xn4dx" event={"ID":"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36","Type":"ContainerDied","Data":"4c19fada3cc7083d5e22f843b7d5c27c91ec9229d266d2163b09fcf6794d3ef8"} Mar 12 18:47:44.580861 master-0 kubenswrapper[29097]: I0312 18:47:44.580839 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-798795c956-754f2" event={"ID":"e8872d55-c0fd-45fd-9060-2f29e85e8f5d","Type":"ContainerStarted","Data":"0aaaf8944ebd6e3a462bbe9a0d36c517cda4380770cc2d753ae6ce339b21047b"} Mar 12 18:47:45.056934 master-0 kubenswrapper[29097]: I0312 18:47:45.056494 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ptnfn" Mar 12 18:47:45.085432 master-0 kubenswrapper[29097]: I0312 18:47:45.085370 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wvlf\" (UniqueName: \"kubernetes.io/projected/1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1-kube-api-access-4wvlf\") pod \"1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1\" (UID: \"1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1\") " Mar 12 18:47:45.085723 master-0 kubenswrapper[29097]: I0312 18:47:45.085689 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1-combined-ca-bundle\") pod \"1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1\" (UID: \"1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1\") " Mar 12 18:47:45.085863 master-0 kubenswrapper[29097]: I0312 18:47:45.085832 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1-config\") pod \"1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1\" (UID: \"1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1\") " Mar 12 18:47:45.102436 master-0 kubenswrapper[29097]: I0312 18:47:45.098299 
29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1-kube-api-access-4wvlf" (OuterVolumeSpecName: "kube-api-access-4wvlf") pod "1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1" (UID: "1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1"). InnerVolumeSpecName "kube-api-access-4wvlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:47:45.135135 master-0 kubenswrapper[29097]: I0312 18:47:45.135088 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1" (UID: "1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:45.138682 master-0 kubenswrapper[29097]: I0312 18:47:45.138628 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1-config" (OuterVolumeSpecName: "config") pod "1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1" (UID: "1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:45.188059 master-0 kubenswrapper[29097]: I0312 18:47:45.188003 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:45.188059 master-0 kubenswrapper[29097]: I0312 18:47:45.188038 29097 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:45.188059 master-0 kubenswrapper[29097]: I0312 18:47:45.188048 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wvlf\" (UniqueName: \"kubernetes.io/projected/1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1-kube-api-access-4wvlf\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:45.591684 master-0 kubenswrapper[29097]: I0312 18:47:45.591635 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-ptnfn" Mar 12 18:47:45.591684 master-0 kubenswrapper[29097]: I0312 18:47:45.591641 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ptnfn" event={"ID":"1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1","Type":"ContainerDied","Data":"f548747709560e7576bb3c4efba89c954af8d216e65043165428fef1ee6cd2e6"} Mar 12 18:47:45.591948 master-0 kubenswrapper[29097]: I0312 18:47:45.591715 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f548747709560e7576bb3c4efba89c954af8d216e65043165428fef1ee6cd2e6" Mar 12 18:47:45.593805 master-0 kubenswrapper[29097]: I0312 18:47:45.593779 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-798795c956-754f2" event={"ID":"e8872d55-c0fd-45fd-9060-2f29e85e8f5d","Type":"ContainerStarted","Data":"b38f0dd5dad968dd204b072361e795e85d72a5059b97719d0f191ffb8cbe02b6"} Mar 12 18:47:45.593878 master-0 kubenswrapper[29097]: I0312 18:47:45.593822 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-798795c956-754f2" Mar 12 18:47:45.634535 master-0 kubenswrapper[29097]: I0312 18:47:45.628483 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-798795c956-754f2" podStartSLOduration=2.628444828 podStartE2EDuration="2.628444828s" podCreationTimestamp="2026-03-12 18:47:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:47:45.61410018 +0000 UTC m=+1105.168080297" watchObservedRunningTime="2026-03-12 18:47:45.628444828 +0000 UTC m=+1105.182424925" Mar 12 18:47:46.016928 master-0 kubenswrapper[29097]: I0312 18:47:46.016842 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5df4f6b69c-glv8m"] Mar 12 18:47:46.022451 master-0 kubenswrapper[29097]: E0312 18:47:46.018114 29097 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1" containerName="neutron-db-sync" Mar 12 18:47:46.022451 master-0 kubenswrapper[29097]: I0312 18:47:46.018142 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1" containerName="neutron-db-sync" Mar 12 18:47:46.022451 master-0 kubenswrapper[29097]: I0312 18:47:46.018400 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1" containerName="neutron-db-sync" Mar 12 18:47:46.022451 master-0 kubenswrapper[29097]: I0312 18:47:46.019764 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" Mar 12 18:47:46.052264 master-0 kubenswrapper[29097]: I0312 18:47:46.052206 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5df4f6b69c-glv8m"] Mar 12 18:47:46.080072 master-0 kubenswrapper[29097]: I0312 18:47:46.078871 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-ffbf57c88-5pgzn"] Mar 12 18:47:46.080726 master-0 kubenswrapper[29097]: I0312 18:47:46.080620 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-ffbf57c88-5pgzn" Mar 12 18:47:46.085148 master-0 kubenswrapper[29097]: I0312 18:47:46.085113 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 12 18:47:46.085370 master-0 kubenswrapper[29097]: I0312 18:47:46.085355 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 12 18:47:46.085722 master-0 kubenswrapper[29097]: I0312 18:47:46.085684 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 12 18:47:46.094466 master-0 kubenswrapper[29097]: I0312 18:47:46.094190 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ffbf57c88-5pgzn"] Mar 12 18:47:46.119616 master-0 kubenswrapper[29097]: I0312 18:47:46.117407 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp9ms\" (UniqueName: \"kubernetes.io/projected/185eee46-61f6-4cb6-8bed-9d63f1d448cc-kube-api-access-kp9ms\") pod \"neutron-ffbf57c88-5pgzn\" (UID: \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\") " pod="openstack/neutron-ffbf57c88-5pgzn" Mar 12 18:47:46.119616 master-0 kubenswrapper[29097]: I0312 18:47:46.117572 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-dns-svc\") pod \"dnsmasq-dns-5df4f6b69c-glv8m\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" Mar 12 18:47:46.119616 master-0 kubenswrapper[29097]: I0312 18:47:46.117606 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtz2q\" (UniqueName: \"kubernetes.io/projected/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-kube-api-access-vtz2q\") pod \"dnsmasq-dns-5df4f6b69c-glv8m\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " 
pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" Mar 12 18:47:46.119616 master-0 kubenswrapper[29097]: I0312 18:47:46.117644 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-ovndb-tls-certs\") pod \"neutron-ffbf57c88-5pgzn\" (UID: \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\") " pod="openstack/neutron-ffbf57c88-5pgzn" Mar 12 18:47:46.119616 master-0 kubenswrapper[29097]: I0312 18:47:46.117873 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-ovsdbserver-nb\") pod \"dnsmasq-dns-5df4f6b69c-glv8m\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" Mar 12 18:47:46.119616 master-0 kubenswrapper[29097]: I0312 18:47:46.117902 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-dns-swift-storage-0\") pod \"dnsmasq-dns-5df4f6b69c-glv8m\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" Mar 12 18:47:46.119616 master-0 kubenswrapper[29097]: I0312 18:47:46.118050 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-ovsdbserver-sb\") pod \"dnsmasq-dns-5df4f6b69c-glv8m\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" Mar 12 18:47:46.119616 master-0 kubenswrapper[29097]: I0312 18:47:46.118154 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-httpd-config\") pod \"neutron-ffbf57c88-5pgzn\" (UID: \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\") " pod="openstack/neutron-ffbf57c88-5pgzn" Mar 12 18:47:46.119616 master-0 kubenswrapper[29097]: I0312 18:47:46.118289 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-config\") pod \"neutron-ffbf57c88-5pgzn\" (UID: \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\") " pod="openstack/neutron-ffbf57c88-5pgzn" Mar 12 18:47:46.119616 master-0 kubenswrapper[29097]: I0312 18:47:46.118460 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-config\") pod \"dnsmasq-dns-5df4f6b69c-glv8m\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" Mar 12 18:47:46.119616 master-0 kubenswrapper[29097]: I0312 18:47:46.118510 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-combined-ca-bundle\") pod \"neutron-ffbf57c88-5pgzn\" (UID: \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\") " pod="openstack/neutron-ffbf57c88-5pgzn" Mar 12 18:47:46.162419 master-0 kubenswrapper[29097]: I0312 18:47:46.162378 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-fa62f-db-sync-xn4dx" Mar 12 18:47:46.220474 master-0 kubenswrapper[29097]: I0312 18:47:46.220335 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-scripts\") pod \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " Mar 12 18:47:46.220771 master-0 kubenswrapper[29097]: I0312 18:47:46.220474 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-combined-ca-bundle\") pod \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " Mar 12 18:47:46.220771 master-0 kubenswrapper[29097]: I0312 18:47:46.220547 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-config-data\") pod \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " Mar 12 18:47:46.220771 master-0 kubenswrapper[29097]: I0312 18:47:46.220640 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnfhw\" (UniqueName: \"kubernetes.io/projected/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-kube-api-access-xnfhw\") pod \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " Mar 12 18:47:46.220771 master-0 kubenswrapper[29097]: I0312 18:47:46.220698 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-etc-machine-id\") pod \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " Mar 12 18:47:46.220903 master-0 kubenswrapper[29097]: I0312 18:47:46.220818 29097 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-db-sync-config-data\") pod \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\" (UID: \"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36\") " Mar 12 18:47:46.221467 master-0 kubenswrapper[29097]: I0312 18:47:46.221198 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-config\") pod \"neutron-ffbf57c88-5pgzn\" (UID: \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\") " pod="openstack/neutron-ffbf57c88-5pgzn" Mar 12 18:47:46.221467 master-0 kubenswrapper[29097]: I0312 18:47:46.221250 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-config\") pod \"dnsmasq-dns-5df4f6b69c-glv8m\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" Mar 12 18:47:46.221467 master-0 kubenswrapper[29097]: I0312 18:47:46.221279 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-combined-ca-bundle\") pod \"neutron-ffbf57c88-5pgzn\" (UID: \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\") " pod="openstack/neutron-ffbf57c88-5pgzn" Mar 12 18:47:46.221467 master-0 kubenswrapper[29097]: I0312 18:47:46.221308 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp9ms\" (UniqueName: \"kubernetes.io/projected/185eee46-61f6-4cb6-8bed-9d63f1d448cc-kube-api-access-kp9ms\") pod \"neutron-ffbf57c88-5pgzn\" (UID: \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\") " pod="openstack/neutron-ffbf57c88-5pgzn" Mar 12 18:47:46.221467 master-0 kubenswrapper[29097]: I0312 18:47:46.221368 29097 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-dns-svc\") pod \"dnsmasq-dns-5df4f6b69c-glv8m\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" Mar 12 18:47:46.221467 master-0 kubenswrapper[29097]: I0312 18:47:46.221395 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtz2q\" (UniqueName: \"kubernetes.io/projected/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-kube-api-access-vtz2q\") pod \"dnsmasq-dns-5df4f6b69c-glv8m\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" Mar 12 18:47:46.221467 master-0 kubenswrapper[29097]: I0312 18:47:46.221443 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-ovndb-tls-certs\") pod \"neutron-ffbf57c88-5pgzn\" (UID: \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\") " pod="openstack/neutron-ffbf57c88-5pgzn" Mar 12 18:47:46.221467 master-0 kubenswrapper[29097]: I0312 18:47:46.221467 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-ovsdbserver-nb\") pod \"dnsmasq-dns-5df4f6b69c-glv8m\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" Mar 12 18:47:46.221759 master-0 kubenswrapper[29097]: I0312 18:47:46.221490 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-dns-swift-storage-0\") pod \"dnsmasq-dns-5df4f6b69c-glv8m\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" Mar 12 18:47:46.221759 master-0 kubenswrapper[29097]: I0312 18:47:46.221525 29097 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-ovsdbserver-sb\") pod \"dnsmasq-dns-5df4f6b69c-glv8m\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" Mar 12 18:47:46.221759 master-0 kubenswrapper[29097]: I0312 18:47:46.221557 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-httpd-config\") pod \"neutron-ffbf57c88-5pgzn\" (UID: \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\") " pod="openstack/neutron-ffbf57c88-5pgzn" Mar 12 18:47:46.224102 master-0 kubenswrapper[29097]: I0312 18:47:46.223755 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-scripts" (OuterVolumeSpecName: "scripts") pod "d7af9e78-07ea-42c2-8d0a-d73fe46d8d36" (UID: "d7af9e78-07ea-42c2-8d0a-d73fe46d8d36"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:46.224439 master-0 kubenswrapper[29097]: I0312 18:47:46.224297 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d7af9e78-07ea-42c2-8d0a-d73fe46d8d36" (UID: "d7af9e78-07ea-42c2-8d0a-d73fe46d8d36"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:47:46.225692 master-0 kubenswrapper[29097]: I0312 18:47:46.225560 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-config\") pod \"dnsmasq-dns-5df4f6b69c-glv8m\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" Mar 12 18:47:46.226006 master-0 kubenswrapper[29097]: I0312 18:47:46.225959 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-dns-swift-storage-0\") pod \"dnsmasq-dns-5df4f6b69c-glv8m\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" Mar 12 18:47:46.226284 master-0 kubenswrapper[29097]: I0312 18:47:46.226205 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-ovsdbserver-sb\") pod \"dnsmasq-dns-5df4f6b69c-glv8m\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" Mar 12 18:47:46.227454 master-0 kubenswrapper[29097]: I0312 18:47:46.227424 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-httpd-config\") pod \"neutron-ffbf57c88-5pgzn\" (UID: \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\") " pod="openstack/neutron-ffbf57c88-5pgzn" Mar 12 18:47:46.227724 master-0 kubenswrapper[29097]: I0312 18:47:46.227685 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-kube-api-access-xnfhw" (OuterVolumeSpecName: "kube-api-access-xnfhw") pod "d7af9e78-07ea-42c2-8d0a-d73fe46d8d36" (UID: "d7af9e78-07ea-42c2-8d0a-d73fe46d8d36"). 
InnerVolumeSpecName "kube-api-access-xnfhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:47:46.231964 master-0 kubenswrapper[29097]: I0312 18:47:46.231922 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-combined-ca-bundle\") pod \"neutron-ffbf57c88-5pgzn\" (UID: \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\") " pod="openstack/neutron-ffbf57c88-5pgzn" Mar 12 18:47:46.232565 master-0 kubenswrapper[29097]: I0312 18:47:46.232503 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d7af9e78-07ea-42c2-8d0a-d73fe46d8d36" (UID: "d7af9e78-07ea-42c2-8d0a-d73fe46d8d36"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:46.234535 master-0 kubenswrapper[29097]: I0312 18:47:46.234456 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-dns-svc\") pod \"dnsmasq-dns-5df4f6b69c-glv8m\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" Mar 12 18:47:46.236113 master-0 kubenswrapper[29097]: I0312 18:47:46.236076 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-ovndb-tls-certs\") pod \"neutron-ffbf57c88-5pgzn\" (UID: \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\") " pod="openstack/neutron-ffbf57c88-5pgzn" Mar 12 18:47:46.236551 master-0 kubenswrapper[29097]: I0312 18:47:46.236374 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5df4f6b69c-glv8m\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" Mar 12 18:47:46.240624 master-0 kubenswrapper[29097]: I0312 18:47:46.240508 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp9ms\" (UniqueName: \"kubernetes.io/projected/185eee46-61f6-4cb6-8bed-9d63f1d448cc-kube-api-access-kp9ms\") pod \"neutron-ffbf57c88-5pgzn\" (UID: \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\") " pod="openstack/neutron-ffbf57c88-5pgzn" Mar 12 18:47:46.251584 master-0 kubenswrapper[29097]: I0312 18:47:46.247751 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtz2q\" (UniqueName: \"kubernetes.io/projected/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-kube-api-access-vtz2q\") pod \"dnsmasq-dns-5df4f6b69c-glv8m\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" Mar 12 18:47:46.251584 master-0 kubenswrapper[29097]: I0312 18:47:46.248383 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-config\") pod \"neutron-ffbf57c88-5pgzn\" (UID: \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\") " pod="openstack/neutron-ffbf57c88-5pgzn" Mar 12 18:47:46.255102 master-0 kubenswrapper[29097]: I0312 18:47:46.255059 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7af9e78-07ea-42c2-8d0a-d73fe46d8d36" (UID: "d7af9e78-07ea-42c2-8d0a-d73fe46d8d36"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:46.257754 master-0 kubenswrapper[29097]: I0312 18:47:46.257717 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:46.257821 master-0 kubenswrapper[29097]: I0312 18:47:46.257767 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:46.289056 master-0 kubenswrapper[29097]: I0312 18:47:46.289013 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-config-data" (OuterVolumeSpecName: "config-data") pod "d7af9e78-07ea-42c2-8d0a-d73fe46d8d36" (UID: "d7af9e78-07ea-42c2-8d0a-d73fe46d8d36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:46.299021 master-0 kubenswrapper[29097]: I0312 18:47:46.297351 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:46.315276 master-0 kubenswrapper[29097]: I0312 18:47:46.315224 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:46.329532 master-0 kubenswrapper[29097]: I0312 18:47:46.325867 29097 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:46.329532 master-0 kubenswrapper[29097]: I0312 18:47:46.325913 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:46.329532 master-0 kubenswrapper[29097]: I0312 18:47:46.325926 29097 reconciler_common.go:293] "Volume detached 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:46.329532 master-0 kubenswrapper[29097]: I0312 18:47:46.325938 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:46.329532 master-0 kubenswrapper[29097]: I0312 18:47:46.325950 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnfhw\" (UniqueName: \"kubernetes.io/projected/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-kube-api-access-xnfhw\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:46.329532 master-0 kubenswrapper[29097]: I0312 18:47:46.325960 29097 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7af9e78-07ea-42c2-8d0a-d73fe46d8d36-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:46.459962 master-0 kubenswrapper[29097]: I0312 18:47:46.459909 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" Mar 12 18:47:46.476734 master-0 kubenswrapper[29097]: I0312 18:47:46.476613 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-ffbf57c88-5pgzn" Mar 12 18:47:46.671537 master-0 kubenswrapper[29097]: I0312 18:47:46.660179 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-db-sync-xn4dx" event={"ID":"d7af9e78-07ea-42c2-8d0a-d73fe46d8d36","Type":"ContainerDied","Data":"111769e3cb10fdb21c0d1ed8552e1c8c26831b7e7d381203afd7c6d2bc4321c1"} Mar 12 18:47:46.671537 master-0 kubenswrapper[29097]: I0312 18:47:46.660231 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="111769e3cb10fdb21c0d1ed8552e1c8c26831b7e7d381203afd7c6d2bc4321c1" Mar 12 18:47:46.671537 master-0 kubenswrapper[29097]: I0312 18:47:46.660300 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fa62f-db-sync-xn4dx" Mar 12 18:47:46.671537 master-0 kubenswrapper[29097]: I0312 18:47:46.661672 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:46.671537 master-0 kubenswrapper[29097]: I0312 18:47:46.662620 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:47.035606 master-0 kubenswrapper[29097]: I0312 18:47:47.032304 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-fa62f-scheduler-0"] Mar 12 18:47:47.035606 master-0 kubenswrapper[29097]: E0312 18:47:47.033119 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7af9e78-07ea-42c2-8d0a-d73fe46d8d36" containerName="cinder-fa62f-db-sync" Mar 12 18:47:47.035606 master-0 kubenswrapper[29097]: I0312 18:47:47.033204 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7af9e78-07ea-42c2-8d0a-d73fe46d8d36" containerName="cinder-fa62f-db-sync" Mar 12 18:47:47.035606 master-0 kubenswrapper[29097]: I0312 18:47:47.033884 29097 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d7af9e78-07ea-42c2-8d0a-d73fe46d8d36" containerName="cinder-fa62f-db-sync" Mar 12 18:47:47.035606 master-0 kubenswrapper[29097]: I0312 18:47:47.035460 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:47:47.050568 master-0 kubenswrapper[29097]: I0312 18:47:47.047137 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-fa62f-config-data" Mar 12 18:47:47.050568 master-0 kubenswrapper[29097]: I0312 18:47:47.047306 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-fa62f-scheduler-config-data" Mar 12 18:47:47.050568 master-0 kubenswrapper[29097]: I0312 18:47:47.047415 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-fa62f-scripts" Mar 12 18:47:47.072153 master-0 kubenswrapper[29097]: I0312 18:47:47.062744 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncgwf\" (UniqueName: \"kubernetes.io/projected/4da9fa10-24ef-4383-9bc0-1f4872023810-kube-api-access-ncgwf\") pod \"cinder-fa62f-scheduler-0\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:47:47.072153 master-0 kubenswrapper[29097]: I0312 18:47:47.062805 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-config-data\") pod \"cinder-fa62f-scheduler-0\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:47:47.072153 master-0 kubenswrapper[29097]: I0312 18:47:47.062872 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4da9fa10-24ef-4383-9bc0-1f4872023810-etc-machine-id\") pod \"cinder-fa62f-scheduler-0\" (UID: 
\"4da9fa10-24ef-4383-9bc0-1f4872023810\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:47:47.072153 master-0 kubenswrapper[29097]: I0312 18:47:47.062888 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-scripts\") pod \"cinder-fa62f-scheduler-0\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:47:47.072153 master-0 kubenswrapper[29097]: I0312 18:47:47.062947 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-combined-ca-bundle\") pod \"cinder-fa62f-scheduler-0\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:47:47.072153 master-0 kubenswrapper[29097]: I0312 18:47:47.062983 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-config-data-custom\") pod \"cinder-fa62f-scheduler-0\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:47:47.118418 master-0 kubenswrapper[29097]: I0312 18:47:47.117916 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-fa62f-volume-lvm-iscsi-0"] Mar 12 18:47:47.124639 master-0 kubenswrapper[29097]: I0312 18:47:47.119951 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.124639 master-0 kubenswrapper[29097]: I0312 18:47:47.123863 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-fa62f-volume-lvm-iscsi-config-data" Mar 12 18:47:47.143578 master-0 kubenswrapper[29097]: I0312 18:47:47.142403 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fa62f-scheduler-0"] Mar 12 18:47:47.171341 master-0 kubenswrapper[29097]: I0312 18:47:47.171151 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-scripts\") pod \"cinder-fa62f-scheduler-0\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:47:47.171341 master-0 kubenswrapper[29097]: I0312 18:47:47.171216 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-combined-ca-bundle\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.171341 master-0 kubenswrapper[29097]: I0312 18:47:47.171289 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-run\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.171341 master-0 kubenswrapper[29097]: I0312 18:47:47.171336 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-config-data\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: 
\"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.171341 master-0 kubenswrapper[29097]: I0312 18:47:47.171353 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-etc-nvme\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.171696 master-0 kubenswrapper[29097]: I0312 18:47:47.171436 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-combined-ca-bundle\") pod \"cinder-fa62f-scheduler-0\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:47:47.171696 master-0 kubenswrapper[29097]: I0312 18:47:47.171459 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-config-data-custom\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.171696 master-0 kubenswrapper[29097]: I0312 18:47:47.171481 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-var-lib-cinder\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.171696 master-0 kubenswrapper[29097]: I0312 18:47:47.171519 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdfgw\" (UniqueName: 
\"kubernetes.io/projected/c63671d0-70ab-4728-b069-2e44bd47570f-kube-api-access-kdfgw\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.171696 master-0 kubenswrapper[29097]: I0312 18:47:47.171537 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-scripts\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.174602 master-0 kubenswrapper[29097]: I0312 18:47:47.171566 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-lib-modules\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.174720 master-0 kubenswrapper[29097]: I0312 18:47:47.174650 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-config-data-custom\") pod \"cinder-fa62f-scheduler-0\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:47:47.174766 master-0 kubenswrapper[29097]: I0312 18:47:47.174743 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-var-locks-cinder\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.174865 master-0 kubenswrapper[29097]: I0312 18:47:47.174849 29097 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-etc-machine-id\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.175086 master-0 kubenswrapper[29097]: I0312 18:47:47.175066 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-dev\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.175149 master-0 kubenswrapper[29097]: I0312 18:47:47.175134 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncgwf\" (UniqueName: \"kubernetes.io/projected/4da9fa10-24ef-4383-9bc0-1f4872023810-kube-api-access-ncgwf\") pod \"cinder-fa62f-scheduler-0\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:47:47.175713 master-0 kubenswrapper[29097]: I0312 18:47:47.175164 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-var-locks-brick\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.175713 master-0 kubenswrapper[29097]: I0312 18:47:47.175185 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-sys\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.175713 
master-0 kubenswrapper[29097]: I0312 18:47:47.175221 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-config-data\") pod \"cinder-fa62f-scheduler-0\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:47:47.175713 master-0 kubenswrapper[29097]: I0312 18:47:47.175313 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-etc-iscsi\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.175713 master-0 kubenswrapper[29097]: I0312 18:47:47.175362 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4da9fa10-24ef-4383-9bc0-1f4872023810-etc-machine-id\") pod \"cinder-fa62f-scheduler-0\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:47:47.175713 master-0 kubenswrapper[29097]: I0312 18:47:47.175496 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4da9fa10-24ef-4383-9bc0-1f4872023810-etc-machine-id\") pod \"cinder-fa62f-scheduler-0\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:47:47.186235 master-0 kubenswrapper[29097]: I0312 18:47:47.182215 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-combined-ca-bundle\") pod \"cinder-fa62f-scheduler-0\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:47:47.190098 master-0 
kubenswrapper[29097]: I0312 18:47:47.189567 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-config-data-custom\") pod \"cinder-fa62f-scheduler-0\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:47:47.214540 master-0 kubenswrapper[29097]: I0312 18:47:47.213172 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-scripts\") pod \"cinder-fa62f-scheduler-0\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:47:47.252040 master-0 kubenswrapper[29097]: I0312 18:47:47.246252 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncgwf\" (UniqueName: \"kubernetes.io/projected/4da9fa10-24ef-4383-9bc0-1f4872023810-kube-api-access-ncgwf\") pod \"cinder-fa62f-scheduler-0\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:47:47.252040 master-0 kubenswrapper[29097]: I0312 18:47:47.246854 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-config-data\") pod \"cinder-fa62f-scheduler-0\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:47:47.291031 master-0 kubenswrapper[29097]: I0312 18:47:47.288754 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:47:47.291031 master-0 kubenswrapper[29097]: I0312 18:47:47.290836 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5df4f6b69c-glv8m"] Mar 12 18:47:47.295725 master-0 kubenswrapper[29097]: I0312 18:47:47.293304 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-run\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.295725 master-0 kubenswrapper[29097]: I0312 18:47:47.293347 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-config-data\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.295725 master-0 kubenswrapper[29097]: I0312 18:47:47.293366 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-etc-nvme\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.295725 master-0 kubenswrapper[29097]: I0312 18:47:47.293407 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-config-data-custom\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.295725 master-0 kubenswrapper[29097]: I0312 18:47:47.293432 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-var-lib-cinder\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.295725 master-0 kubenswrapper[29097]: I0312 18:47:47.293452 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdfgw\" (UniqueName: \"kubernetes.io/projected/c63671d0-70ab-4728-b069-2e44bd47570f-kube-api-access-kdfgw\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.295725 master-0 kubenswrapper[29097]: I0312 18:47:47.293482 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-scripts\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.295725 master-0 kubenswrapper[29097]: I0312 18:47:47.293505 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-lib-modules\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.295725 master-0 kubenswrapper[29097]: I0312 18:47:47.294858 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-var-locks-cinder\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.295725 master-0 kubenswrapper[29097]: I0312 18:47:47.294927 29097 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-etc-machine-id\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.295725 master-0 kubenswrapper[29097]: I0312 18:47:47.294948 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-dev\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.295725 master-0 kubenswrapper[29097]: I0312 18:47:47.295014 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-var-locks-brick\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.295725 master-0 kubenswrapper[29097]: I0312 18:47:47.295298 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-sys\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.295725 master-0 kubenswrapper[29097]: I0312 18:47:47.295415 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-etc-iscsi\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.295725 master-0 kubenswrapper[29097]: I0312 18:47:47.295475 29097 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-combined-ca-bundle\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.303813 master-0 kubenswrapper[29097]: I0312 18:47:47.297635 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-lib-modules\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.303813 master-0 kubenswrapper[29097]: I0312 18:47:47.297637 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-etc-machine-id\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.303813 master-0 kubenswrapper[29097]: I0312 18:47:47.297669 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-dev\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.303813 master-0 kubenswrapper[29097]: I0312 18:47:47.301571 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-run\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.303813 master-0 kubenswrapper[29097]: I0312 18:47:47.301736 29097 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-etc-nvme\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.303813 master-0 kubenswrapper[29097]: I0312 18:47:47.302507 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-sys\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.303813 master-0 kubenswrapper[29097]: I0312 18:47:47.302599 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-etc-iscsi\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.306396 master-0 kubenswrapper[29097]: I0312 18:47:47.305864 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-var-locks-cinder\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.310829 master-0 kubenswrapper[29097]: I0312 18:47:47.307984 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-var-lib-cinder\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.310829 master-0 kubenswrapper[29097]: I0312 18:47:47.308693 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-combined-ca-bundle\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.312722 master-0 kubenswrapper[29097]: I0312 18:47:47.312640 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-scripts\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.316568 master-0 kubenswrapper[29097]: I0312 18:47:47.316527 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-config-data\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.317341 master-0 kubenswrapper[29097]: I0312 18:47:47.316969 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-config-data-custom\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.324647 master-0 kubenswrapper[29097]: I0312 18:47:47.324608 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-var-locks-brick\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.330175 master-0 kubenswrapper[29097]: I0312 18:47:47.330140 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fa62f-volume-lvm-iscsi-0"] 
Mar 12 18:47:47.330910 master-0 kubenswrapper[29097]: I0312 18:47:47.330894 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdfgw\" (UniqueName: \"kubernetes.io/projected/c63671d0-70ab-4728-b069-2e44bd47570f-kube-api-access-kdfgw\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.351907 master-0 kubenswrapper[29097]: I0312 18:47:47.351852 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5df4f6b69c-glv8m"] Mar 12 18:47:47.407192 master-0 kubenswrapper[29097]: I0312 18:47:47.406620 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-fa62f-backup-0"] Mar 12 18:47:47.419726 master-0 kubenswrapper[29097]: I0312 18:47:47.416361 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.419726 master-0 kubenswrapper[29097]: I0312 18:47:47.418776 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-fa62f-backup-config-data" Mar 12 18:47:47.425986 master-0 kubenswrapper[29097]: I0312 18:47:47.425936 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fa62f-backup-0"] Mar 12 18:47:47.442425 master-0 kubenswrapper[29097]: I0312 18:47:47.442372 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848b9c6b49-l9j7w"] Mar 12 18:47:47.456311 master-0 kubenswrapper[29097]: I0312 18:47:47.456089 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" Mar 12 18:47:47.494704 master-0 kubenswrapper[29097]: I0312 18:47:47.494586 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848b9c6b49-l9j7w"] Mar 12 18:47:47.515802 master-0 kubenswrapper[29097]: I0312 18:47:47.515692 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-fa62f-api-0"] Mar 12 18:47:47.518881 master-0 kubenswrapper[29097]: I0312 18:47:47.518740 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-lib-modules\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.518881 master-0 kubenswrapper[29097]: I0312 18:47:47.518804 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-combined-ca-bundle\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.518881 master-0 kubenswrapper[29097]: I0312 18:47:47.518839 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-etc-machine-id\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.519037 master-0 kubenswrapper[29097]: I0312 18:47:47.518961 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-scripts\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " 
pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.519075 master-0 kubenswrapper[29097]: I0312 18:47:47.519047 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-var-locks-brick\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.519673 master-0 kubenswrapper[29097]: I0312 18:47:47.519154 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-config-data-custom\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.519673 master-0 kubenswrapper[29097]: I0312 18:47:47.519265 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-config\") pod \"dnsmasq-dns-848b9c6b49-l9j7w\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") " pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" Mar 12 18:47:47.519673 master-0 kubenswrapper[29097]: I0312 18:47:47.519303 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-var-locks-cinder\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.519673 master-0 kubenswrapper[29097]: I0312 18:47:47.519345 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lq4d\" (UniqueName: \"kubernetes.io/projected/e7c09427-e3b8-48df-9082-0520d2fc9b23-kube-api-access-2lq4d\") pod 
\"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.519673 master-0 kubenswrapper[29097]: I0312 18:47:47.519401 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-run\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.519673 master-0 kubenswrapper[29097]: I0312 18:47:47.519571 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-dns-swift-storage-0\") pod \"dnsmasq-dns-848b9c6b49-l9j7w\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") " pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" Mar 12 18:47:47.519673 master-0 kubenswrapper[29097]: I0312 18:47:47.519610 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-etc-nvme\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.519673 master-0 kubenswrapper[29097]: I0312 18:47:47.519671 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-974mw\" (UniqueName: \"kubernetes.io/projected/030364f8-b8e4-43b2-9597-5ca376e5f1a6-kube-api-access-974mw\") pod \"dnsmasq-dns-848b9c6b49-l9j7w\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") " pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" Mar 12 18:47:47.519953 master-0 kubenswrapper[29097]: I0312 18:47:47.519702 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-config-data\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.519953 master-0 kubenswrapper[29097]: I0312 18:47:47.519739 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-etc-iscsi\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.519953 master-0 kubenswrapper[29097]: I0312 18:47:47.519768 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-var-lib-cinder\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.519953 master-0 kubenswrapper[29097]: I0312 18:47:47.519838 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-dns-svc\") pod \"dnsmasq-dns-848b9c6b49-l9j7w\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") " pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" Mar 12 18:47:47.519953 master-0 kubenswrapper[29097]: I0312 18:47:47.519867 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-ovsdbserver-nb\") pod \"dnsmasq-dns-848b9c6b49-l9j7w\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") " pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" Mar 12 18:47:47.520133 master-0 kubenswrapper[29097]: I0312 18:47:47.519972 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-sys\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.520133 master-0 kubenswrapper[29097]: I0312 18:47:47.520013 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-ovsdbserver-sb\") pod \"dnsmasq-dns-848b9c6b49-l9j7w\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") " pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" Mar 12 18:47:47.520133 master-0 kubenswrapper[29097]: I0312 18:47:47.520077 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-dev\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.520224 master-0 kubenswrapper[29097]: I0312 18:47:47.520144 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:47.524077 master-0 kubenswrapper[29097]: I0312 18:47:47.523066 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-fa62f-api-config-data" Mar 12 18:47:47.577961 master-0 kubenswrapper[29097]: I0312 18:47:47.577885 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fa62f-api-0"] Mar 12 18:47:47.640268 master-0 kubenswrapper[29097]: I0312 18:47:47.640205 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.647813 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-974mw\" (UniqueName: \"kubernetes.io/projected/030364f8-b8e4-43b2-9597-5ca376e5f1a6-kube-api-access-974mw\") pod \"dnsmasq-dns-848b9c6b49-l9j7w\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") " pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.647885 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-config-data\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.647936 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-var-lib-cinder\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.647956 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-etc-iscsi\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.647991 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-config-data\") pod \"cinder-fa62f-api-0\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " 
pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.648041 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-ovsdbserver-nb\") pod \"dnsmasq-dns-848b9c6b49-l9j7w\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") " pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.648061 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-dns-svc\") pod \"dnsmasq-dns-848b9c6b49-l9j7w\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") " pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.648113 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-config-data-custom\") pod \"cinder-fa62f-api-0\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.648148 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-sys\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.648170 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-ovsdbserver-sb\") pod \"dnsmasq-dns-848b9c6b49-l9j7w\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") " 
pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.648227 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-dev\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.648269 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-combined-ca-bundle\") pod \"cinder-fa62f-api-0\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.648352 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-lib-modules\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.648376 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-combined-ca-bundle\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.648400 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b6ed87b-3b42-4d02-8edd-7d614923f172-logs\") pod \"cinder-fa62f-api-0\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " pod="openstack/cinder-fa62f-api-0" Mar 12 
18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.648421 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-etc-machine-id\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.648440 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-scripts\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.648454 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-var-locks-brick\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.648493 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-config-data-custom\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.648543 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b6ed87b-3b42-4d02-8edd-7d614923f172-etc-machine-id\") pod \"cinder-fa62f-api-0\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: 
I0312 18:47:47.648585 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-config\") pod \"dnsmasq-dns-848b9c6b49-l9j7w\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") " pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.648703 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-scripts\") pod \"cinder-fa62f-api-0\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.648739 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-var-locks-cinder\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.648775 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lq4d\" (UniqueName: \"kubernetes.io/projected/e7c09427-e3b8-48df-9082-0520d2fc9b23-kube-api-access-2lq4d\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.648813 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvmdk\" (UniqueName: \"kubernetes.io/projected/9b6ed87b-3b42-4d02-8edd-7d614923f172-kube-api-access-dvmdk\") pod \"cinder-fa62f-api-0\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 
18:47:47.648841 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-run\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.648958 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-dns-swift-storage-0\") pod \"dnsmasq-dns-848b9c6b49-l9j7w\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") " pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.649365 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-etc-nvme\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.649914 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-etc-nvme\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.650239 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-config\") pod \"dnsmasq-dns-848b9c6b49-l9j7w\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") " pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" Mar 12 18:47:47.650346 master-0 kubenswrapper[29097]: I0312 18:47:47.650242 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-dns-swift-storage-0\") pod \"dnsmasq-dns-848b9c6b49-l9j7w\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") " pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" Mar 12 18:47:47.652972 master-0 kubenswrapper[29097]: I0312 18:47:47.652941 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-var-locks-cinder\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.653431 master-0 kubenswrapper[29097]: I0312 18:47:47.653394 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-run\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.653492 master-0 kubenswrapper[29097]: I0312 18:47:47.653442 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-var-lib-cinder\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.655590 master-0 kubenswrapper[29097]: I0312 18:47:47.654987 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-lib-modules\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.655927 master-0 kubenswrapper[29097]: I0312 18:47:47.655771 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-etc-machine-id\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.661852 master-0 kubenswrapper[29097]: I0312 18:47:47.657457 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-ovsdbserver-nb\") pod \"dnsmasq-dns-848b9c6b49-l9j7w\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") " pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" Mar 12 18:47:47.661852 master-0 kubenswrapper[29097]: I0312 18:47:47.657562 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-etc-iscsi\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.661852 master-0 kubenswrapper[29097]: I0312 18:47:47.658335 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-dns-svc\") pod \"dnsmasq-dns-848b9c6b49-l9j7w\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") " pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" Mar 12 18:47:47.661852 master-0 kubenswrapper[29097]: I0312 18:47:47.658369 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-sys\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.661852 master-0 kubenswrapper[29097]: I0312 18:47:47.659247 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-ovsdbserver-sb\") pod 
\"dnsmasq-dns-848b9c6b49-l9j7w\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") " pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" Mar 12 18:47:47.661852 master-0 kubenswrapper[29097]: I0312 18:47:47.659276 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-dev\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.661852 master-0 kubenswrapper[29097]: I0312 18:47:47.659679 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-var-locks-brick\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.675659 master-0 kubenswrapper[29097]: I0312 18:47:47.675235 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-scripts\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.675911 master-0 kubenswrapper[29097]: I0312 18:47:47.675484 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-config-data-custom\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.684409 master-0 kubenswrapper[29097]: I0312 18:47:47.684349 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" event={"ID":"68a827b1-7f2d-4bce-8374-8ed4bc46b22b","Type":"ContainerStarted","Data":"4bef3ee6bd2cfb5004bdee7fc4210c59c34c64330060bd87e5733dea0c5938a3"} Mar 12 18:47:47.685950 master-0 
kubenswrapper[29097]: I0312 18:47:47.685630 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ffbf57c88-5pgzn" event={"ID":"185eee46-61f6-4cb6-8bed-9d63f1d448cc","Type":"ContainerStarted","Data":"ca69568f14594891a3b272a51be2d6bb51e3ca5a768c2a3c5791d106872ef000"} Mar 12 18:47:47.703336 master-0 kubenswrapper[29097]: I0312 18:47:47.702132 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-config-data\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.703336 master-0 kubenswrapper[29097]: I0312 18:47:47.702353 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ffbf57c88-5pgzn"] Mar 12 18:47:47.750718 master-0 kubenswrapper[29097]: I0312 18:47:47.748308 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-combined-ca-bundle\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.752342 master-0 kubenswrapper[29097]: I0312 18:47:47.751600 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b6ed87b-3b42-4d02-8edd-7d614923f172-etc-machine-id\") pod \"cinder-fa62f-api-0\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:47.752342 master-0 kubenswrapper[29097]: I0312 18:47:47.751649 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-scripts\") pod \"cinder-fa62f-api-0\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:47.752342 
master-0 kubenswrapper[29097]: I0312 18:47:47.751682 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvmdk\" (UniqueName: \"kubernetes.io/projected/9b6ed87b-3b42-4d02-8edd-7d614923f172-kube-api-access-dvmdk\") pod \"cinder-fa62f-api-0\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:47.752342 master-0 kubenswrapper[29097]: I0312 18:47:47.751752 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-config-data\") pod \"cinder-fa62f-api-0\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:47.752342 master-0 kubenswrapper[29097]: I0312 18:47:47.751785 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-config-data-custom\") pod \"cinder-fa62f-api-0\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:47.752342 master-0 kubenswrapper[29097]: I0312 18:47:47.751840 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-combined-ca-bundle\") pod \"cinder-fa62f-api-0\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:47.752342 master-0 kubenswrapper[29097]: I0312 18:47:47.751889 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b6ed87b-3b42-4d02-8edd-7d614923f172-logs\") pod \"cinder-fa62f-api-0\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:47.752342 master-0 kubenswrapper[29097]: I0312 18:47:47.751997 29097 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b6ed87b-3b42-4d02-8edd-7d614923f172-etc-machine-id\") pod \"cinder-fa62f-api-0\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:47.757174 master-0 kubenswrapper[29097]: I0312 18:47:47.756869 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-config-data\") pod \"cinder-fa62f-api-0\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:47.758712 master-0 kubenswrapper[29097]: I0312 18:47:47.758665 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b6ed87b-3b42-4d02-8edd-7d614923f172-logs\") pod \"cinder-fa62f-api-0\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:47.760119 master-0 kubenswrapper[29097]: I0312 18:47:47.760075 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-config-data-custom\") pod \"cinder-fa62f-api-0\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:47.761414 master-0 kubenswrapper[29097]: I0312 18:47:47.761340 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-combined-ca-bundle\") pod \"cinder-fa62f-api-0\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:47.772082 master-0 kubenswrapper[29097]: I0312 18:47:47.770260 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-scripts\") pod 
\"cinder-fa62f-api-0\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:47.986788 master-0 kubenswrapper[29097]: I0312 18:47:47.986734 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-974mw\" (UniqueName: \"kubernetes.io/projected/030364f8-b8e4-43b2-9597-5ca376e5f1a6-kube-api-access-974mw\") pod \"dnsmasq-dns-848b9c6b49-l9j7w\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") " pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" Mar 12 18:47:47.987834 master-0 kubenswrapper[29097]: I0312 18:47:47.987639 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lq4d\" (UniqueName: \"kubernetes.io/projected/e7c09427-e3b8-48df-9082-0520d2fc9b23-kube-api-access-2lq4d\") pod \"cinder-fa62f-backup-0\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:47.990220 master-0 kubenswrapper[29097]: I0312 18:47:47.990106 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fa62f-scheduler-0"] Mar 12 18:47:48.014464 master-0 kubenswrapper[29097]: I0312 18:47:48.014294 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvmdk\" (UniqueName: \"kubernetes.io/projected/9b6ed87b-3b42-4d02-8edd-7d614923f172-kube-api-access-dvmdk\") pod \"cinder-fa62f-api-0\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:48.052670 master-0 kubenswrapper[29097]: I0312 18:47:48.051960 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:48.071528 master-0 kubenswrapper[29097]: I0312 18:47:48.070790 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:48.082853 master-0 kubenswrapper[29097]: I0312 18:47:48.082748 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" Mar 12 18:47:48.113913 master-0 kubenswrapper[29097]: I0312 18:47:48.113772 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:47:48.113913 master-0 kubenswrapper[29097]: I0312 18:47:48.113833 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:47:48.200876 master-0 kubenswrapper[29097]: I0312 18:47:48.200819 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:47:48.201270 master-0 kubenswrapper[29097]: I0312 18:47:48.201045 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:47:48.256935 master-0 kubenswrapper[29097]: I0312 18:47:48.256777 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fa62f-volume-lvm-iscsi-0"] Mar 12 18:47:48.706858 master-0 kubenswrapper[29097]: I0312 18:47:48.706740 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-scheduler-0" event={"ID":"4da9fa10-24ef-4383-9bc0-1f4872023810","Type":"ContainerStarted","Data":"9272b6a0852dd9ab45b30a56570e70b9dbd4d8fb33054eec35d16ab0e1a78c95"} Mar 12 18:47:48.709750 master-0 kubenswrapper[29097]: I0312 18:47:48.709637 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" event={"ID":"68a827b1-7f2d-4bce-8374-8ed4bc46b22b","Type":"ContainerDied","Data":"9a28f145155441166cf4b148c727a4f5d4aa22908620c18afedae6cdc1eea971"} Mar 12 18:47:48.710091 master-0 kubenswrapper[29097]: I0312 18:47:48.710056 29097 generic.go:334] "Generic (PLEG): container finished" podID="68a827b1-7f2d-4bce-8374-8ed4bc46b22b" containerID="9a28f145155441166cf4b148c727a4f5d4aa22908620c18afedae6cdc1eea971" exitCode=0 Mar 12 
18:47:48.715717 master-0 kubenswrapper[29097]: I0312 18:47:48.715614 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ffbf57c88-5pgzn" event={"ID":"185eee46-61f6-4cb6-8bed-9d63f1d448cc","Type":"ContainerStarted","Data":"478103112f5e27d1ea066ae34a0d3308c8a174e07663a9b84a472fc5f0e2f9a2"} Mar 12 18:47:48.715717 master-0 kubenswrapper[29097]: I0312 18:47:48.715690 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ffbf57c88-5pgzn" event={"ID":"185eee46-61f6-4cb6-8bed-9d63f1d448cc","Type":"ContainerStarted","Data":"7a00d292c2fc00c24eed8e5a385b815ce49f8dc2e1a7b9536068c0e3ac017e0c"} Mar 12 18:47:48.716680 master-0 kubenswrapper[29097]: I0312 18:47:48.715970 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-ffbf57c88-5pgzn" Mar 12 18:47:48.717785 master-0 kubenswrapper[29097]: I0312 18:47:48.717627 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" event={"ID":"c63671d0-70ab-4728-b069-2e44bd47570f","Type":"ContainerStarted","Data":"38411be8da2e08a2881f028f8e15f34fde419c11f4775b3b609fcfa853b939c8"} Mar 12 18:47:48.718279 master-0 kubenswrapper[29097]: I0312 18:47:48.718001 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:47:48.718279 master-0 kubenswrapper[29097]: I0312 18:47:48.718042 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:47:48.797067 master-0 kubenswrapper[29097]: I0312 18:47:48.793293 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-ffbf57c88-5pgzn" podStartSLOduration=3.79327331 podStartE2EDuration="3.79327331s" podCreationTimestamp="2026-03-12 18:47:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 
18:47:48.772469681 +0000 UTC m=+1108.326449778" watchObservedRunningTime="2026-03-12 18:47:48.79327331 +0000 UTC m=+1108.347253407" Mar 12 18:47:48.896839 master-0 kubenswrapper[29097]: I0312 18:47:48.896706 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fa62f-backup-0"] Mar 12 18:47:48.924779 master-0 kubenswrapper[29097]: W0312 18:47:48.924721 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7c09427_e3b8_48df_9082_0520d2fc9b23.slice/crio-34d4697af4e495be980cfc94fff7f8f17c9984c3dd647215185a1b0ea0700b08 WatchSource:0}: Error finding container 34d4697af4e495be980cfc94fff7f8f17c9984c3dd647215185a1b0ea0700b08: Status 404 returned error can't find the container with id 34d4697af4e495be980cfc94fff7f8f17c9984c3dd647215185a1b0ea0700b08 Mar 12 18:47:49.365342 master-0 kubenswrapper[29097]: I0312 18:47:49.365257 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fa62f-api-0"] Mar 12 18:47:49.567527 master-0 kubenswrapper[29097]: I0312 18:47:49.567185 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848b9c6b49-l9j7w"] Mar 12 18:47:49.588725 master-0 kubenswrapper[29097]: I0312 18:47:49.588674 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" Mar 12 18:47:49.710595 master-0 kubenswrapper[29097]: I0312 18:47:49.709353 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-ovsdbserver-nb\") pod \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " Mar 12 18:47:49.710595 master-0 kubenswrapper[29097]: I0312 18:47:49.709437 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtz2q\" (UniqueName: \"kubernetes.io/projected/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-kube-api-access-vtz2q\") pod \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " Mar 12 18:47:49.710595 master-0 kubenswrapper[29097]: I0312 18:47:49.709466 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-dns-svc\") pod \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " Mar 12 18:47:49.710595 master-0 kubenswrapper[29097]: I0312 18:47:49.709501 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-ovsdbserver-sb\") pod \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " Mar 12 18:47:49.710595 master-0 kubenswrapper[29097]: I0312 18:47:49.709541 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-dns-swift-storage-0\") pod \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " Mar 12 18:47:49.710595 master-0 kubenswrapper[29097]: I0312 18:47:49.709638 
29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-config\") pod \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\" (UID: \"68a827b1-7f2d-4bce-8374-8ed4bc46b22b\") " Mar 12 18:47:49.724096 master-0 kubenswrapper[29097]: I0312 18:47:49.724047 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-kube-api-access-vtz2q" (OuterVolumeSpecName: "kube-api-access-vtz2q") pod "68a827b1-7f2d-4bce-8374-8ed4bc46b22b" (UID: "68a827b1-7f2d-4bce-8374-8ed4bc46b22b"). InnerVolumeSpecName "kube-api-access-vtz2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:47:49.758025 master-0 kubenswrapper[29097]: I0312 18:47:49.753185 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "68a827b1-7f2d-4bce-8374-8ed4bc46b22b" (UID: "68a827b1-7f2d-4bce-8374-8ed4bc46b22b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:47:49.765099 master-0 kubenswrapper[29097]: I0312 18:47:49.764931 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "68a827b1-7f2d-4bce-8374-8ed4bc46b22b" (UID: "68a827b1-7f2d-4bce-8374-8ed4bc46b22b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:47:49.784569 master-0 kubenswrapper[29097]: I0312 18:47:49.784424 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "68a827b1-7f2d-4bce-8374-8ed4bc46b22b" (UID: "68a827b1-7f2d-4bce-8374-8ed4bc46b22b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:47:49.788588 master-0 kubenswrapper[29097]: I0312 18:47:49.788522 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-config" (OuterVolumeSpecName: "config") pod "68a827b1-7f2d-4bce-8374-8ed4bc46b22b" (UID: "68a827b1-7f2d-4bce-8374-8ed4bc46b22b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:47:49.792732 master-0 kubenswrapper[29097]: I0312 18:47:49.792683 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "68a827b1-7f2d-4bce-8374-8ed4bc46b22b" (UID: "68a827b1-7f2d-4bce-8374-8ed4bc46b22b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:47:49.794653 master-0 kubenswrapper[29097]: I0312 18:47:49.794623 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" Mar 12 18:47:49.794761 master-0 kubenswrapper[29097]: I0312 18:47:49.794723 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5df4f6b69c-glv8m" event={"ID":"68a827b1-7f2d-4bce-8374-8ed4bc46b22b","Type":"ContainerDied","Data":"4bef3ee6bd2cfb5004bdee7fc4210c59c34c64330060bd87e5733dea0c5938a3"} Mar 12 18:47:49.794800 master-0 kubenswrapper[29097]: I0312 18:47:49.794776 29097 scope.go:117] "RemoveContainer" containerID="9a28f145155441166cf4b148c727a4f5d4aa22908620c18afedae6cdc1eea971" Mar 12 18:47:49.811964 master-0 kubenswrapper[29097]: I0312 18:47:49.811500 29097 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:49.811964 master-0 kubenswrapper[29097]: I0312 18:47:49.811545 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtz2q\" (UniqueName: \"kubernetes.io/projected/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-kube-api-access-vtz2q\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:49.811964 master-0 kubenswrapper[29097]: I0312 18:47:49.811556 29097 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:49.811964 master-0 kubenswrapper[29097]: I0312 18:47:49.811564 29097 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:49.811964 master-0 kubenswrapper[29097]: I0312 18:47:49.811573 29097 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-dns-swift-storage-0\") on 
node \"master-0\" DevicePath \"\"" Mar 12 18:47:49.811964 master-0 kubenswrapper[29097]: I0312 18:47:49.811582 29097 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68a827b1-7f2d-4bce-8374-8ed4bc46b22b-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:49.814182 master-0 kubenswrapper[29097]: I0312 18:47:49.814144 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-api-0" event={"ID":"9b6ed87b-3b42-4d02-8edd-7d614923f172","Type":"ContainerStarted","Data":"353c86b380d94a8c4f145823bafe898340f0244cad121851c525eb855abd9c1d"} Mar 12 18:47:49.819580 master-0 kubenswrapper[29097]: I0312 18:47:49.817191 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-backup-0" event={"ID":"e7c09427-e3b8-48df-9082-0520d2fc9b23","Type":"ContainerStarted","Data":"34d4697af4e495be980cfc94fff7f8f17c9984c3dd647215185a1b0ea0700b08"} Mar 12 18:47:49.822217 master-0 kubenswrapper[29097]: I0312 18:47:49.822100 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" event={"ID":"030364f8-b8e4-43b2-9597-5ca376e5f1a6","Type":"ContainerStarted","Data":"36d472a0f3653d05a5cbb507b1824c23da21bc519e01a36c3f944970e976b553"} Mar 12 18:47:49.957963 master-0 kubenswrapper[29097]: I0312 18:47:49.956627 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5df4f6b69c-glv8m"] Mar 12 18:47:49.980694 master-0 kubenswrapper[29097]: I0312 18:47:49.980630 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5df4f6b69c-glv8m"] Mar 12 18:47:50.423078 master-0 kubenswrapper[29097]: I0312 18:47:50.423017 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-fa62f-api-0"] Mar 12 18:47:50.470576 master-0 kubenswrapper[29097]: I0312 18:47:50.470274 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:50.470576 master-0 kubenswrapper[29097]: I0312 18:47:50.470387 29097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 18:47:50.632380 master-0 kubenswrapper[29097]: I0312 18:47:50.629133 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:47:50.837485 master-0 kubenswrapper[29097]: I0312 18:47:50.834156 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68a827b1-7f2d-4bce-8374-8ed4bc46b22b" path="/var/lib/kubelet/pods/68a827b1-7f2d-4bce-8374-8ed4bc46b22b/volumes" Mar 12 18:47:50.840853 master-0 kubenswrapper[29097]: I0312 18:47:50.840736 29097 generic.go:334] "Generic (PLEG): container finished" podID="030364f8-b8e4-43b2-9597-5ca376e5f1a6" containerID="eaf553f5c68e673c3e90222fc6c2b13f76305b342109dd80b6a2578d3f8b9d93" exitCode=0 Mar 12 18:47:50.840853 master-0 kubenswrapper[29097]: I0312 18:47:50.840784 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" event={"ID":"030364f8-b8e4-43b2-9597-5ca376e5f1a6","Type":"ContainerDied","Data":"eaf553f5c68e673c3e90222fc6c2b13f76305b342109dd80b6a2578d3f8b9d93"} Mar 12 18:47:50.857087 master-0 kubenswrapper[29097]: I0312 18:47:50.852482 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-api-0" event={"ID":"9b6ed87b-3b42-4d02-8edd-7d614923f172","Type":"ContainerStarted","Data":"0e16b039dac5210d3086b313b7c3e3de7dd97bd918189d5340825f80cb9896c9"} Mar 12 18:47:50.865059 master-0 kubenswrapper[29097]: I0312 18:47:50.860426 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-scheduler-0" event={"ID":"4da9fa10-24ef-4383-9bc0-1f4872023810","Type":"ContainerStarted","Data":"1d37547257ae9b62c0150d21d8c4df16f776ab4f2e74f9a7368014c223f280b3"} Mar 12 18:47:50.902731 master-0 kubenswrapper[29097]: I0312 18:47:50.899676 29097 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" event={"ID":"c63671d0-70ab-4728-b069-2e44bd47570f","Type":"ContainerStarted","Data":"085c61cb3700432dc93c244c65631c7bd0b9f38f7c1681dc49e764c765a87128"} Mar 12 18:47:50.910820 master-0 kubenswrapper[29097]: I0312 18:47:50.907734 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-backup-0" event={"ID":"e7c09427-e3b8-48df-9082-0520d2fc9b23","Type":"ContainerStarted","Data":"8482d0d8b6612ea6d2febb1b09a99798553f765614e3291bfb4f5587a0357362"} Mar 12 18:47:51.314070 master-0 kubenswrapper[29097]: I0312 18:47:51.314018 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:47:51.314384 master-0 kubenswrapper[29097]: I0312 18:47:51.314131 29097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 18:47:51.318330 master-0 kubenswrapper[29097]: I0312 18:47:51.318288 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:47:51.939990 master-0 kubenswrapper[29097]: I0312 18:47:51.939894 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" event={"ID":"c63671d0-70ab-4728-b069-2e44bd47570f","Type":"ContainerStarted","Data":"81983d927714fe74b062336fc87e248c805034440ee13429e5fb7372d00e802b"} Mar 12 18:47:51.958578 master-0 kubenswrapper[29097]: I0312 18:47:51.958540 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-backup-0" event={"ID":"e7c09427-e3b8-48df-9082-0520d2fc9b23","Type":"ContainerStarted","Data":"8c0b0831b33cc256833646550454b098834b105b836510fce944e693959a7cae"} Mar 12 18:47:51.965149 master-0 kubenswrapper[29097]: I0312 18:47:51.965116 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" 
event={"ID":"030364f8-b8e4-43b2-9597-5ca376e5f1a6","Type":"ContainerStarted","Data":"d182439ec51902c12039297a878281f6f761bf7d7413d6aed470ccfae7ba8632"} Mar 12 18:47:51.965794 master-0 kubenswrapper[29097]: I0312 18:47:51.965767 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" Mar 12 18:47:51.979786 master-0 kubenswrapper[29097]: I0312 18:47:51.979729 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-api-0" event={"ID":"9b6ed87b-3b42-4d02-8edd-7d614923f172","Type":"ContainerStarted","Data":"66c1e55caba9e502b5c09ee1af19482247ba3824eeb43e80be67d9f2f4302b40"} Mar 12 18:47:51.980000 master-0 kubenswrapper[29097]: I0312 18:47:51.979957 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-fa62f-api-0" podUID="9b6ed87b-3b42-4d02-8edd-7d614923f172" containerName="cinder-fa62f-api-log" containerID="cri-o://0e16b039dac5210d3086b313b7c3e3de7dd97bd918189d5340825f80cb9896c9" gracePeriod=30 Mar 12 18:47:51.980361 master-0 kubenswrapper[29097]: I0312 18:47:51.980327 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:51.980436 master-0 kubenswrapper[29097]: I0312 18:47:51.980394 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-fa62f-api-0" podUID="9b6ed87b-3b42-4d02-8edd-7d614923f172" containerName="cinder-api" containerID="cri-o://66c1e55caba9e502b5c09ee1af19482247ba3824eeb43e80be67d9f2f4302b40" gracePeriod=30 Mar 12 18:47:51.989798 master-0 kubenswrapper[29097]: I0312 18:47:51.989764 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-scheduler-0" event={"ID":"4da9fa10-24ef-4383-9bc0-1f4872023810","Type":"ContainerStarted","Data":"72a6c88dbe937bf87ed200b6f065ad5bbcf29b0740482acf8f95dbc467597eba"} Mar 12 18:47:52.283617 master-0 kubenswrapper[29097]: I0312 18:47:52.271049 29097 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" podStartSLOduration=4.4717234470000005 podStartE2EDuration="6.271025179s" podCreationTimestamp="2026-03-12 18:47:46 +0000 UTC" firstStartedPulling="2026-03-12 18:47:48.364771279 +0000 UTC m=+1107.918751376" lastFinishedPulling="2026-03-12 18:47:50.164073011 +0000 UTC m=+1109.718053108" observedRunningTime="2026-03-12 18:47:52.269251925 +0000 UTC m=+1111.823232042" watchObservedRunningTime="2026-03-12 18:47:52.271025179 +0000 UTC m=+1111.825005276" Mar 12 18:47:52.297551 master-0 kubenswrapper[29097]: I0312 18:47:52.295273 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:47:52.388598 master-0 kubenswrapper[29097]: I0312 18:47:52.387234 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-fa62f-scheduler-0" podStartSLOduration=4.8984215330000005 podStartE2EDuration="6.387196828s" podCreationTimestamp="2026-03-12 18:47:46 +0000 UTC" firstStartedPulling="2026-03-12 18:47:48.068947188 +0000 UTC m=+1107.622927285" lastFinishedPulling="2026-03-12 18:47:49.557722483 +0000 UTC m=+1109.111702580" observedRunningTime="2026-03-12 18:47:52.361978768 +0000 UTC m=+1111.915958875" watchObservedRunningTime="2026-03-12 18:47:52.387196828 +0000 UTC m=+1111.941176925" Mar 12 18:47:52.410476 master-0 kubenswrapper[29097]: I0312 18:47:52.409969 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-fa62f-api-0" podStartSLOduration=5.409942265 podStartE2EDuration="5.409942265s" podCreationTimestamp="2026-03-12 18:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:47:52.398663854 +0000 UTC m=+1111.952643951" watchObservedRunningTime="2026-03-12 18:47:52.409942265 +0000 UTC m=+1111.963922362" Mar 12 
18:47:52.445224 master-0 kubenswrapper[29097]: I0312 18:47:52.443130 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-fa62f-backup-0" podStartSLOduration=4.150885112 podStartE2EDuration="5.443112843s" podCreationTimestamp="2026-03-12 18:47:47 +0000 UTC" firstStartedPulling="2026-03-12 18:47:48.928111614 +0000 UTC m=+1108.482091711" lastFinishedPulling="2026-03-12 18:47:50.220339345 +0000 UTC m=+1109.774319442" observedRunningTime="2026-03-12 18:47:52.44101111 +0000 UTC m=+1111.994991207" watchObservedRunningTime="2026-03-12 18:47:52.443112843 +0000 UTC m=+1111.997092950" Mar 12 18:47:52.480422 master-0 kubenswrapper[29097]: I0312 18:47:52.480339 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" podStartSLOduration=5.480322511 podStartE2EDuration="5.480322511s" podCreationTimestamp="2026-03-12 18:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:47:52.479915441 +0000 UTC m=+1112.033895538" watchObservedRunningTime="2026-03-12 18:47:52.480322511 +0000 UTC m=+1112.034302608" Mar 12 18:47:52.648676 master-0 kubenswrapper[29097]: I0312 18:47:52.648634 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:52.976542 master-0 kubenswrapper[29097]: I0312 18:47:52.974900 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8655bff577-lrzbz"] Mar 12 18:47:52.976542 master-0 kubenswrapper[29097]: E0312 18:47:52.975397 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68a827b1-7f2d-4bce-8374-8ed4bc46b22b" containerName="init" Mar 12 18:47:52.976542 master-0 kubenswrapper[29097]: I0312 18:47:52.975410 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="68a827b1-7f2d-4bce-8374-8ed4bc46b22b" containerName="init" Mar 12 
18:47:52.976542 master-0 kubenswrapper[29097]: I0312 18:47:52.975673 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="68a827b1-7f2d-4bce-8374-8ed4bc46b22b" containerName="init" Mar 12 18:47:52.977261 master-0 kubenswrapper[29097]: I0312 18:47:52.976717 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:52.985364 master-0 kubenswrapper[29097]: I0312 18:47:52.983989 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 12 18:47:52.985364 master-0 kubenswrapper[29097]: I0312 18:47:52.984206 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 12 18:47:53.036011 master-0 kubenswrapper[29097]: I0312 18:47:53.034081 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8655bff577-lrzbz"] Mar 12 18:47:53.052900 master-0 kubenswrapper[29097]: I0312 18:47:53.052851 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:53.061580 master-0 kubenswrapper[29097]: I0312 18:47:53.061455 29097 generic.go:334] "Generic (PLEG): container finished" podID="9b6ed87b-3b42-4d02-8edd-7d614923f172" containerID="66c1e55caba9e502b5c09ee1af19482247ba3824eeb43e80be67d9f2f4302b40" exitCode=0 Mar 12 18:47:53.061580 master-0 kubenswrapper[29097]: I0312 18:47:53.061555 29097 generic.go:334] "Generic (PLEG): container finished" podID="9b6ed87b-3b42-4d02-8edd-7d614923f172" containerID="0e16b039dac5210d3086b313b7c3e3de7dd97bd918189d5340825f80cb9896c9" exitCode=143 Mar 12 18:47:53.063631 master-0 kubenswrapper[29097]: I0312 18:47:53.061840 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-api-0" event={"ID":"9b6ed87b-3b42-4d02-8edd-7d614923f172","Type":"ContainerDied","Data":"66c1e55caba9e502b5c09ee1af19482247ba3824eeb43e80be67d9f2f4302b40"} Mar 12 
18:47:53.063631 master-0 kubenswrapper[29097]: I0312 18:47:53.061888 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-api-0" event={"ID":"9b6ed87b-3b42-4d02-8edd-7d614923f172","Type":"ContainerDied","Data":"0e16b039dac5210d3086b313b7c3e3de7dd97bd918189d5340825f80cb9896c9"} Mar 12 18:47:53.068316 master-0 kubenswrapper[29097]: I0312 18:47:53.068288 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef9a9c97-1ce8-42ef-b4de-de87dbf5524a-combined-ca-bundle\") pod \"neutron-8655bff577-lrzbz\" (UID: \"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a\") " pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:53.068594 master-0 kubenswrapper[29097]: I0312 18:47:53.068576 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ef9a9c97-1ce8-42ef-b4de-de87dbf5524a-config\") pod \"neutron-8655bff577-lrzbz\" (UID: \"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a\") " pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:53.068718 master-0 kubenswrapper[29097]: I0312 18:47:53.068704 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef9a9c97-1ce8-42ef-b4de-de87dbf5524a-ovndb-tls-certs\") pod \"neutron-8655bff577-lrzbz\" (UID: \"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a\") " pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:53.068864 master-0 kubenswrapper[29097]: I0312 18:47:53.068830 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef9a9c97-1ce8-42ef-b4de-de87dbf5524a-public-tls-certs\") pod \"neutron-8655bff577-lrzbz\" (UID: \"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a\") " pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:53.068915 master-0 
kubenswrapper[29097]: I0312 18:47:53.068872 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bxb7\" (UniqueName: \"kubernetes.io/projected/ef9a9c97-1ce8-42ef-b4de-de87dbf5524a-kube-api-access-6bxb7\") pod \"neutron-8655bff577-lrzbz\" (UID: \"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a\") " pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:53.069059 master-0 kubenswrapper[29097]: I0312 18:47:53.069010 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef9a9c97-1ce8-42ef-b4de-de87dbf5524a-internal-tls-certs\") pod \"neutron-8655bff577-lrzbz\" (UID: \"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a\") " pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:53.069059 master-0 kubenswrapper[29097]: I0312 18:47:53.069059 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ef9a9c97-1ce8-42ef-b4de-de87dbf5524a-httpd-config\") pod \"neutron-8655bff577-lrzbz\" (UID: \"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a\") " pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:53.171939 master-0 kubenswrapper[29097]: I0312 18:47:53.171541 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef9a9c97-1ce8-42ef-b4de-de87dbf5524a-ovndb-tls-certs\") pod \"neutron-8655bff577-lrzbz\" (UID: \"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a\") " pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:53.172060 master-0 kubenswrapper[29097]: I0312 18:47:53.171966 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef9a9c97-1ce8-42ef-b4de-de87dbf5524a-public-tls-certs\") pod \"neutron-8655bff577-lrzbz\" (UID: \"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a\") " 
pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:53.172060 master-0 kubenswrapper[29097]: I0312 18:47:53.172044 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bxb7\" (UniqueName: \"kubernetes.io/projected/ef9a9c97-1ce8-42ef-b4de-de87dbf5524a-kube-api-access-6bxb7\") pod \"neutron-8655bff577-lrzbz\" (UID: \"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a\") " pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:53.172452 master-0 kubenswrapper[29097]: I0312 18:47:53.172358 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ef9a9c97-1ce8-42ef-b4de-de87dbf5524a-httpd-config\") pod \"neutron-8655bff577-lrzbz\" (UID: \"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a\") " pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:53.172452 master-0 kubenswrapper[29097]: I0312 18:47:53.172389 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef9a9c97-1ce8-42ef-b4de-de87dbf5524a-internal-tls-certs\") pod \"neutron-8655bff577-lrzbz\" (UID: \"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a\") " pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:53.172679 master-0 kubenswrapper[29097]: I0312 18:47:53.172653 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef9a9c97-1ce8-42ef-b4de-de87dbf5524a-combined-ca-bundle\") pod \"neutron-8655bff577-lrzbz\" (UID: \"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a\") " pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:53.172746 master-0 kubenswrapper[29097]: I0312 18:47:53.172697 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ef9a9c97-1ce8-42ef-b4de-de87dbf5524a-config\") pod \"neutron-8655bff577-lrzbz\" (UID: \"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a\") " 
pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:53.179343 master-0 kubenswrapper[29097]: I0312 18:47:53.179298 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef9a9c97-1ce8-42ef-b4de-de87dbf5524a-ovndb-tls-certs\") pod \"neutron-8655bff577-lrzbz\" (UID: \"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a\") " pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:53.181259 master-0 kubenswrapper[29097]: I0312 18:47:53.181199 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef9a9c97-1ce8-42ef-b4de-de87dbf5524a-public-tls-certs\") pod \"neutron-8655bff577-lrzbz\" (UID: \"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a\") " pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:53.187174 master-0 kubenswrapper[29097]: I0312 18:47:53.184991 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ef9a9c97-1ce8-42ef-b4de-de87dbf5524a-config\") pod \"neutron-8655bff577-lrzbz\" (UID: \"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a\") " pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:53.187174 master-0 kubenswrapper[29097]: I0312 18:47:53.186097 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ef9a9c97-1ce8-42ef-b4de-de87dbf5524a-httpd-config\") pod \"neutron-8655bff577-lrzbz\" (UID: \"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a\") " pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:53.188284 master-0 kubenswrapper[29097]: I0312 18:47:53.188254 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef9a9c97-1ce8-42ef-b4de-de87dbf5524a-combined-ca-bundle\") pod \"neutron-8655bff577-lrzbz\" (UID: \"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a\") " pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:53.188350 master-0 
kubenswrapper[29097]: I0312 18:47:53.188294 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef9a9c97-1ce8-42ef-b4de-de87dbf5524a-internal-tls-certs\") pod \"neutron-8655bff577-lrzbz\" (UID: \"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a\") " pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:53.200393 master-0 kubenswrapper[29097]: I0312 18:47:53.200346 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bxb7\" (UniqueName: \"kubernetes.io/projected/ef9a9c97-1ce8-42ef-b4de-de87dbf5524a-kube-api-access-6bxb7\") pod \"neutron-8655bff577-lrzbz\" (UID: \"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a\") " pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:53.297228 master-0 kubenswrapper[29097]: I0312 18:47:53.296934 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:53.361050 master-0 kubenswrapper[29097]: I0312 18:47:53.360981 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:53.381728 master-0 kubenswrapper[29097]: I0312 18:47:53.380493 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b6ed87b-3b42-4d02-8edd-7d614923f172-logs\") pod \"9b6ed87b-3b42-4d02-8edd-7d614923f172\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " Mar 12 18:47:53.381728 master-0 kubenswrapper[29097]: I0312 18:47:53.380670 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-combined-ca-bundle\") pod \"9b6ed87b-3b42-4d02-8edd-7d614923f172\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " Mar 12 18:47:53.381728 master-0 kubenswrapper[29097]: I0312 18:47:53.380745 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b6ed87b-3b42-4d02-8edd-7d614923f172-etc-machine-id\") pod \"9b6ed87b-3b42-4d02-8edd-7d614923f172\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " Mar 12 18:47:53.381728 master-0 kubenswrapper[29097]: I0312 18:47:53.380888 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-config-data\") pod \"9b6ed87b-3b42-4d02-8edd-7d614923f172\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " Mar 12 18:47:53.381728 master-0 kubenswrapper[29097]: I0312 18:47:53.380915 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-scripts\") pod \"9b6ed87b-3b42-4d02-8edd-7d614923f172\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " Mar 12 18:47:53.381728 master-0 kubenswrapper[29097]: I0312 18:47:53.380959 29097 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-dvmdk\" (UniqueName: \"kubernetes.io/projected/9b6ed87b-3b42-4d02-8edd-7d614923f172-kube-api-access-dvmdk\") pod \"9b6ed87b-3b42-4d02-8edd-7d614923f172\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " Mar 12 18:47:53.381728 master-0 kubenswrapper[29097]: I0312 18:47:53.381054 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-config-data-custom\") pod \"9b6ed87b-3b42-4d02-8edd-7d614923f172\" (UID: \"9b6ed87b-3b42-4d02-8edd-7d614923f172\") " Mar 12 18:47:53.388636 master-0 kubenswrapper[29097]: I0312 18:47:53.387021 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b6ed87b-3b42-4d02-8edd-7d614923f172-logs" (OuterVolumeSpecName: "logs") pod "9b6ed87b-3b42-4d02-8edd-7d614923f172" (UID: "9b6ed87b-3b42-4d02-8edd-7d614923f172"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:47:53.388636 master-0 kubenswrapper[29097]: I0312 18:47:53.387890 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b6ed87b-3b42-4d02-8edd-7d614923f172-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9b6ed87b-3b42-4d02-8edd-7d614923f172" (UID: "9b6ed87b-3b42-4d02-8edd-7d614923f172"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:47:53.487735 master-0 kubenswrapper[29097]: I0312 18:47:53.483409 29097 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b6ed87b-3b42-4d02-8edd-7d614923f172-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:53.487735 master-0 kubenswrapper[29097]: I0312 18:47:53.483447 29097 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b6ed87b-3b42-4d02-8edd-7d614923f172-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:53.487735 master-0 kubenswrapper[29097]: I0312 18:47:53.484726 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-scripts" (OuterVolumeSpecName: "scripts") pod "9b6ed87b-3b42-4d02-8edd-7d614923f172" (UID: "9b6ed87b-3b42-4d02-8edd-7d614923f172"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:53.487735 master-0 kubenswrapper[29097]: I0312 18:47:53.484995 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9b6ed87b-3b42-4d02-8edd-7d614923f172" (UID: "9b6ed87b-3b42-4d02-8edd-7d614923f172"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:53.491867 master-0 kubenswrapper[29097]: I0312 18:47:53.488375 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b6ed87b-3b42-4d02-8edd-7d614923f172" (UID: "9b6ed87b-3b42-4d02-8edd-7d614923f172"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:53.491867 master-0 kubenswrapper[29097]: I0312 18:47:53.488767 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b6ed87b-3b42-4d02-8edd-7d614923f172-kube-api-access-dvmdk" (OuterVolumeSpecName: "kube-api-access-dvmdk") pod "9b6ed87b-3b42-4d02-8edd-7d614923f172" (UID: "9b6ed87b-3b42-4d02-8edd-7d614923f172"). InnerVolumeSpecName "kube-api-access-dvmdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:47:53.503731 master-0 kubenswrapper[29097]: I0312 18:47:53.503673 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-config-data" (OuterVolumeSpecName: "config-data") pod "9b6ed87b-3b42-4d02-8edd-7d614923f172" (UID: "9b6ed87b-3b42-4d02-8edd-7d614923f172"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:47:53.602033 master-0 kubenswrapper[29097]: I0312 18:47:53.588020 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:53.602033 master-0 kubenswrapper[29097]: I0312 18:47:53.588061 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:53.602033 master-0 kubenswrapper[29097]: I0312 18:47:53.588073 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvmdk\" (UniqueName: \"kubernetes.io/projected/9b6ed87b-3b42-4d02-8edd-7d614923f172-kube-api-access-dvmdk\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:53.602033 master-0 kubenswrapper[29097]: I0312 18:47:53.588085 29097 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:53.602033 master-0 kubenswrapper[29097]: I0312 18:47:53.588097 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b6ed87b-3b42-4d02-8edd-7d614923f172-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:47:54.019717 master-0 kubenswrapper[29097]: I0312 18:47:54.019666 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8655bff577-lrzbz"] Mar 12 18:47:54.091554 master-0 kubenswrapper[29097]: I0312 18:47:54.091499 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-api-0" event={"ID":"9b6ed87b-3b42-4d02-8edd-7d614923f172","Type":"ContainerDied","Data":"353c86b380d94a8c4f145823bafe898340f0244cad121851c525eb855abd9c1d"} Mar 12 18:47:54.091754 master-0 kubenswrapper[29097]: I0312 18:47:54.091572 29097 scope.go:117] "RemoveContainer" containerID="66c1e55caba9e502b5c09ee1af19482247ba3824eeb43e80be67d9f2f4302b40" Mar 12 18:47:54.091754 master-0 kubenswrapper[29097]: I0312 18:47:54.091623 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.093486 master-0 kubenswrapper[29097]: I0312 18:47:54.093446 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8655bff577-lrzbz" event={"ID":"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a","Type":"ContainerStarted","Data":"c69ef2f3491ff16a93b0ea9dcd8b7f3b923e2d805ceeb7db51c234e0b22dcf3f"} Mar 12 18:47:54.144664 master-0 kubenswrapper[29097]: I0312 18:47:54.144152 29097 scope.go:117] "RemoveContainer" containerID="0e16b039dac5210d3086b313b7c3e3de7dd97bd918189d5340825f80cb9896c9" Mar 12 18:47:54.193549 master-0 kubenswrapper[29097]: I0312 18:47:54.193208 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-fa62f-api-0"] Mar 12 18:47:54.245409 master-0 kubenswrapper[29097]: I0312 18:47:54.245380 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-fa62f-api-0"] Mar 12 18:47:54.254284 master-0 kubenswrapper[29097]: I0312 18:47:54.254245 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-fa62f-api-0"] Mar 12 18:47:54.255118 master-0 kubenswrapper[29097]: E0312 18:47:54.255092 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b6ed87b-3b42-4d02-8edd-7d614923f172" containerName="cinder-api" Mar 12 18:47:54.255238 master-0 kubenswrapper[29097]: I0312 18:47:54.255227 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b6ed87b-3b42-4d02-8edd-7d614923f172" containerName="cinder-api" Mar 12 18:47:54.255327 master-0 kubenswrapper[29097]: E0312 18:47:54.255316 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b6ed87b-3b42-4d02-8edd-7d614923f172" containerName="cinder-fa62f-api-log" Mar 12 18:47:54.255385 master-0 kubenswrapper[29097]: I0312 18:47:54.255376 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b6ed87b-3b42-4d02-8edd-7d614923f172" containerName="cinder-fa62f-api-log" Mar 12 18:47:54.255666 master-0 kubenswrapper[29097]: I0312 
18:47:54.255653 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b6ed87b-3b42-4d02-8edd-7d614923f172" containerName="cinder-fa62f-api-log" Mar 12 18:47:54.255751 master-0 kubenswrapper[29097]: I0312 18:47:54.255741 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b6ed87b-3b42-4d02-8edd-7d614923f172" containerName="cinder-api" Mar 12 18:47:54.259729 master-0 kubenswrapper[29097]: I0312 18:47:54.259703 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.262978 master-0 kubenswrapper[29097]: I0312 18:47:54.262929 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 12 18:47:54.263391 master-0 kubenswrapper[29097]: I0312 18:47:54.263350 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 12 18:47:54.264002 master-0 kubenswrapper[29097]: I0312 18:47:54.263952 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-fa62f-api-config-data" Mar 12 18:47:54.270267 master-0 kubenswrapper[29097]: I0312 18:47:54.270219 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fa62f-api-0"] Mar 12 18:47:54.419876 master-0 kubenswrapper[29097]: I0312 18:47:54.419579 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-combined-ca-bundle\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.419876 master-0 kubenswrapper[29097]: I0312 18:47:54.419626 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-config-data-custom\") pod 
\"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.419876 master-0 kubenswrapper[29097]: I0312 18:47:54.419700 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-config-data\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.419876 master-0 kubenswrapper[29097]: I0312 18:47:54.419859 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt4jc\" (UniqueName: \"kubernetes.io/projected/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-kube-api-access-qt4jc\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.420184 master-0 kubenswrapper[29097]: I0312 18:47:54.419913 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-internal-tls-certs\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.420184 master-0 kubenswrapper[29097]: I0312 18:47:54.419954 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-etc-machine-id\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.420184 master-0 kubenswrapper[29097]: I0312 18:47:54.420098 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-scripts\") 
pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.420184 master-0 kubenswrapper[29097]: I0312 18:47:54.420136 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-public-tls-certs\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.420707 master-0 kubenswrapper[29097]: I0312 18:47:54.420459 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-logs\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.522163 master-0 kubenswrapper[29097]: I0312 18:47:54.522003 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-public-tls-certs\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.522368 master-0 kubenswrapper[29097]: I0312 18:47:54.522165 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-logs\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.522368 master-0 kubenswrapper[29097]: I0312 18:47:54.522206 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-combined-ca-bundle\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " 
pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.522368 master-0 kubenswrapper[29097]: I0312 18:47:54.522234 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-config-data-custom\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.522368 master-0 kubenswrapper[29097]: I0312 18:47:54.522262 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-config-data\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.522368 master-0 kubenswrapper[29097]: I0312 18:47:54.522319 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt4jc\" (UniqueName: \"kubernetes.io/projected/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-kube-api-access-qt4jc\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.522368 master-0 kubenswrapper[29097]: I0312 18:47:54.522351 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-internal-tls-certs\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.522678 master-0 kubenswrapper[29097]: I0312 18:47:54.522381 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-etc-machine-id\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.522678 master-0 
kubenswrapper[29097]: I0312 18:47:54.522454 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-scripts\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.523880 master-0 kubenswrapper[29097]: I0312 18:47:54.523769 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-etc-machine-id\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.524938 master-0 kubenswrapper[29097]: I0312 18:47:54.524865 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-logs\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.527019 master-0 kubenswrapper[29097]: I0312 18:47:54.526898 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-internal-tls-certs\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.528668 master-0 kubenswrapper[29097]: I0312 18:47:54.528349 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-config-data-custom\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.530547 master-0 kubenswrapper[29097]: I0312 18:47:54.529110 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-scripts\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.530547 master-0 kubenswrapper[29097]: I0312 18:47:54.529612 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-combined-ca-bundle\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.530547 master-0 kubenswrapper[29097]: I0312 18:47:54.530479 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-config-data\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.531148 master-0 kubenswrapper[29097]: I0312 18:47:54.530878 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-public-tls-certs\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.542841 master-0 kubenswrapper[29097]: I0312 18:47:54.542765 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt4jc\" (UniqueName: \"kubernetes.io/projected/f34dc271-d884-440c-bb41-6ddf5ca8d2c2-kube-api-access-qt4jc\") pod \"cinder-fa62f-api-0\" (UID: \"f34dc271-d884-440c-bb41-6ddf5ca8d2c2\") " pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.581774 master-0 kubenswrapper[29097]: I0312 18:47:54.581111 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-fa62f-api-0" Mar 12 18:47:54.735539 master-0 kubenswrapper[29097]: I0312 18:47:54.734716 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b6ed87b-3b42-4d02-8edd-7d614923f172" path="/var/lib/kubelet/pods/9b6ed87b-3b42-4d02-8edd-7d614923f172/volumes" Mar 12 18:47:55.111540 master-0 kubenswrapper[29097]: I0312 18:47:55.110908 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8655bff577-lrzbz" event={"ID":"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a","Type":"ContainerStarted","Data":"aa8ca774259455515d1d9716fe15770e016b7dd125b4c72f398f98e50004183a"} Mar 12 18:47:57.145362 master-0 kubenswrapper[29097]: I0312 18:47:57.145302 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8655bff577-lrzbz" event={"ID":"ef9a9c97-1ce8-42ef-b4de-de87dbf5524a","Type":"ContainerStarted","Data":"01f94003cade16593d355e19a66f33c25fb76ac1369d5d473356139fae7c3bca"} Mar 12 18:47:57.145917 master-0 kubenswrapper[29097]: I0312 18:47:57.145455 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:47:57.183020 master-0 kubenswrapper[29097]: I0312 18:47:57.182908 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8655bff577-lrzbz" podStartSLOduration=5.1828885 podStartE2EDuration="5.1828885s" podCreationTimestamp="2026-03-12 18:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:47:57.178117771 +0000 UTC m=+1116.732097878" watchObservedRunningTime="2026-03-12 18:47:57.1828885 +0000 UTC m=+1116.736868597" Mar 12 18:47:57.525411 master-0 kubenswrapper[29097]: I0312 18:47:57.525322 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:47:57.615824 master-0 kubenswrapper[29097]: I0312 
18:47:57.615750 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-fa62f-scheduler-0"] Mar 12 18:47:57.985070 master-0 kubenswrapper[29097]: I0312 18:47:57.985008 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:47:58.071109 master-0 kubenswrapper[29097]: I0312 18:47:58.068353 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-fa62f-volume-lvm-iscsi-0"] Mar 12 18:47:58.108710 master-0 kubenswrapper[29097]: I0312 18:47:58.108659 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" Mar 12 18:47:58.192212 master-0 kubenswrapper[29097]: I0312 18:47:58.177680 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" podUID="c63671d0-70ab-4728-b069-2e44bd47570f" containerName="cinder-volume" containerID="cri-o://085c61cb3700432dc93c244c65631c7bd0b9f38f7c1681dc49e764c765a87128" gracePeriod=30 Mar 12 18:47:58.192212 master-0 kubenswrapper[29097]: I0312 18:47:58.179200 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-fa62f-scheduler-0" podUID="4da9fa10-24ef-4383-9bc0-1f4872023810" containerName="cinder-scheduler" containerID="cri-o://1d37547257ae9b62c0150d21d8c4df16f776ab4f2e74f9a7368014c223f280b3" gracePeriod=30 Mar 12 18:47:58.192212 master-0 kubenswrapper[29097]: I0312 18:47:58.179579 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" podUID="c63671d0-70ab-4728-b069-2e44bd47570f" containerName="probe" containerID="cri-o://81983d927714fe74b062336fc87e248c805034440ee13429e5fb7372d00e802b" gracePeriod=30 Mar 12 18:47:58.192212 master-0 kubenswrapper[29097]: I0312 18:47:58.179644 29097 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-fa62f-scheduler-0" podUID="4da9fa10-24ef-4383-9bc0-1f4872023810" containerName="probe" containerID="cri-o://72a6c88dbe937bf87ed200b6f065ad5bbcf29b0740482acf8f95dbc467597eba" gracePeriod=30 Mar 12 18:47:58.226898 master-0 kubenswrapper[29097]: I0312 18:47:58.222224 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56c5578c7c-zjbch"] Mar 12 18:47:58.226898 master-0 kubenswrapper[29097]: I0312 18:47:58.222464 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56c5578c7c-zjbch" podUID="73e3cd3a-c873-4b0f-870d-26ba00b0a910" containerName="dnsmasq-dns" containerID="cri-o://3a9f18cb4fb84893628b85c0e0c2f3e983b38debc61be907f1207c28a89da2c3" gracePeriod=10 Mar 12 18:47:58.414546 master-0 kubenswrapper[29097]: I0312 18:47:58.413458 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-fa62f-backup-0" Mar 12 18:47:58.488650 master-0 kubenswrapper[29097]: I0312 18:47:58.486494 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-fa62f-backup-0"] Mar 12 18:47:59.194154 master-0 kubenswrapper[29097]: I0312 18:47:59.193822 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-fa62f-backup-0" podUID="e7c09427-e3b8-48df-9082-0520d2fc9b23" containerName="cinder-backup" containerID="cri-o://8482d0d8b6612ea6d2febb1b09a99798553f765614e3291bfb4f5587a0357362" gracePeriod=30 Mar 12 18:47:59.194154 master-0 kubenswrapper[29097]: I0312 18:47:59.193875 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-fa62f-backup-0" podUID="e7c09427-e3b8-48df-9082-0520d2fc9b23" containerName="probe" containerID="cri-o://8c0b0831b33cc256833646550454b098834b105b836510fce944e693959a7cae" gracePeriod=30 Mar 12 18:48:00.210636 master-0 kubenswrapper[29097]: I0312 18:48:00.205483 29097 generic.go:334] "Generic (PLEG): container finished" 
podID="c63671d0-70ab-4728-b069-2e44bd47570f" containerID="81983d927714fe74b062336fc87e248c805034440ee13429e5fb7372d00e802b" exitCode=0 Mar 12 18:48:00.210636 master-0 kubenswrapper[29097]: I0312 18:48:00.205527 29097 generic.go:334] "Generic (PLEG): container finished" podID="c63671d0-70ab-4728-b069-2e44bd47570f" containerID="085c61cb3700432dc93c244c65631c7bd0b9f38f7c1681dc49e764c765a87128" exitCode=0 Mar 12 18:48:00.210636 master-0 kubenswrapper[29097]: I0312 18:48:00.205567 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" event={"ID":"c63671d0-70ab-4728-b069-2e44bd47570f","Type":"ContainerDied","Data":"81983d927714fe74b062336fc87e248c805034440ee13429e5fb7372d00e802b"} Mar 12 18:48:00.210636 master-0 kubenswrapper[29097]: I0312 18:48:00.205593 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" event={"ID":"c63671d0-70ab-4728-b069-2e44bd47570f","Type":"ContainerDied","Data":"085c61cb3700432dc93c244c65631c7bd0b9f38f7c1681dc49e764c765a87128"} Mar 12 18:48:00.210636 master-0 kubenswrapper[29097]: I0312 18:48:00.207232 29097 generic.go:334] "Generic (PLEG): container finished" podID="e7c09427-e3b8-48df-9082-0520d2fc9b23" containerID="8c0b0831b33cc256833646550454b098834b105b836510fce944e693959a7cae" exitCode=0 Mar 12 18:48:00.210636 master-0 kubenswrapper[29097]: I0312 18:48:00.207268 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-backup-0" event={"ID":"e7c09427-e3b8-48df-9082-0520d2fc9b23","Type":"ContainerDied","Data":"8c0b0831b33cc256833646550454b098834b105b836510fce944e693959a7cae"} Mar 12 18:48:00.210636 master-0 kubenswrapper[29097]: I0312 18:48:00.208874 29097 generic.go:334] "Generic (PLEG): container finished" podID="73e3cd3a-c873-4b0f-870d-26ba00b0a910" containerID="3a9f18cb4fb84893628b85c0e0c2f3e983b38debc61be907f1207c28a89da2c3" exitCode=0 Mar 12 18:48:00.210636 master-0 kubenswrapper[29097]: I0312 18:48:00.208909 
29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c5578c7c-zjbch" event={"ID":"73e3cd3a-c873-4b0f-870d-26ba00b0a910","Type":"ContainerDied","Data":"3a9f18cb4fb84893628b85c0e0c2f3e983b38debc61be907f1207c28a89da2c3"} Mar 12 18:48:00.210636 master-0 kubenswrapper[29097]: I0312 18:48:00.208924 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c5578c7c-zjbch" event={"ID":"73e3cd3a-c873-4b0f-870d-26ba00b0a910","Type":"ContainerDied","Data":"25a2631cc76c36bd5968deb2f8daac913bc3fe80841d42f526f5b96f4948900d"} Mar 12 18:48:00.210636 master-0 kubenswrapper[29097]: I0312 18:48:00.208935 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25a2631cc76c36bd5968deb2f8daac913bc3fe80841d42f526f5b96f4948900d" Mar 12 18:48:00.210636 master-0 kubenswrapper[29097]: I0312 18:48:00.209975 29097 generic.go:334] "Generic (PLEG): container finished" podID="64b3a2fa-455e-45a6-a3b4-9763b68a8faa" containerID="fe86c2aa7b8a77d0ad7cdf55342dfe2d6ce685fe7ceae63e515a2c24b96ca771" exitCode=0 Mar 12 18:48:00.210636 master-0 kubenswrapper[29097]: I0312 18:48:00.210009 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-mzfh7" event={"ID":"64b3a2fa-455e-45a6-a3b4-9763b68a8faa","Type":"ContainerDied","Data":"fe86c2aa7b8a77d0ad7cdf55342dfe2d6ce685fe7ceae63e515a2c24b96ca771"} Mar 12 18:48:00.215076 master-0 kubenswrapper[29097]: I0312 18:48:00.214967 29097 generic.go:334] "Generic (PLEG): container finished" podID="4da9fa10-24ef-4383-9bc0-1f4872023810" containerID="72a6c88dbe937bf87ed200b6f065ad5bbcf29b0740482acf8f95dbc467597eba" exitCode=0 Mar 12 18:48:00.215076 master-0 kubenswrapper[29097]: I0312 18:48:00.215028 29097 generic.go:334] "Generic (PLEG): container finished" podID="4da9fa10-24ef-4383-9bc0-1f4872023810" containerID="1d37547257ae9b62c0150d21d8c4df16f776ab4f2e74f9a7368014c223f280b3" exitCode=0 Mar 12 18:48:00.215076 master-0 kubenswrapper[29097]: I0312 
18:48:00.215010 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-scheduler-0" event={"ID":"4da9fa10-24ef-4383-9bc0-1f4872023810","Type":"ContainerDied","Data":"72a6c88dbe937bf87ed200b6f065ad5bbcf29b0740482acf8f95dbc467597eba"} Mar 12 18:48:00.215196 master-0 kubenswrapper[29097]: I0312 18:48:00.215093 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-scheduler-0" event={"ID":"4da9fa10-24ef-4383-9bc0-1f4872023810","Type":"ContainerDied","Data":"1d37547257ae9b62c0150d21d8c4df16f776ab4f2e74f9a7368014c223f280b3"} Mar 12 18:48:00.386920 master-0 kubenswrapper[29097]: I0312 18:48:00.382924 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fa62f-api-0"] Mar 12 18:48:00.495797 master-0 kubenswrapper[29097]: E0312 18:48:00.495755 29097 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64b3a2fa_455e_45a6_a3b4_9763b68a8faa.slice/crio-conmon-fe86c2aa7b8a77d0ad7cdf55342dfe2d6ce685fe7ceae63e515a2c24b96ca771.scope\": RecentStats: unable to find data in memory cache]" Mar 12 18:48:00.507335 master-0 kubenswrapper[29097]: I0312 18:48:00.506999 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c5578c7c-zjbch" Mar 12 18:48:00.526870 master-0 kubenswrapper[29097]: I0312 18:48:00.526778 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:00.533759 master-0 kubenswrapper[29097]: I0312 18:48:00.532216 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-dns-swift-storage-0\") pod \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " Mar 12 18:48:00.533759 master-0 kubenswrapper[29097]: I0312 18:48:00.532276 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-ovsdbserver-nb\") pod \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " Mar 12 18:48:00.533759 master-0 kubenswrapper[29097]: I0312 18:48:00.532495 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-dns-svc\") pod \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " Mar 12 18:48:00.533759 master-0 kubenswrapper[29097]: I0312 18:48:00.532530 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw4gh\" (UniqueName: \"kubernetes.io/projected/73e3cd3a-c873-4b0f-870d-26ba00b0a910-kube-api-access-dw4gh\") pod \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " Mar 12 18:48:00.533759 master-0 kubenswrapper[29097]: I0312 18:48:00.532567 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-config\") pod \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " Mar 12 18:48:00.533759 master-0 kubenswrapper[29097]: I0312 18:48:00.532592 29097 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-ovsdbserver-sb\") pod \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\" (UID: \"73e3cd3a-c873-4b0f-870d-26ba00b0a910\") " Mar 12 18:48:00.585667 master-0 kubenswrapper[29097]: I0312 18:48:00.583995 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:00.609552 master-0 kubenswrapper[29097]: I0312 18:48:00.609445 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73e3cd3a-c873-4b0f-870d-26ba00b0a910-kube-api-access-dw4gh" (OuterVolumeSpecName: "kube-api-access-dw4gh") pod "73e3cd3a-c873-4b0f-870d-26ba00b0a910" (UID: "73e3cd3a-c873-4b0f-870d-26ba00b0a910"). InnerVolumeSpecName "kube-api-access-dw4gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:48:00.637152 master-0 kubenswrapper[29097]: I0312 18:48:00.637104 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-combined-ca-bundle\") pod \"c63671d0-70ab-4728-b069-2e44bd47570f\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " Mar 12 18:48:00.637152 master-0 kubenswrapper[29097]: I0312 18:48:00.637154 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-lib-modules\") pod \"c63671d0-70ab-4728-b069-2e44bd47570f\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " Mar 12 18:48:00.637442 master-0 kubenswrapper[29097]: I0312 18:48:00.637173 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-sys\") pod 
\"c63671d0-70ab-4728-b069-2e44bd47570f\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " Mar 12 18:48:00.637442 master-0 kubenswrapper[29097]: I0312 18:48:00.637192 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-etc-machine-id\") pod \"c63671d0-70ab-4728-b069-2e44bd47570f\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " Mar 12 18:48:00.637442 master-0 kubenswrapper[29097]: I0312 18:48:00.637228 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-var-locks-cinder\") pod \"c63671d0-70ab-4728-b069-2e44bd47570f\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " Mar 12 18:48:00.637442 master-0 kubenswrapper[29097]: I0312 18:48:00.637258 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-scripts\") pod \"4da9fa10-24ef-4383-9bc0-1f4872023810\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " Mar 12 18:48:00.637442 master-0 kubenswrapper[29097]: I0312 18:48:00.637287 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-etc-iscsi\") pod \"c63671d0-70ab-4728-b069-2e44bd47570f\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " Mar 12 18:48:00.637442 master-0 kubenswrapper[29097]: I0312 18:48:00.637385 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-config-data-custom\") pod \"4da9fa10-24ef-4383-9bc0-1f4872023810\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " Mar 12 18:48:00.637442 master-0 kubenswrapper[29097]: I0312 18:48:00.637439 29097 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-etc-nvme\") pod \"c63671d0-70ab-4728-b069-2e44bd47570f\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " Mar 12 18:48:00.637728 master-0 kubenswrapper[29097]: I0312 18:48:00.637441 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-sys" (OuterVolumeSpecName: "sys") pod "c63671d0-70ab-4728-b069-2e44bd47570f" (UID: "c63671d0-70ab-4728-b069-2e44bd47570f"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:48:00.637728 master-0 kubenswrapper[29097]: I0312 18:48:00.637494 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-config-data-custom\") pod \"c63671d0-70ab-4728-b069-2e44bd47570f\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " Mar 12 18:48:00.637728 master-0 kubenswrapper[29097]: I0312 18:48:00.637546 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncgwf\" (UniqueName: \"kubernetes.io/projected/4da9fa10-24ef-4383-9bc0-1f4872023810-kube-api-access-ncgwf\") pod \"4da9fa10-24ef-4383-9bc0-1f4872023810\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " Mar 12 18:48:00.637728 master-0 kubenswrapper[29097]: I0312 18:48:00.637603 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4da9fa10-24ef-4383-9bc0-1f4872023810-etc-machine-id\") pod \"4da9fa10-24ef-4383-9bc0-1f4872023810\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " Mar 12 18:48:00.637728 master-0 kubenswrapper[29097]: I0312 18:48:00.637639 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" 
(UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-var-lib-cinder\") pod \"c63671d0-70ab-4728-b069-2e44bd47570f\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " Mar 12 18:48:00.637728 master-0 kubenswrapper[29097]: I0312 18:48:00.637681 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-dev\") pod \"c63671d0-70ab-4728-b069-2e44bd47570f\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " Mar 12 18:48:00.637728 master-0 kubenswrapper[29097]: I0312 18:48:00.637719 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-config-data\") pod \"c63671d0-70ab-4728-b069-2e44bd47570f\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " Mar 12 18:48:00.637926 master-0 kubenswrapper[29097]: I0312 18:48:00.637744 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-var-locks-brick\") pod \"c63671d0-70ab-4728-b069-2e44bd47570f\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " Mar 12 18:48:00.637926 master-0 kubenswrapper[29097]: I0312 18:48:00.637766 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-combined-ca-bundle\") pod \"4da9fa10-24ef-4383-9bc0-1f4872023810\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " Mar 12 18:48:00.637926 master-0 kubenswrapper[29097]: I0312 18:48:00.637780 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-run\") pod \"c63671d0-70ab-4728-b069-2e44bd47570f\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " Mar 12 
18:48:00.637926 master-0 kubenswrapper[29097]: I0312 18:48:00.637820 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-config-data\") pod \"4da9fa10-24ef-4383-9bc0-1f4872023810\" (UID: \"4da9fa10-24ef-4383-9bc0-1f4872023810\") " Mar 12 18:48:00.637926 master-0 kubenswrapper[29097]: I0312 18:48:00.637839 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdfgw\" (UniqueName: \"kubernetes.io/projected/c63671d0-70ab-4728-b069-2e44bd47570f-kube-api-access-kdfgw\") pod \"c63671d0-70ab-4728-b069-2e44bd47570f\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " Mar 12 18:48:00.637926 master-0 kubenswrapper[29097]: I0312 18:48:00.637858 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-scripts\") pod \"c63671d0-70ab-4728-b069-2e44bd47570f\" (UID: \"c63671d0-70ab-4728-b069-2e44bd47570f\") " Mar 12 18:48:00.638301 master-0 kubenswrapper[29097]: I0312 18:48:00.638279 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw4gh\" (UniqueName: \"kubernetes.io/projected/73e3cd3a-c873-4b0f-870d-26ba00b0a910-kube-api-access-dw4gh\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.638575 master-0 kubenswrapper[29097]: I0312 18:48:00.638550 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "c63671d0-70ab-4728-b069-2e44bd47570f" (UID: "c63671d0-70ab-4728-b069-2e44bd47570f"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:48:00.638634 master-0 kubenswrapper[29097]: I0312 18:48:00.638592 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da9fa10-24ef-4383-9bc0-1f4872023810-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4da9fa10-24ef-4383-9bc0-1f4872023810" (UID: "4da9fa10-24ef-4383-9bc0-1f4872023810"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:48:00.638672 master-0 kubenswrapper[29097]: I0312 18:48:00.638640 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "c63671d0-70ab-4728-b069-2e44bd47570f" (UID: "c63671d0-70ab-4728-b069-2e44bd47570f"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:48:00.638672 master-0 kubenswrapper[29097]: I0312 18:48:00.638662 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-dev" (OuterVolumeSpecName: "dev") pod "c63671d0-70ab-4728-b069-2e44bd47570f" (UID: "c63671d0-70ab-4728-b069-2e44bd47570f"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:48:00.640753 master-0 kubenswrapper[29097]: I0312 18:48:00.640318 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "c63671d0-70ab-4728-b069-2e44bd47570f" (UID: "c63671d0-70ab-4728-b069-2e44bd47570f"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:48:00.640925 master-0 kubenswrapper[29097]: I0312 18:48:00.640907 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c63671d0-70ab-4728-b069-2e44bd47570f" (UID: "c63671d0-70ab-4728-b069-2e44bd47570f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:48:00.641033 master-0 kubenswrapper[29097]: I0312 18:48:00.641018 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "c63671d0-70ab-4728-b069-2e44bd47570f" (UID: "c63671d0-70ab-4728-b069-2e44bd47570f"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:48:00.649498 master-0 kubenswrapper[29097]: I0312 18:48:00.649453 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-run" (OuterVolumeSpecName: "run") pod "c63671d0-70ab-4728-b069-2e44bd47570f" (UID: "c63671d0-70ab-4728-b069-2e44bd47570f"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:48:00.649727 master-0 kubenswrapper[29097]: I0312 18:48:00.649603 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4da9fa10-24ef-4383-9bc0-1f4872023810-kube-api-access-ncgwf" (OuterVolumeSpecName: "kube-api-access-ncgwf") pod "4da9fa10-24ef-4383-9bc0-1f4872023810" (UID: "4da9fa10-24ef-4383-9bc0-1f4872023810"). InnerVolumeSpecName "kube-api-access-ncgwf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:48:00.649727 master-0 kubenswrapper[29097]: I0312 18:48:00.649702 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c63671d0-70ab-4728-b069-2e44bd47570f" (UID: "c63671d0-70ab-4728-b069-2e44bd47570f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:48:00.649837 master-0 kubenswrapper[29097]: I0312 18:48:00.649736 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "c63671d0-70ab-4728-b069-2e44bd47570f" (UID: "c63671d0-70ab-4728-b069-2e44bd47570f"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 18:48:00.650601 master-0 kubenswrapper[29097]: I0312 18:48:00.650573 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-scripts" (OuterVolumeSpecName: "scripts") pod "4da9fa10-24ef-4383-9bc0-1f4872023810" (UID: "4da9fa10-24ef-4383-9bc0-1f4872023810"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:48:00.652693 master-0 kubenswrapper[29097]: I0312 18:48:00.652674 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-scripts" (OuterVolumeSpecName: "scripts") pod "c63671d0-70ab-4728-b069-2e44bd47570f" (UID: "c63671d0-70ab-4728-b069-2e44bd47570f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:48:00.684857 master-0 kubenswrapper[29097]: I0312 18:48:00.684438 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c63671d0-70ab-4728-b069-2e44bd47570f-kube-api-access-kdfgw" (OuterVolumeSpecName: "kube-api-access-kdfgw") pod "c63671d0-70ab-4728-b069-2e44bd47570f" (UID: "c63671d0-70ab-4728-b069-2e44bd47570f"). InnerVolumeSpecName "kube-api-access-kdfgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:48:00.705503 master-0 kubenswrapper[29097]: I0312 18:48:00.705370 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c63671d0-70ab-4728-b069-2e44bd47570f" (UID: "c63671d0-70ab-4728-b069-2e44bd47570f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:48:00.710875 master-0 kubenswrapper[29097]: I0312 18:48:00.710784 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4da9fa10-24ef-4383-9bc0-1f4872023810" (UID: "4da9fa10-24ef-4383-9bc0-1f4872023810"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:48:00.749576 master-0 kubenswrapper[29097]: I0312 18:48:00.749042 29097 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.749576 master-0 kubenswrapper[29097]: I0312 18:48:00.749080 29097 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-run\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.749576 master-0 kubenswrapper[29097]: I0312 18:48:00.749091 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdfgw\" (UniqueName: \"kubernetes.io/projected/c63671d0-70ab-4728-b069-2e44bd47570f-kube-api-access-kdfgw\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.749576 master-0 kubenswrapper[29097]: I0312 18:48:00.749267 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.749576 master-0 kubenswrapper[29097]: I0312 18:48:00.749285 29097 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-lib-modules\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.749576 master-0 kubenswrapper[29097]: I0312 18:48:00.749295 29097 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-sys\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.750558 master-0 kubenswrapper[29097]: I0312 18:48:00.749594 29097 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-etc-machine-id\") on node \"master-0\" 
DevicePath \"\"" Mar 12 18:48:00.750558 master-0 kubenswrapper[29097]: I0312 18:48:00.749608 29097 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.750785 master-0 kubenswrapper[29097]: I0312 18:48:00.750766 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.750785 master-0 kubenswrapper[29097]: I0312 18:48:00.750786 29097 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.750862 master-0 kubenswrapper[29097]: I0312 18:48:00.750799 29097 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.750862 master-0 kubenswrapper[29097]: I0312 18:48:00.750812 29097 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-etc-nvme\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.750862 master-0 kubenswrapper[29097]: I0312 18:48:00.750821 29097 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.750862 master-0 kubenswrapper[29097]: I0312 18:48:00.750830 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncgwf\" (UniqueName: \"kubernetes.io/projected/4da9fa10-24ef-4383-9bc0-1f4872023810-kube-api-access-ncgwf\") on node \"master-0\" 
DevicePath \"\"" Mar 12 18:48:00.750862 master-0 kubenswrapper[29097]: I0312 18:48:00.750840 29097 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4da9fa10-24ef-4383-9bc0-1f4872023810-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.750862 master-0 kubenswrapper[29097]: I0312 18:48:00.750849 29097 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.750862 master-0 kubenswrapper[29097]: I0312 18:48:00.750859 29097 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c63671d0-70ab-4728-b069-2e44bd47570f-dev\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.836998 master-0 kubenswrapper[29097]: I0312 18:48:00.836270 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-config" (OuterVolumeSpecName: "config") pod "73e3cd3a-c873-4b0f-870d-26ba00b0a910" (UID: "73e3cd3a-c873-4b0f-870d-26ba00b0a910"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:48:00.854327 master-0 kubenswrapper[29097]: I0312 18:48:00.853862 29097 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.861301 master-0 kubenswrapper[29097]: I0312 18:48:00.861248 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4da9fa10-24ef-4383-9bc0-1f4872023810" (UID: "4da9fa10-24ef-4383-9bc0-1f4872023810"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:48:00.865123 master-0 kubenswrapper[29097]: I0312 18:48:00.865047 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "73e3cd3a-c873-4b0f-870d-26ba00b0a910" (UID: "73e3cd3a-c873-4b0f-870d-26ba00b0a910"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:48:00.885585 master-0 kubenswrapper[29097]: I0312 18:48:00.885113 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "73e3cd3a-c873-4b0f-870d-26ba00b0a910" (UID: "73e3cd3a-c873-4b0f-870d-26ba00b0a910"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:48:00.885585 master-0 kubenswrapper[29097]: I0312 18:48:00.885225 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "73e3cd3a-c873-4b0f-870d-26ba00b0a910" (UID: "73e3cd3a-c873-4b0f-870d-26ba00b0a910"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:48:00.892483 master-0 kubenswrapper[29097]: I0312 18:48:00.892432 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "73e3cd3a-c873-4b0f-870d-26ba00b0a910" (UID: "73e3cd3a-c873-4b0f-870d-26ba00b0a910"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:48:00.904674 master-0 kubenswrapper[29097]: I0312 18:48:00.903963 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-config-data" (OuterVolumeSpecName: "config-data") pod "c63671d0-70ab-4728-b069-2e44bd47570f" (UID: "c63671d0-70ab-4728-b069-2e44bd47570f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:48:00.907654 master-0 kubenswrapper[29097]: I0312 18:48:00.906897 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c63671d0-70ab-4728-b069-2e44bd47570f" (UID: "c63671d0-70ab-4728-b069-2e44bd47570f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:48:00.960422 master-0 kubenswrapper[29097]: I0312 18:48:00.960082 29097 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.960422 master-0 kubenswrapper[29097]: I0312 18:48:00.960378 29097 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.960422 master-0 kubenswrapper[29097]: I0312 18:48:00.960402 29097 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.960422 master-0 kubenswrapper[29097]: I0312 18:48:00.960415 29097 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/73e3cd3a-c873-4b0f-870d-26ba00b0a910-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.960422 master-0 kubenswrapper[29097]: I0312 18:48:00.960427 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.960921 master-0 kubenswrapper[29097]: I0312 18:48:00.960440 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.960921 master-0 kubenswrapper[29097]: I0312 18:48:00.960452 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63671d0-70ab-4728-b069-2e44bd47570f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:00.960921 master-0 kubenswrapper[29097]: I0312 18:48:00.960642 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-config-data" (OuterVolumeSpecName: "config-data") pod "4da9fa10-24ef-4383-9bc0-1f4872023810" (UID: "4da9fa10-24ef-4383-9bc0-1f4872023810"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:48:01.063632 master-0 kubenswrapper[29097]: I0312 18:48:01.063576 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4da9fa10-24ef-4383-9bc0-1f4872023810-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:01.274416 master-0 kubenswrapper[29097]: I0312 18:48:01.274355 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" event={"ID":"c63671d0-70ab-4728-b069-2e44bd47570f","Type":"ContainerDied","Data":"38411be8da2e08a2881f028f8e15f34fde419c11f4775b3b609fcfa853b939c8"} Mar 12 18:48:01.275228 master-0 kubenswrapper[29097]: I0312 18:48:01.274432 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.275228 master-0 kubenswrapper[29097]: I0312 18:48:01.274446 29097 scope.go:117] "RemoveContainer" containerID="81983d927714fe74b062336fc87e248c805034440ee13429e5fb7372d00e802b" Mar 12 18:48:01.280772 master-0 kubenswrapper[29097]: I0312 18:48:01.280596 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-api-0" event={"ID":"f34dc271-d884-440c-bb41-6ddf5ca8d2c2","Type":"ContainerStarted","Data":"e3c7544abacab414e5e96cffb7c20a1c70883be43cc62512727a4aa04faf04d1"} Mar 12 18:48:01.280772 master-0 kubenswrapper[29097]: I0312 18:48:01.280646 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-api-0" event={"ID":"f34dc271-d884-440c-bb41-6ddf5ca8d2c2","Type":"ContainerStarted","Data":"fcbf694b9517b13f9ade394f8d9a033d7a9e38da514fb4c648b848b330fb53cf"} Mar 12 18:48:01.287453 master-0 kubenswrapper[29097]: I0312 18:48:01.284503 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-mzfh7" 
event={"ID":"64b3a2fa-455e-45a6-a3b4-9763b68a8faa","Type":"ContainerStarted","Data":"dae7cadbc74e2168f49f3aaf41d51c9bf3319c2504a8c94bfb9ad5e77a5727ff"} Mar 12 18:48:01.291781 master-0 kubenswrapper[29097]: I0312 18:48:01.291553 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c5578c7c-zjbch" Mar 12 18:48:01.292682 master-0 kubenswrapper[29097]: I0312 18:48:01.292654 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:01.294795 master-0 kubenswrapper[29097]: I0312 18:48:01.294604 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-scheduler-0" event={"ID":"4da9fa10-24ef-4383-9bc0-1f4872023810","Type":"ContainerDied","Data":"9272b6a0852dd9ab45b30a56570e70b9dbd4d8fb33054eec35d16ab0e1a78c95"} Mar 12 18:48:01.321849 master-0 kubenswrapper[29097]: I0312 18:48:01.321764 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-mzfh7" podStartSLOduration=5.007223451 podStartE2EDuration="28.321743955s" podCreationTimestamp="2026-03-12 18:47:33 +0000 UTC" firstStartedPulling="2026-03-12 18:47:36.29219851 +0000 UTC m=+1095.846178607" lastFinishedPulling="2026-03-12 18:47:59.606719014 +0000 UTC m=+1119.160699111" observedRunningTime="2026-03-12 18:48:01.317092119 +0000 UTC m=+1120.871072226" watchObservedRunningTime="2026-03-12 18:48:01.321743955 +0000 UTC m=+1120.875724052" Mar 12 18:48:01.371567 master-0 kubenswrapper[29097]: I0312 18:48:01.371032 29097 scope.go:117] "RemoveContainer" containerID="085c61cb3700432dc93c244c65631c7bd0b9f38f7c1681dc49e764c765a87128" Mar 12 18:48:01.392710 master-0 kubenswrapper[29097]: I0312 18:48:01.392650 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56c5578c7c-zjbch"] Mar 12 18:48:01.406458 master-0 kubenswrapper[29097]: I0312 18:48:01.403903 29097 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/dnsmasq-dns-56c5578c7c-zjbch"] Mar 12 18:48:01.431553 master-0 kubenswrapper[29097]: I0312 18:48:01.423642 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-fa62f-scheduler-0"] Mar 12 18:48:01.495784 master-0 kubenswrapper[29097]: I0312 18:48:01.493900 29097 scope.go:117] "RemoveContainer" containerID="72a6c88dbe937bf87ed200b6f065ad5bbcf29b0740482acf8f95dbc467597eba" Mar 12 18:48:01.512544 master-0 kubenswrapper[29097]: I0312 18:48:01.512205 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-fa62f-scheduler-0"] Mar 12 18:48:01.536733 master-0 kubenswrapper[29097]: I0312 18:48:01.536607 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-fa62f-volume-lvm-iscsi-0"] Mar 12 18:48:01.545838 master-0 kubenswrapper[29097]: I0312 18:48:01.545420 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-fa62f-scheduler-0"] Mar 12 18:48:01.546687 master-0 kubenswrapper[29097]: E0312 18:48:01.546142 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73e3cd3a-c873-4b0f-870d-26ba00b0a910" containerName="init" Mar 12 18:48:01.546687 master-0 kubenswrapper[29097]: I0312 18:48:01.546159 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="73e3cd3a-c873-4b0f-870d-26ba00b0a910" containerName="init" Mar 12 18:48:01.546687 master-0 kubenswrapper[29097]: E0312 18:48:01.546176 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c63671d0-70ab-4728-b069-2e44bd47570f" containerName="probe" Mar 12 18:48:01.546687 master-0 kubenswrapper[29097]: I0312 18:48:01.546182 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="c63671d0-70ab-4728-b069-2e44bd47570f" containerName="probe" Mar 12 18:48:01.546687 master-0 kubenswrapper[29097]: E0312 18:48:01.546205 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73e3cd3a-c873-4b0f-870d-26ba00b0a910" containerName="dnsmasq-dns" Mar 12 18:48:01.546687 
master-0 kubenswrapper[29097]: I0312 18:48:01.546212 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="73e3cd3a-c873-4b0f-870d-26ba00b0a910" containerName="dnsmasq-dns" Mar 12 18:48:01.546687 master-0 kubenswrapper[29097]: E0312 18:48:01.546238 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da9fa10-24ef-4383-9bc0-1f4872023810" containerName="cinder-scheduler" Mar 12 18:48:01.546687 master-0 kubenswrapper[29097]: I0312 18:48:01.546246 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da9fa10-24ef-4383-9bc0-1f4872023810" containerName="cinder-scheduler" Mar 12 18:48:01.546687 master-0 kubenswrapper[29097]: E0312 18:48:01.546260 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da9fa10-24ef-4383-9bc0-1f4872023810" containerName="probe" Mar 12 18:48:01.546687 master-0 kubenswrapper[29097]: I0312 18:48:01.546267 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da9fa10-24ef-4383-9bc0-1f4872023810" containerName="probe" Mar 12 18:48:01.546687 master-0 kubenswrapper[29097]: E0312 18:48:01.546340 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c63671d0-70ab-4728-b069-2e44bd47570f" containerName="cinder-volume" Mar 12 18:48:01.546687 master-0 kubenswrapper[29097]: I0312 18:48:01.546350 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="c63671d0-70ab-4728-b069-2e44bd47570f" containerName="cinder-volume" Mar 12 18:48:01.546687 master-0 kubenswrapper[29097]: I0312 18:48:01.546576 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="c63671d0-70ab-4728-b069-2e44bd47570f" containerName="probe" Mar 12 18:48:01.547659 master-0 kubenswrapper[29097]: I0312 18:48:01.547033 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="73e3cd3a-c873-4b0f-870d-26ba00b0a910" containerName="dnsmasq-dns" Mar 12 18:48:01.547659 master-0 kubenswrapper[29097]: I0312 18:48:01.547050 29097 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c63671d0-70ab-4728-b069-2e44bd47570f" containerName="cinder-volume" Mar 12 18:48:01.547659 master-0 kubenswrapper[29097]: I0312 18:48:01.547058 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da9fa10-24ef-4383-9bc0-1f4872023810" containerName="probe" Mar 12 18:48:01.547659 master-0 kubenswrapper[29097]: I0312 18:48:01.547074 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da9fa10-24ef-4383-9bc0-1f4872023810" containerName="cinder-scheduler" Mar 12 18:48:01.548465 master-0 kubenswrapper[29097]: I0312 18:48:01.548427 29097 scope.go:117] "RemoveContainer" containerID="1d37547257ae9b62c0150d21d8c4df16f776ab4f2e74f9a7368014c223f280b3" Mar 12 18:48:01.549151 master-0 kubenswrapper[29097]: I0312 18:48:01.549119 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:01.550946 master-0 kubenswrapper[29097]: I0312 18:48:01.550925 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-fa62f-scheduler-config-data" Mar 12 18:48:01.562393 master-0 kubenswrapper[29097]: I0312 18:48:01.562217 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-fa62f-volume-lvm-iscsi-0"] Mar 12 18:48:01.600768 master-0 kubenswrapper[29097]: I0312 18:48:01.600485 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eae648d4-64f1-4b96-aec2-dd0410be0ffd-config-data\") pod \"cinder-fa62f-scheduler-0\" (UID: \"eae648d4-64f1-4b96-aec2-dd0410be0ffd\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:01.600768 master-0 kubenswrapper[29097]: I0312 18:48:01.600615 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eae648d4-64f1-4b96-aec2-dd0410be0ffd-scripts\") pod \"cinder-fa62f-scheduler-0\" (UID: 
\"eae648d4-64f1-4b96-aec2-dd0410be0ffd\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:01.600768 master-0 kubenswrapper[29097]: I0312 18:48:01.600658 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjwln\" (UniqueName: \"kubernetes.io/projected/eae648d4-64f1-4b96-aec2-dd0410be0ffd-kube-api-access-xjwln\") pod \"cinder-fa62f-scheduler-0\" (UID: \"eae648d4-64f1-4b96-aec2-dd0410be0ffd\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:01.600768 master-0 kubenswrapper[29097]: I0312 18:48:01.600770 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eae648d4-64f1-4b96-aec2-dd0410be0ffd-etc-machine-id\") pod \"cinder-fa62f-scheduler-0\" (UID: \"eae648d4-64f1-4b96-aec2-dd0410be0ffd\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:01.601033 master-0 kubenswrapper[29097]: I0312 18:48:01.600818 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eae648d4-64f1-4b96-aec2-dd0410be0ffd-config-data-custom\") pod \"cinder-fa62f-scheduler-0\" (UID: \"eae648d4-64f1-4b96-aec2-dd0410be0ffd\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:01.601033 master-0 kubenswrapper[29097]: I0312 18:48:01.600911 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eae648d4-64f1-4b96-aec2-dd0410be0ffd-combined-ca-bundle\") pod \"cinder-fa62f-scheduler-0\" (UID: \"eae648d4-64f1-4b96-aec2-dd0410be0ffd\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:01.621276 master-0 kubenswrapper[29097]: I0312 18:48:01.621227 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fa62f-scheduler-0"] Mar 12 18:48:01.648176 master-0 kubenswrapper[29097]: I0312 
18:48:01.648111 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-fa62f-volume-lvm-iscsi-0"] Mar 12 18:48:01.653994 master-0 kubenswrapper[29097]: I0312 18:48:01.653823 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.663939 master-0 kubenswrapper[29097]: I0312 18:48:01.663861 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-fa62f-volume-lvm-iscsi-config-data" Mar 12 18:48:01.669598 master-0 kubenswrapper[29097]: I0312 18:48:01.668692 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fa62f-volume-lvm-iscsi-0"] Mar 12 18:48:01.703790 master-0 kubenswrapper[29097]: I0312 18:48:01.703003 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eae648d4-64f1-4b96-aec2-dd0410be0ffd-etc-machine-id\") pod \"cinder-fa62f-scheduler-0\" (UID: \"eae648d4-64f1-4b96-aec2-dd0410be0ffd\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:01.703790 master-0 kubenswrapper[29097]: I0312 18:48:01.703082 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eae648d4-64f1-4b96-aec2-dd0410be0ffd-config-data-custom\") pod \"cinder-fa62f-scheduler-0\" (UID: \"eae648d4-64f1-4b96-aec2-dd0410be0ffd\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:01.703790 master-0 kubenswrapper[29097]: I0312 18:48:01.703301 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eae648d4-64f1-4b96-aec2-dd0410be0ffd-etc-machine-id\") pod \"cinder-fa62f-scheduler-0\" (UID: \"eae648d4-64f1-4b96-aec2-dd0410be0ffd\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:01.703790 master-0 kubenswrapper[29097]: I0312 18:48:01.703389 29097 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eae648d4-64f1-4b96-aec2-dd0410be0ffd-combined-ca-bundle\") pod \"cinder-fa62f-scheduler-0\" (UID: \"eae648d4-64f1-4b96-aec2-dd0410be0ffd\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:01.703790 master-0 kubenswrapper[29097]: I0312 18:48:01.703697 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eae648d4-64f1-4b96-aec2-dd0410be0ffd-config-data\") pod \"cinder-fa62f-scheduler-0\" (UID: \"eae648d4-64f1-4b96-aec2-dd0410be0ffd\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:01.703790 master-0 kubenswrapper[29097]: I0312 18:48:01.703789 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eae648d4-64f1-4b96-aec2-dd0410be0ffd-scripts\") pod \"cinder-fa62f-scheduler-0\" (UID: \"eae648d4-64f1-4b96-aec2-dd0410be0ffd\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:01.704789 master-0 kubenswrapper[29097]: I0312 18:48:01.703828 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjwln\" (UniqueName: \"kubernetes.io/projected/eae648d4-64f1-4b96-aec2-dd0410be0ffd-kube-api-access-xjwln\") pod \"cinder-fa62f-scheduler-0\" (UID: \"eae648d4-64f1-4b96-aec2-dd0410be0ffd\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:01.709088 master-0 kubenswrapper[29097]: I0312 18:48:01.707918 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eae648d4-64f1-4b96-aec2-dd0410be0ffd-config-data\") pod \"cinder-fa62f-scheduler-0\" (UID: \"eae648d4-64f1-4b96-aec2-dd0410be0ffd\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:01.710857 master-0 kubenswrapper[29097]: I0312 18:48:01.709620 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eae648d4-64f1-4b96-aec2-dd0410be0ffd-config-data-custom\") pod \"cinder-fa62f-scheduler-0\" (UID: \"eae648d4-64f1-4b96-aec2-dd0410be0ffd\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:01.710857 master-0 kubenswrapper[29097]: I0312 18:48:01.709899 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eae648d4-64f1-4b96-aec2-dd0410be0ffd-scripts\") pod \"cinder-fa62f-scheduler-0\" (UID: \"eae648d4-64f1-4b96-aec2-dd0410be0ffd\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:01.714306 master-0 kubenswrapper[29097]: I0312 18:48:01.714221 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eae648d4-64f1-4b96-aec2-dd0410be0ffd-combined-ca-bundle\") pod \"cinder-fa62f-scheduler-0\" (UID: \"eae648d4-64f1-4b96-aec2-dd0410be0ffd\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:01.725948 master-0 kubenswrapper[29097]: I0312 18:48:01.725845 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjwln\" (UniqueName: \"kubernetes.io/projected/eae648d4-64f1-4b96-aec2-dd0410be0ffd-kube-api-access-xjwln\") pod \"cinder-fa62f-scheduler-0\" (UID: \"eae648d4-64f1-4b96-aec2-dd0410be0ffd\") " pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:01.806223 master-0 kubenswrapper[29097]: I0312 18:48:01.805796 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-dev\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.806223 master-0 kubenswrapper[29097]: I0312 18:48:01.805881 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91270577-6388-4208-afd7-bdeb3edc3d99-combined-ca-bundle\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.806223 master-0 kubenswrapper[29097]: I0312 18:48:01.805925 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-sys\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.806223 master-0 kubenswrapper[29097]: I0312 18:48:01.805947 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-var-locks-brick\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.806223 master-0 kubenswrapper[29097]: I0312 18:48:01.805977 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-var-locks-cinder\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.806223 master-0 kubenswrapper[29097]: I0312 18:48:01.806013 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-etc-machine-id\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.806223 master-0 
kubenswrapper[29097]: I0312 18:48:01.806051 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91270577-6388-4208-afd7-bdeb3edc3d99-scripts\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.806223 master-0 kubenswrapper[29097]: I0312 18:48:01.806101 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-var-lib-cinder\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.806223 master-0 kubenswrapper[29097]: I0312 18:48:01.806123 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91270577-6388-4208-afd7-bdeb3edc3d99-config-data\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.806223 master-0 kubenswrapper[29097]: I0312 18:48:01.806165 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-etc-nvme\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.806223 master-0 kubenswrapper[29097]: I0312 18:48:01.806181 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44fqx\" (UniqueName: \"kubernetes.io/projected/91270577-6388-4208-afd7-bdeb3edc3d99-kube-api-access-44fqx\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: 
\"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.806223 master-0 kubenswrapper[29097]: I0312 18:48:01.806213 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91270577-6388-4208-afd7-bdeb3edc3d99-config-data-custom\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.806758 master-0 kubenswrapper[29097]: I0312 18:48:01.806250 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-run\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.806758 master-0 kubenswrapper[29097]: I0312 18:48:01.806278 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-etc-iscsi\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.806758 master-0 kubenswrapper[29097]: I0312 18:48:01.806302 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-lib-modules\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.906856 master-0 kubenswrapper[29097]: I0312 18:48:01.906793 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.907881 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-dev\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.907939 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91270577-6388-4208-afd7-bdeb3edc3d99-combined-ca-bundle\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.907961 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-dev\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.907979 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-sys\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.908031 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-sys\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " 
pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.908046 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-var-locks-brick\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.908088 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-var-locks-cinder\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.908140 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-etc-machine-id\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.908200 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91270577-6388-4208-afd7-bdeb3edc3d99-scripts\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.908270 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-var-lib-cinder\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: 
\"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.908310 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91270577-6388-4208-afd7-bdeb3edc3d99-config-data\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.908398 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-etc-nvme\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.908425 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44fqx\" (UniqueName: \"kubernetes.io/projected/91270577-6388-4208-afd7-bdeb3edc3d99-kube-api-access-44fqx\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.908477 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91270577-6388-4208-afd7-bdeb3edc3d99-config-data-custom\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.908551 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-run\") pod 
\"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.908603 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-etc-iscsi\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.908648 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-lib-modules\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.908749 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-lib-modules\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.908813 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-var-lib-cinder\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.909478 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-var-locks-cinder\") 
pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.909685 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-var-locks-brick\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.909716 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-etc-machine-id\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.909755 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-etc-nvme\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.909779 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-run\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.910718 master-0 kubenswrapper[29097]: I0312 18:48:01.909801 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/91270577-6388-4208-afd7-bdeb3edc3d99-etc-iscsi\") pod 
\"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.914704 master-0 kubenswrapper[29097]: I0312 18:48:01.914643 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91270577-6388-4208-afd7-bdeb3edc3d99-config-data-custom\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.918204 master-0 kubenswrapper[29097]: I0312 18:48:01.918167 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91270577-6388-4208-afd7-bdeb3edc3d99-combined-ca-bundle\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.926157 master-0 kubenswrapper[29097]: I0312 18:48:01.926111 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44fqx\" (UniqueName: \"kubernetes.io/projected/91270577-6388-4208-afd7-bdeb3edc3d99-kube-api-access-44fqx\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.931201 master-0 kubenswrapper[29097]: I0312 18:48:01.931151 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91270577-6388-4208-afd7-bdeb3edc3d99-config-data\") pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:01.979933 master-0 kubenswrapper[29097]: I0312 18:48:01.979861 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91270577-6388-4208-afd7-bdeb3edc3d99-scripts\") 
pod \"cinder-fa62f-volume-lvm-iscsi-0\" (UID: \"91270577-6388-4208-afd7-bdeb3edc3d99\") " pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:02.278540 master-0 kubenswrapper[29097]: I0312 18:48:02.278474 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:02.331446 master-0 kubenswrapper[29097]: I0312 18:48:02.331009 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-api-0" event={"ID":"f34dc271-d884-440c-bb41-6ddf5ca8d2c2","Type":"ContainerStarted","Data":"536a97a441e3f71b28e0142e100a7791d1c1188f97b352d13c63cec08c0d68b6"} Mar 12 18:48:02.331446 master-0 kubenswrapper[29097]: I0312 18:48:02.331200 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-fa62f-api-0" Mar 12 18:48:02.433706 master-0 kubenswrapper[29097]: I0312 18:48:02.433238 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-fa62f-api-0" podStartSLOduration=8.433219396 podStartE2EDuration="8.433219396s" podCreationTimestamp="2026-03-12 18:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:48:02.363832944 +0000 UTC m=+1121.917813041" watchObservedRunningTime="2026-03-12 18:48:02.433219396 +0000 UTC m=+1121.987199493" Mar 12 18:48:02.440008 master-0 kubenswrapper[29097]: I0312 18:48:02.435211 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fa62f-scheduler-0"] Mar 12 18:48:02.735537 master-0 kubenswrapper[29097]: I0312 18:48:02.735423 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4da9fa10-24ef-4383-9bc0-1f4872023810" path="/var/lib/kubelet/pods/4da9fa10-24ef-4383-9bc0-1f4872023810/volumes" Mar 12 18:48:02.736439 master-0 kubenswrapper[29097]: I0312 18:48:02.736423 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="73e3cd3a-c873-4b0f-870d-26ba00b0a910" path="/var/lib/kubelet/pods/73e3cd3a-c873-4b0f-870d-26ba00b0a910/volumes" Mar 12 18:48:02.737160 master-0 kubenswrapper[29097]: I0312 18:48:02.737143 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c63671d0-70ab-4728-b069-2e44bd47570f" path="/var/lib/kubelet/pods/c63671d0-70ab-4728-b069-2e44bd47570f/volumes" Mar 12 18:48:02.865878 master-0 kubenswrapper[29097]: W0312 18:48:02.865846 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91270577_6388_4208_afd7_bdeb3edc3d99.slice/crio-46b61ef8367dbfb1906df325ea43d5804e37c18e072eceb67beec8ba1f3357fd WatchSource:0}: Error finding container 46b61ef8367dbfb1906df325ea43d5804e37c18e072eceb67beec8ba1f3357fd: Status 404 returned error can't find the container with id 46b61ef8367dbfb1906df325ea43d5804e37c18e072eceb67beec8ba1f3357fd Mar 12 18:48:02.876483 master-0 kubenswrapper[29097]: I0312 18:48:02.876436 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fa62f-volume-lvm-iscsi-0"] Mar 12 18:48:03.357175 master-0 kubenswrapper[29097]: I0312 18:48:03.357118 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-scheduler-0" event={"ID":"eae648d4-64f1-4b96-aec2-dd0410be0ffd","Type":"ContainerStarted","Data":"36f7fe05d4006481055f488e72a7bc35a8162068f7b794562da491ec146ad1a1"} Mar 12 18:48:03.357610 master-0 kubenswrapper[29097]: I0312 18:48:03.357182 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-scheduler-0" event={"ID":"eae648d4-64f1-4b96-aec2-dd0410be0ffd","Type":"ContainerStarted","Data":"2a09c5eb4827053aaef3b1b44c780ad769d2af305fba04d278680b7baae3fb22"} Mar 12 18:48:03.362700 master-0 kubenswrapper[29097]: I0312 18:48:03.362656 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" 
event={"ID":"91270577-6388-4208-afd7-bdeb3edc3d99","Type":"ContainerStarted","Data":"549fcf6ea92943a7535c220def068aab25231738abe3cf98462bb806102a240d"} Mar 12 18:48:03.362837 master-0 kubenswrapper[29097]: I0312 18:48:03.362702 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" event={"ID":"91270577-6388-4208-afd7-bdeb3edc3d99","Type":"ContainerStarted","Data":"46b61ef8367dbfb1906df325ea43d5804e37c18e072eceb67beec8ba1f3357fd"} Mar 12 18:48:04.233698 master-0 kubenswrapper[29097]: I0312 18:48:04.233181 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fa62f-backup-0" Mar 12 18:48:04.292680 master-0 kubenswrapper[29097]: I0312 18:48:04.292612 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-scripts\") pod \"e7c09427-e3b8-48df-9082-0520d2fc9b23\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " Mar 12 18:48:04.292871 master-0 kubenswrapper[29097]: I0312 18:48:04.292718 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-var-locks-brick\") pod \"e7c09427-e3b8-48df-9082-0520d2fc9b23\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " Mar 12 18:48:04.292871 master-0 kubenswrapper[29097]: I0312 18:48:04.292750 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-sys\") pod \"e7c09427-e3b8-48df-9082-0520d2fc9b23\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " Mar 12 18:48:04.292871 master-0 kubenswrapper[29097]: I0312 18:48:04.292775 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: 
\"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-var-locks-cinder\") pod \"e7c09427-e3b8-48df-9082-0520d2fc9b23\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " Mar 12 18:48:04.293002 master-0 kubenswrapper[29097]: I0312 18:48:04.292879 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-dev\") pod \"e7c09427-e3b8-48df-9082-0520d2fc9b23\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " Mar 12 18:48:04.293002 master-0 kubenswrapper[29097]: I0312 18:48:04.292934 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-config-data\") pod \"e7c09427-e3b8-48df-9082-0520d2fc9b23\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " Mar 12 18:48:04.293002 master-0 kubenswrapper[29097]: I0312 18:48:04.292949 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-lib-modules\") pod \"e7c09427-e3b8-48df-9082-0520d2fc9b23\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " Mar 12 18:48:04.293002 master-0 kubenswrapper[29097]: I0312 18:48:04.292965 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-etc-iscsi\") pod \"e7c09427-e3b8-48df-9082-0520d2fc9b23\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " Mar 12 18:48:04.293002 master-0 kubenswrapper[29097]: I0312 18:48:04.292986 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-combined-ca-bundle\") pod \"e7c09427-e3b8-48df-9082-0520d2fc9b23\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") " Mar 12 18:48:04.293147 
master-0 kubenswrapper[29097]: I0312 18:48:04.293017 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-run\") pod \"e7c09427-e3b8-48df-9082-0520d2fc9b23\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") "
Mar 12 18:48:04.293147 master-0 kubenswrapper[29097]: I0312 18:48:04.293038 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-etc-nvme\") pod \"e7c09427-e3b8-48df-9082-0520d2fc9b23\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") "
Mar 12 18:48:04.293147 master-0 kubenswrapper[29097]: I0312 18:48:04.293077 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-var-lib-cinder\") pod \"e7c09427-e3b8-48df-9082-0520d2fc9b23\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") "
Mar 12 18:48:04.293147 master-0 kubenswrapper[29097]: I0312 18:48:04.293121 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lq4d\" (UniqueName: \"kubernetes.io/projected/e7c09427-e3b8-48df-9082-0520d2fc9b23-kube-api-access-2lq4d\") pod \"e7c09427-e3b8-48df-9082-0520d2fc9b23\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") "
Mar 12 18:48:04.293266 master-0 kubenswrapper[29097]: I0312 18:48:04.293204 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-config-data-custom\") pod \"e7c09427-e3b8-48df-9082-0520d2fc9b23\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") "
Mar 12 18:48:04.293266 master-0 kubenswrapper[29097]: I0312 18:48:04.293223 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-etc-machine-id\") pod \"e7c09427-e3b8-48df-9082-0520d2fc9b23\" (UID: \"e7c09427-e3b8-48df-9082-0520d2fc9b23\") "
Mar 12 18:48:04.293351 master-0 kubenswrapper[29097]: I0312 18:48:04.293313 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-sys" (OuterVolumeSpecName: "sys") pod "e7c09427-e3b8-48df-9082-0520d2fc9b23" (UID: "e7c09427-e3b8-48df-9082-0520d2fc9b23"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:48:04.293780 master-0 kubenswrapper[29097]: I0312 18:48:04.293765 29097 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-sys\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:04.293846 master-0 kubenswrapper[29097]: I0312 18:48:04.293832 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e7c09427-e3b8-48df-9082-0520d2fc9b23" (UID: "e7c09427-e3b8-48df-9082-0520d2fc9b23"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:48:04.293909 master-0 kubenswrapper[29097]: I0312 18:48:04.293865 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e7c09427-e3b8-48df-9082-0520d2fc9b23" (UID: "e7c09427-e3b8-48df-9082-0520d2fc9b23"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:48:04.293909 master-0 kubenswrapper[29097]: I0312 18:48:04.293883 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "e7c09427-e3b8-48df-9082-0520d2fc9b23" (UID: "e7c09427-e3b8-48df-9082-0520d2fc9b23"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:48:04.297683 master-0 kubenswrapper[29097]: I0312 18:48:04.297537 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "e7c09427-e3b8-48df-9082-0520d2fc9b23" (UID: "e7c09427-e3b8-48df-9082-0520d2fc9b23"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:48:04.297683 master-0 kubenswrapper[29097]: I0312 18:48:04.297612 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "e7c09427-e3b8-48df-9082-0520d2fc9b23" (UID: "e7c09427-e3b8-48df-9082-0520d2fc9b23"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:48:04.297894 master-0 kubenswrapper[29097]: I0312 18:48:04.297699 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-dev" (OuterVolumeSpecName: "dev") pod "e7c09427-e3b8-48df-9082-0520d2fc9b23" (UID: "e7c09427-e3b8-48df-9082-0520d2fc9b23"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:48:04.297894 master-0 kubenswrapper[29097]: I0312 18:48:04.297743 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "e7c09427-e3b8-48df-9082-0520d2fc9b23" (UID: "e7c09427-e3b8-48df-9082-0520d2fc9b23"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:48:04.297894 master-0 kubenswrapper[29097]: I0312 18:48:04.297761 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "e7c09427-e3b8-48df-9082-0520d2fc9b23" (UID: "e7c09427-e3b8-48df-9082-0520d2fc9b23"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:48:04.300335 master-0 kubenswrapper[29097]: I0312 18:48:04.300292 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-run" (OuterVolumeSpecName: "run") pod "e7c09427-e3b8-48df-9082-0520d2fc9b23" (UID: "e7c09427-e3b8-48df-9082-0520d2fc9b23"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 18:48:04.307962 master-0 kubenswrapper[29097]: I0312 18:48:04.307910 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7c09427-e3b8-48df-9082-0520d2fc9b23-kube-api-access-2lq4d" (OuterVolumeSpecName: "kube-api-access-2lq4d") pod "e7c09427-e3b8-48df-9082-0520d2fc9b23" (UID: "e7c09427-e3b8-48df-9082-0520d2fc9b23"). InnerVolumeSpecName "kube-api-access-2lq4d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:48:04.312980 master-0 kubenswrapper[29097]: I0312 18:48:04.312445 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-scripts" (OuterVolumeSpecName: "scripts") pod "e7c09427-e3b8-48df-9082-0520d2fc9b23" (UID: "e7c09427-e3b8-48df-9082-0520d2fc9b23"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:04.331852 master-0 kubenswrapper[29097]: I0312 18:48:04.331774 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e7c09427-e3b8-48df-9082-0520d2fc9b23" (UID: "e7c09427-e3b8-48df-9082-0520d2fc9b23"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:04.397070 master-0 kubenswrapper[29097]: I0312 18:48:04.396865 29097 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-config-data-custom\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:04.397070 master-0 kubenswrapper[29097]: I0312 18:48:04.396899 29097 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:04.397070 master-0 kubenswrapper[29097]: I0312 18:48:04.396909 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:04.397070 master-0 kubenswrapper[29097]: I0312 18:48:04.396918 29097 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-var-locks-brick\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:04.397070 master-0 kubenswrapper[29097]: I0312 18:48:04.396926 29097 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-var-locks-cinder\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:04.397070 master-0 kubenswrapper[29097]: I0312 18:48:04.396934 29097 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-dev\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:04.397070 master-0 kubenswrapper[29097]: I0312 18:48:04.396944 29097 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-lib-modules\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:04.397070 master-0 kubenswrapper[29097]: I0312 18:48:04.396953 29097 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-etc-iscsi\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:04.397070 master-0 kubenswrapper[29097]: I0312 18:48:04.396960 29097 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-run\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:04.397070 master-0 kubenswrapper[29097]: I0312 18:48:04.396975 29097 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-etc-nvme\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:04.397070 master-0 kubenswrapper[29097]: I0312 18:48:04.396983 29097 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e7c09427-e3b8-48df-9082-0520d2fc9b23-var-lib-cinder\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:04.397070 master-0 kubenswrapper[29097]: I0312 18:48:04.396994 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lq4d\" (UniqueName: \"kubernetes.io/projected/e7c09427-e3b8-48df-9082-0520d2fc9b23-kube-api-access-2lq4d\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:04.402386 master-0 kubenswrapper[29097]: I0312 18:48:04.401890 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" event={"ID":"91270577-6388-4208-afd7-bdeb3edc3d99","Type":"ContainerStarted","Data":"6d11c2221bfec1fee7d955872f404df0ea7798ed26f290c9d3564f23d9e290f9"}
Mar 12 18:48:04.418579 master-0 kubenswrapper[29097]: I0312 18:48:04.417937 29097 generic.go:334] "Generic (PLEG): container finished" podID="e7c09427-e3b8-48df-9082-0520d2fc9b23" containerID="8482d0d8b6612ea6d2febb1b09a99798553f765614e3291bfb4f5587a0357362" exitCode=0
Mar 12 18:48:04.418579 master-0 kubenswrapper[29097]: I0312 18:48:04.417993 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-backup-0" event={"ID":"e7c09427-e3b8-48df-9082-0520d2fc9b23","Type":"ContainerDied","Data":"8482d0d8b6612ea6d2febb1b09a99798553f765614e3291bfb4f5587a0357362"}
Mar 12 18:48:04.418579 master-0 kubenswrapper[29097]: I0312 18:48:04.418053 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-backup-0" event={"ID":"e7c09427-e3b8-48df-9082-0520d2fc9b23","Type":"ContainerDied","Data":"34d4697af4e495be980cfc94fff7f8f17c9984c3dd647215185a1b0ea0700b08"}
Mar 12 18:48:04.418579 master-0 kubenswrapper[29097]: I0312 18:48:04.418075 29097 scope.go:117] "RemoveContainer" containerID="8c0b0831b33cc256833646550454b098834b105b836510fce944e693959a7cae"
Mar 12 18:48:04.418579 master-0 kubenswrapper[29097]: I0312 18:48:04.418020 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:04.429808 master-0 kubenswrapper[29097]: I0312 18:48:04.429748 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-scheduler-0" event={"ID":"eae648d4-64f1-4b96-aec2-dd0410be0ffd","Type":"ContainerStarted","Data":"b9ba6f6c64d904050fea4ed2337f90a9afd5aa7d66591f1581dceb7bed30ec8a"}
Mar 12 18:48:04.442561 master-0 kubenswrapper[29097]: I0312 18:48:04.440886 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" podStartSLOduration=3.440865236 podStartE2EDuration="3.440865236s" podCreationTimestamp="2026-03-12 18:48:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:48:04.43218237 +0000 UTC m=+1123.986162497" watchObservedRunningTime="2026-03-12 18:48:04.440865236 +0000 UTC m=+1123.994845333"
Mar 12 18:48:04.461061 master-0 kubenswrapper[29097]: I0312 18:48:04.460992 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7c09427-e3b8-48df-9082-0520d2fc9b23" (UID: "e7c09427-e3b8-48df-9082-0520d2fc9b23"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:04.501019 master-0 kubenswrapper[29097]: I0312 18:48:04.500806 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:04.521761 master-0 kubenswrapper[29097]: I0312 18:48:04.520941 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-config-data" (OuterVolumeSpecName: "config-data") pod "e7c09427-e3b8-48df-9082-0520d2fc9b23" (UID: "e7c09427-e3b8-48df-9082-0520d2fc9b23"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:04.576571 master-0 kubenswrapper[29097]: I0312 18:48:04.576489 29097 scope.go:117] "RemoveContainer" containerID="8482d0d8b6612ea6d2febb1b09a99798553f765614e3291bfb4f5587a0357362"
Mar 12 18:48:04.604101 master-0 kubenswrapper[29097]: I0312 18:48:04.604046 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7c09427-e3b8-48df-9082-0520d2fc9b23-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:04.604990 master-0 kubenswrapper[29097]: I0312 18:48:04.604934 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-fa62f-scheduler-0" podStartSLOduration=3.60491647 podStartE2EDuration="3.60491647s" podCreationTimestamp="2026-03-12 18:48:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:48:04.602386637 +0000 UTC m=+1124.156366744" watchObservedRunningTime="2026-03-12 18:48:04.60491647 +0000 UTC m=+1124.158896567"
Mar 12 18:48:04.621005 master-0 kubenswrapper[29097]: I0312 18:48:04.620041 29097 scope.go:117] "RemoveContainer" containerID="8c0b0831b33cc256833646550454b098834b105b836510fce944e693959a7cae"
Mar 12 18:48:04.628640 master-0 kubenswrapper[29097]: E0312 18:48:04.621648 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c0b0831b33cc256833646550454b098834b105b836510fce944e693959a7cae\": container with ID starting with 8c0b0831b33cc256833646550454b098834b105b836510fce944e693959a7cae not found: ID does not exist" containerID="8c0b0831b33cc256833646550454b098834b105b836510fce944e693959a7cae"
Mar 12 18:48:04.628640 master-0 kubenswrapper[29097]: I0312 18:48:04.621683 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c0b0831b33cc256833646550454b098834b105b836510fce944e693959a7cae"} err="failed to get container status \"8c0b0831b33cc256833646550454b098834b105b836510fce944e693959a7cae\": rpc error: code = NotFound desc = could not find container \"8c0b0831b33cc256833646550454b098834b105b836510fce944e693959a7cae\": container with ID starting with 8c0b0831b33cc256833646550454b098834b105b836510fce944e693959a7cae not found: ID does not exist"
Mar 12 18:48:04.628640 master-0 kubenswrapper[29097]: I0312 18:48:04.621703 29097 scope.go:117] "RemoveContainer" containerID="8482d0d8b6612ea6d2febb1b09a99798553f765614e3291bfb4f5587a0357362"
Mar 12 18:48:04.628640 master-0 kubenswrapper[29097]: E0312 18:48:04.622566 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8482d0d8b6612ea6d2febb1b09a99798553f765614e3291bfb4f5587a0357362\": container with ID starting with 8482d0d8b6612ea6d2febb1b09a99798553f765614e3291bfb4f5587a0357362 not found: ID does not exist" containerID="8482d0d8b6612ea6d2febb1b09a99798553f765614e3291bfb4f5587a0357362"
Mar 12 18:48:04.628640 master-0 kubenswrapper[29097]: I0312 18:48:04.622599 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8482d0d8b6612ea6d2febb1b09a99798553f765614e3291bfb4f5587a0357362"} err="failed to get container status \"8482d0d8b6612ea6d2febb1b09a99798553f765614e3291bfb4f5587a0357362\": rpc error: code = NotFound desc = could not find container \"8482d0d8b6612ea6d2febb1b09a99798553f765614e3291bfb4f5587a0357362\": container with ID starting with 8482d0d8b6612ea6d2febb1b09a99798553f765614e3291bfb4f5587a0357362 not found: ID does not exist"
Mar 12 18:48:04.868186 master-0 kubenswrapper[29097]: I0312 18:48:04.868119 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-fa62f-backup-0"]
Mar 12 18:48:05.192021 master-0 kubenswrapper[29097]: I0312 18:48:05.191959 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-fa62f-backup-0"]
Mar 12 18:48:05.258361 master-0 kubenswrapper[29097]: I0312 18:48:05.257052 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-fa62f-backup-0"]
Mar 12 18:48:05.258361 master-0 kubenswrapper[29097]: E0312 18:48:05.257605 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c09427-e3b8-48df-9082-0520d2fc9b23" containerName="probe"
Mar 12 18:48:05.258361 master-0 kubenswrapper[29097]: I0312 18:48:05.257622 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c09427-e3b8-48df-9082-0520d2fc9b23" containerName="probe"
Mar 12 18:48:05.258361 master-0 kubenswrapper[29097]: E0312 18:48:05.257639 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c09427-e3b8-48df-9082-0520d2fc9b23" containerName="cinder-backup"
Mar 12 18:48:05.258361 master-0 kubenswrapper[29097]: I0312 18:48:05.257646 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c09427-e3b8-48df-9082-0520d2fc9b23" containerName="cinder-backup"
Mar 12 18:48:05.258361 master-0 kubenswrapper[29097]: I0312 18:48:05.257845 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7c09427-e3b8-48df-9082-0520d2fc9b23" containerName="cinder-backup"
Mar 12 18:48:05.258361 master-0 kubenswrapper[29097]: I0312 18:48:05.257881 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7c09427-e3b8-48df-9082-0520d2fc9b23" containerName="probe"
Mar 12 18:48:05.259055 master-0 kubenswrapper[29097]: I0312 18:48:05.259016 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.262649 master-0 kubenswrapper[29097]: I0312 18:48:05.261739 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-fa62f-backup-config-data"
Mar 12 18:48:05.322162 master-0 kubenswrapper[29097]: I0312 18:48:05.319730 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-lib-modules\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.322162 master-0 kubenswrapper[29097]: I0312 18:48:05.319803 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-etc-iscsi\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.322162 master-0 kubenswrapper[29097]: I0312 18:48:05.319833 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-dev\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.322162 master-0 kubenswrapper[29097]: I0312 18:48:05.319865 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6459537-fb89-4b67-8478-89b1dd4a397e-combined-ca-bundle\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.322162 master-0 kubenswrapper[29097]: I0312 18:48:05.319913 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-run\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.322162 master-0 kubenswrapper[29097]: I0312 18:48:05.319936 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-var-locks-cinder\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.322162 master-0 kubenswrapper[29097]: I0312 18:48:05.319961 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-var-lib-cinder\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.322162 master-0 kubenswrapper[29097]: I0312 18:48:05.320015 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-etc-machine-id\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.322162 master-0 kubenswrapper[29097]: I0312 18:48:05.320048 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-etc-nvme\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.322162 master-0 kubenswrapper[29097]: I0312 18:48:05.320108 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6459537-fb89-4b67-8478-89b1dd4a397e-scripts\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.322162 master-0 kubenswrapper[29097]: I0312 18:48:05.320189 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6459537-fb89-4b67-8478-89b1dd4a397e-config-data\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.322162 master-0 kubenswrapper[29097]: I0312 18:48:05.320228 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-var-locks-brick\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.322162 master-0 kubenswrapper[29097]: I0312 18:48:05.320293 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs7d4\" (UniqueName: \"kubernetes.io/projected/b6459537-fb89-4b67-8478-89b1dd4a397e-kube-api-access-cs7d4\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.322162 master-0 kubenswrapper[29097]: I0312 18:48:05.320345 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6459537-fb89-4b67-8478-89b1dd4a397e-config-data-custom\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.322162 master-0 kubenswrapper[29097]: I0312 18:48:05.320368 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-sys\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.348878 master-0 kubenswrapper[29097]: I0312 18:48:05.330936 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fa62f-backup-0"]
Mar 12 18:48:05.421776 master-0 kubenswrapper[29097]: I0312 18:48:05.421717 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6459537-fb89-4b67-8478-89b1dd4a397e-config-data-custom\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.421776 master-0 kubenswrapper[29097]: I0312 18:48:05.421776 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-sys\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.422453 master-0 kubenswrapper[29097]: I0312 18:48:05.422190 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-lib-modules\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.422453 master-0 kubenswrapper[29097]: I0312 18:48:05.422321 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-etc-iscsi\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.422453 master-0 kubenswrapper[29097]: I0312 18:48:05.422367 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-dev\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.422594 master-0 kubenswrapper[29097]: I0312 18:48:05.422484 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-lib-modules\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.422594 master-0 kubenswrapper[29097]: I0312 18:48:05.422539 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6459537-fb89-4b67-8478-89b1dd4a397e-combined-ca-bundle\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.422686 master-0 kubenswrapper[29097]: I0312 18:48:05.422668 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-sys\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.422725 master-0 kubenswrapper[29097]: I0312 18:48:05.422705 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-etc-iscsi\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.422764 master-0 kubenswrapper[29097]: I0312 18:48:05.422730 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-dev\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.424180 master-0 kubenswrapper[29097]: I0312 18:48:05.422846 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-run\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.424180 master-0 kubenswrapper[29097]: I0312 18:48:05.422868 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-var-locks-cinder\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.424180 master-0 kubenswrapper[29097]: I0312 18:48:05.422892 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-var-lib-cinder\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.424180 master-0 kubenswrapper[29097]: I0312 18:48:05.422986 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-etc-machine-id\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.424180 master-0 kubenswrapper[29097]: I0312 18:48:05.423020 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-etc-nvme\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.424180 master-0 kubenswrapper[29097]: I0312 18:48:05.423134 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6459537-fb89-4b67-8478-89b1dd4a397e-scripts\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.424180 master-0 kubenswrapper[29097]: I0312 18:48:05.423206 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6459537-fb89-4b67-8478-89b1dd4a397e-config-data\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.424180 master-0 kubenswrapper[29097]: I0312 18:48:05.423226 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-var-locks-brick\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.424180 master-0 kubenswrapper[29097]: I0312 18:48:05.423276 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-etc-machine-id\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.424180 master-0 kubenswrapper[29097]: I0312 18:48:05.423357 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-var-locks-brick\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.424180 master-0 kubenswrapper[29097]: I0312 18:48:05.423371 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs7d4\" (UniqueName: \"kubernetes.io/projected/b6459537-fb89-4b67-8478-89b1dd4a397e-kube-api-access-cs7d4\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.424180 master-0 kubenswrapper[29097]: I0312 18:48:05.423399 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-etc-nvme\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.424180 master-0 kubenswrapper[29097]: I0312 18:48:05.423581 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-var-lib-cinder\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.424180 master-0 kubenswrapper[29097]: I0312 18:48:05.424086 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-var-locks-cinder\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.424904 master-0 kubenswrapper[29097]: I0312 18:48:05.424585 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b6459537-fb89-4b67-8478-89b1dd4a397e-run\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.426798 master-0 kubenswrapper[29097]: I0312 18:48:05.426760 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6459537-fb89-4b67-8478-89b1dd4a397e-config-data-custom\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.428125 master-0 kubenswrapper[29097]: I0312 18:48:05.428050 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6459537-fb89-4b67-8478-89b1dd4a397e-combined-ca-bundle\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.428505 master-0 kubenswrapper[29097]: I0312 18:48:05.428472 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6459537-fb89-4b67-8478-89b1dd4a397e-config-data\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.434024 master-0 kubenswrapper[29097]: I0312 18:48:05.433943 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6459537-fb89-4b67-8478-89b1dd4a397e-scripts\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.524990 master-0 kubenswrapper[29097]: I0312 18:48:05.524600 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs7d4\" (UniqueName: \"kubernetes.io/projected/b6459537-fb89-4b67-8478-89b1dd4a397e-kube-api-access-cs7d4\") pod \"cinder-fa62f-backup-0\" (UID: \"b6459537-fb89-4b67-8478-89b1dd4a397e\") " pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:05.584293 master-0 kubenswrapper[29097]: I0312 18:48:05.584245 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fa62f-backup-0"
Mar 12 18:48:06.316567 master-0 kubenswrapper[29097]: I0312 18:48:06.314603 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fa62f-backup-0"]
Mar 12 18:48:06.472816 master-0 kubenswrapper[29097]: I0312 18:48:06.472759 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-backup-0" event={"ID":"b6459537-fb89-4b67-8478-89b1dd4a397e","Type":"ContainerStarted","Data":"b789957758b599aae78015a7b6db54423e40b64b3967ea405df3f7f0cd477ba8"}
Mar 12 18:48:06.741676 master-0 kubenswrapper[29097]: I0312 18:48:06.741211 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7c09427-e3b8-48df-9082-0520d2fc9b23" path="/var/lib/kubelet/pods/e7c09427-e3b8-48df-9082-0520d2fc9b23/volumes"
Mar 12 18:48:06.907566 master-0 kubenswrapper[29097]: I0312 18:48:06.907506 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-fa62f-scheduler-0"
Mar 12 18:48:07.279552 master-0 kubenswrapper[29097]: I0312 18:48:07.279473 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0"
Mar 12 18:48:07.490432 master-0 kubenswrapper[29097]: I0312 18:48:07.490331 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-backup-0" event={"ID":"b6459537-fb89-4b67-8478-89b1dd4a397e","Type":"ContainerStarted","Data":"05d7d559d35a8cd08ae27d9fde1cc84f9987f41dd8e0fd147689cf0a3521047d"}
Mar 12
18:48:07.490967 master-0 kubenswrapper[29097]: I0312 18:48:07.490948 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fa62f-backup-0" event={"ID":"b6459537-fb89-4b67-8478-89b1dd4a397e","Type":"ContainerStarted","Data":"7f61ac164f9f6f4c2134ef11ccfde80aa92df69256b26807f979abcea3e82f57"} Mar 12 18:48:07.519565 master-0 kubenswrapper[29097]: I0312 18:48:07.519484 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-fa62f-backup-0" podStartSLOduration=2.519466198 podStartE2EDuration="2.519466198s" podCreationTimestamp="2026-03-12 18:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:48:07.514324189 +0000 UTC m=+1127.068304286" watchObservedRunningTime="2026-03-12 18:48:07.519466198 +0000 UTC m=+1127.073446295" Mar 12 18:48:08.383901 master-0 kubenswrapper[29097]: I0312 18:48:08.381399 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b99b5d8f4-vzr8p" Mar 12 18:48:08.390433 master-0 kubenswrapper[29097]: I0312 18:48:08.390363 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b99b5d8f4-vzr8p" Mar 12 18:48:08.795543 master-0 kubenswrapper[29097]: I0312 18:48:08.794888 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5664b69d46-kq48m"] Mar 12 18:48:08.803562 master-0 kubenswrapper[29097]: I0312 18:48:08.803308 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:08.816820 master-0 kubenswrapper[29097]: I0312 18:48:08.816321 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5664b69d46-kq48m"] Mar 12 18:48:08.927632 master-0 kubenswrapper[29097]: I0312 18:48:08.927162 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cd110a9-8d4a-4c00-9b94-1a2cb117d463-public-tls-certs\") pod \"placement-5664b69d46-kq48m\" (UID: \"8cd110a9-8d4a-4c00-9b94-1a2cb117d463\") " pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:08.927632 master-0 kubenswrapper[29097]: I0312 18:48:08.927258 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cd110a9-8d4a-4c00-9b94-1a2cb117d463-config-data\") pod \"placement-5664b69d46-kq48m\" (UID: \"8cd110a9-8d4a-4c00-9b94-1a2cb117d463\") " pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:08.927632 master-0 kubenswrapper[29097]: I0312 18:48:08.927366 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cd110a9-8d4a-4c00-9b94-1a2cb117d463-scripts\") pod \"placement-5664b69d46-kq48m\" (UID: \"8cd110a9-8d4a-4c00-9b94-1a2cb117d463\") " pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:08.927632 master-0 kubenswrapper[29097]: I0312 18:48:08.927397 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cd110a9-8d4a-4c00-9b94-1a2cb117d463-combined-ca-bundle\") pod \"placement-5664b69d46-kq48m\" (UID: \"8cd110a9-8d4a-4c00-9b94-1a2cb117d463\") " pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:08.927632 master-0 kubenswrapper[29097]: I0312 18:48:08.927423 29097 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cd110a9-8d4a-4c00-9b94-1a2cb117d463-internal-tls-certs\") pod \"placement-5664b69d46-kq48m\" (UID: \"8cd110a9-8d4a-4c00-9b94-1a2cb117d463\") " pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:08.927632 master-0 kubenswrapper[29097]: I0312 18:48:08.927448 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cd110a9-8d4a-4c00-9b94-1a2cb117d463-logs\") pod \"placement-5664b69d46-kq48m\" (UID: \"8cd110a9-8d4a-4c00-9b94-1a2cb117d463\") " pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:08.927632 master-0 kubenswrapper[29097]: I0312 18:48:08.927463 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd7gn\" (UniqueName: \"kubernetes.io/projected/8cd110a9-8d4a-4c00-9b94-1a2cb117d463-kube-api-access-xd7gn\") pod \"placement-5664b69d46-kq48m\" (UID: \"8cd110a9-8d4a-4c00-9b94-1a2cb117d463\") " pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:09.029603 master-0 kubenswrapper[29097]: I0312 18:48:09.029432 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cd110a9-8d4a-4c00-9b94-1a2cb117d463-config-data\") pod \"placement-5664b69d46-kq48m\" (UID: \"8cd110a9-8d4a-4c00-9b94-1a2cb117d463\") " pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:09.029603 master-0 kubenswrapper[29097]: I0312 18:48:09.029568 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cd110a9-8d4a-4c00-9b94-1a2cb117d463-scripts\") pod \"placement-5664b69d46-kq48m\" (UID: \"8cd110a9-8d4a-4c00-9b94-1a2cb117d463\") " pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:09.029603 master-0 kubenswrapper[29097]: 
I0312 18:48:09.029594 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cd110a9-8d4a-4c00-9b94-1a2cb117d463-combined-ca-bundle\") pod \"placement-5664b69d46-kq48m\" (UID: \"8cd110a9-8d4a-4c00-9b94-1a2cb117d463\") " pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:09.029603 master-0 kubenswrapper[29097]: I0312 18:48:09.029618 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cd110a9-8d4a-4c00-9b94-1a2cb117d463-internal-tls-certs\") pod \"placement-5664b69d46-kq48m\" (UID: \"8cd110a9-8d4a-4c00-9b94-1a2cb117d463\") " pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:09.029997 master-0 kubenswrapper[29097]: I0312 18:48:09.029635 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cd110a9-8d4a-4c00-9b94-1a2cb117d463-logs\") pod \"placement-5664b69d46-kq48m\" (UID: \"8cd110a9-8d4a-4c00-9b94-1a2cb117d463\") " pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:09.029997 master-0 kubenswrapper[29097]: I0312 18:48:09.029655 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd7gn\" (UniqueName: \"kubernetes.io/projected/8cd110a9-8d4a-4c00-9b94-1a2cb117d463-kube-api-access-xd7gn\") pod \"placement-5664b69d46-kq48m\" (UID: \"8cd110a9-8d4a-4c00-9b94-1a2cb117d463\") " pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:09.029997 master-0 kubenswrapper[29097]: I0312 18:48:09.029749 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cd110a9-8d4a-4c00-9b94-1a2cb117d463-public-tls-certs\") pod \"placement-5664b69d46-kq48m\" (UID: \"8cd110a9-8d4a-4c00-9b94-1a2cb117d463\") " pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:09.031695 master-0 kubenswrapper[29097]: 
I0312 18:48:09.031673 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cd110a9-8d4a-4c00-9b94-1a2cb117d463-logs\") pod \"placement-5664b69d46-kq48m\" (UID: \"8cd110a9-8d4a-4c00-9b94-1a2cb117d463\") " pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:09.047297 master-0 kubenswrapper[29097]: I0312 18:48:09.037213 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cd110a9-8d4a-4c00-9b94-1a2cb117d463-public-tls-certs\") pod \"placement-5664b69d46-kq48m\" (UID: \"8cd110a9-8d4a-4c00-9b94-1a2cb117d463\") " pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:09.047297 master-0 kubenswrapper[29097]: I0312 18:48:09.037555 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cd110a9-8d4a-4c00-9b94-1a2cb117d463-scripts\") pod \"placement-5664b69d46-kq48m\" (UID: \"8cd110a9-8d4a-4c00-9b94-1a2cb117d463\") " pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:09.047297 master-0 kubenswrapper[29097]: I0312 18:48:09.037726 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cd110a9-8d4a-4c00-9b94-1a2cb117d463-combined-ca-bundle\") pod \"placement-5664b69d46-kq48m\" (UID: \"8cd110a9-8d4a-4c00-9b94-1a2cb117d463\") " pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:09.047297 master-0 kubenswrapper[29097]: I0312 18:48:09.038553 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cd110a9-8d4a-4c00-9b94-1a2cb117d463-config-data\") pod \"placement-5664b69d46-kq48m\" (UID: \"8cd110a9-8d4a-4c00-9b94-1a2cb117d463\") " pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:09.050574 master-0 kubenswrapper[29097]: I0312 18:48:09.050321 29097 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-xd7gn\" (UniqueName: \"kubernetes.io/projected/8cd110a9-8d4a-4c00-9b94-1a2cb117d463-kube-api-access-xd7gn\") pod \"placement-5664b69d46-kq48m\" (UID: \"8cd110a9-8d4a-4c00-9b94-1a2cb117d463\") " pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:09.051002 master-0 kubenswrapper[29097]: I0312 18:48:09.050920 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cd110a9-8d4a-4c00-9b94-1a2cb117d463-internal-tls-certs\") pod \"placement-5664b69d46-kq48m\" (UID: \"8cd110a9-8d4a-4c00-9b94-1a2cb117d463\") " pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:09.162501 master-0 kubenswrapper[29097]: I0312 18:48:09.162365 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:09.698923 master-0 kubenswrapper[29097]: I0312 18:48:09.697577 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5664b69d46-kq48m"] Mar 12 18:48:10.561066 master-0 kubenswrapper[29097]: I0312 18:48:10.560991 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5664b69d46-kq48m" event={"ID":"8cd110a9-8d4a-4c00-9b94-1a2cb117d463","Type":"ContainerStarted","Data":"af355e35a400fe06bc2789a8b8c768c2bebf6c6dc2073b350fee7a11960cd9f6"} Mar 12 18:48:10.567656 master-0 kubenswrapper[29097]: I0312 18:48:10.561081 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5664b69d46-kq48m" event={"ID":"8cd110a9-8d4a-4c00-9b94-1a2cb117d463","Type":"ContainerStarted","Data":"c5e0083a82fd9b0411c41b0b19ae1d2a3bfd75dee476d8c0dd26388158524e4f"} Mar 12 18:48:10.567656 master-0 kubenswrapper[29097]: I0312 18:48:10.561097 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5664b69d46-kq48m" 
event={"ID":"8cd110a9-8d4a-4c00-9b94-1a2cb117d463","Type":"ContainerStarted","Data":"cd772e97c2ad2f866c8abce9a3339c5b0e1a13eba58f4bd64fab19ee6f6cc88f"} Mar 12 18:48:10.567656 master-0 kubenswrapper[29097]: I0312 18:48:10.562364 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:10.567656 master-0 kubenswrapper[29097]: I0312 18:48:10.562391 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5664b69d46-kq48m" Mar 12 18:48:10.588097 master-0 kubenswrapper[29097]: I0312 18:48:10.586805 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-fa62f-backup-0" Mar 12 18:48:10.588097 master-0 kubenswrapper[29097]: I0312 18:48:10.587912 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5664b69d46-kq48m" podStartSLOduration=2.587893804 podStartE2EDuration="2.587893804s" podCreationTimestamp="2026-03-12 18:48:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:48:10.583481734 +0000 UTC m=+1130.137461851" watchObservedRunningTime="2026-03-12 18:48:10.587893804 +0000 UTC m=+1130.141873901" Mar 12 18:48:11.731691 master-0 kubenswrapper[29097]: I0312 18:48:11.731639 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-fa62f-api-0" Mar 12 18:48:12.270297 master-0 kubenswrapper[29097]: I0312 18:48:12.270246 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-fa62f-scheduler-0" Mar 12 18:48:12.483237 master-0 kubenswrapper[29097]: I0312 18:48:12.482591 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" Mar 12 18:48:14.619675 master-0 kubenswrapper[29097]: I0312 18:48:14.619621 29097 generic.go:334] "Generic (PLEG): 
container finished" podID="64b3a2fa-455e-45a6-a3b4-9763b68a8faa" containerID="dae7cadbc74e2168f49f3aaf41d51c9bf3319c2504a8c94bfb9ad5e77a5727ff" exitCode=0 Mar 12 18:48:14.620185 master-0 kubenswrapper[29097]: I0312 18:48:14.619681 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-mzfh7" event={"ID":"64b3a2fa-455e-45a6-a3b4-9763b68a8faa","Type":"ContainerDied","Data":"dae7cadbc74e2168f49f3aaf41d51c9bf3319c2504a8c94bfb9ad5e77a5727ff"} Mar 12 18:48:15.797302 master-0 kubenswrapper[29097]: I0312 18:48:15.797249 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-798795c956-754f2" Mar 12 18:48:15.825148 master-0 kubenswrapper[29097]: I0312 18:48:15.825088 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-fa62f-backup-0" Mar 12 18:48:16.191594 master-0 kubenswrapper[29097]: I0312 18:48:16.191328 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-mzfh7" Mar 12 18:48:16.290736 master-0 kubenswrapper[29097]: I0312 18:48:16.290667 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-config-data-merged\") pod \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " Mar 12 18:48:16.290980 master-0 kubenswrapper[29097]: I0312 18:48:16.290876 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8tdl\" (UniqueName: \"kubernetes.io/projected/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-kube-api-access-m8tdl\") pod \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " Mar 12 18:48:16.290980 master-0 kubenswrapper[29097]: I0312 18:48:16.290905 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-scripts\") pod \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " Mar 12 18:48:16.290980 master-0 kubenswrapper[29097]: I0312 18:48:16.290921 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-combined-ca-bundle\") pod \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " Mar 12 18:48:16.291084 master-0 kubenswrapper[29097]: I0312 18:48:16.290994 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-etc-podinfo\") pod \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " Mar 12 18:48:16.291084 master-0 kubenswrapper[29097]: I0312 18:48:16.291074 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-config-data\") pod \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\" (UID: \"64b3a2fa-455e-45a6-a3b4-9763b68a8faa\") " Mar 12 18:48:16.292773 master-0 kubenswrapper[29097]: I0312 18:48:16.292686 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "64b3a2fa-455e-45a6-a3b4-9763b68a8faa" (UID: "64b3a2fa-455e-45a6-a3b4-9763b68a8faa"). InnerVolumeSpecName "config-data-merged". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:48:16.312118 master-0 kubenswrapper[29097]: I0312 18:48:16.312051 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-kube-api-access-m8tdl" (OuterVolumeSpecName: "kube-api-access-m8tdl") pod "64b3a2fa-455e-45a6-a3b4-9763b68a8faa" (UID: "64b3a2fa-455e-45a6-a3b4-9763b68a8faa"). InnerVolumeSpecName "kube-api-access-m8tdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:48:16.314799 master-0 kubenswrapper[29097]: I0312 18:48:16.314743 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-scripts" (OuterVolumeSpecName: "scripts") pod "64b3a2fa-455e-45a6-a3b4-9763b68a8faa" (UID: "64b3a2fa-455e-45a6-a3b4-9763b68a8faa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:48:16.319147 master-0 kubenswrapper[29097]: I0312 18:48:16.318747 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "64b3a2fa-455e-45a6-a3b4-9763b68a8faa" (UID: "64b3a2fa-455e-45a6-a3b4-9763b68a8faa"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 12 18:48:16.328503 master-0 kubenswrapper[29097]: I0312 18:48:16.328456 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-config-data" (OuterVolumeSpecName: "config-data") pod "64b3a2fa-455e-45a6-a3b4-9763b68a8faa" (UID: "64b3a2fa-455e-45a6-a3b4-9763b68a8faa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:48:16.352389 master-0 kubenswrapper[29097]: I0312 18:48:16.352279 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64b3a2fa-455e-45a6-a3b4-9763b68a8faa" (UID: "64b3a2fa-455e-45a6-a3b4-9763b68a8faa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:48:16.393144 master-0 kubenswrapper[29097]: I0312 18:48:16.393091 29097 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-config-data-merged\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:16.393144 master-0 kubenswrapper[29097]: I0312 18:48:16.393131 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8tdl\" (UniqueName: \"kubernetes.io/projected/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-kube-api-access-m8tdl\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:16.393144 master-0 kubenswrapper[29097]: I0312 18:48:16.393144 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:16.393144 master-0 kubenswrapper[29097]: I0312 18:48:16.393154 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:16.393453 master-0 kubenswrapper[29097]: I0312 18:48:16.393162 29097 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:16.393453 master-0 
kubenswrapper[29097]: I0312 18:48:16.393171 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64b3a2fa-455e-45a6-a3b4-9763b68a8faa-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:16.490373 master-0 kubenswrapper[29097]: I0312 18:48:16.490318 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-ffbf57c88-5pgzn" Mar 12 18:48:16.658252 master-0 kubenswrapper[29097]: I0312 18:48:16.658182 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-mzfh7" event={"ID":"64b3a2fa-455e-45a6-a3b4-9763b68a8faa","Type":"ContainerDied","Data":"f1c909b3dbae7961faba9e5a0d2f7b6da8b9a95ab844ed7f63b95ddc1c204ac6"} Mar 12 18:48:16.658252 master-0 kubenswrapper[29097]: I0312 18:48:16.658221 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-mzfh7" Mar 12 18:48:16.658252 master-0 kubenswrapper[29097]: I0312 18:48:16.658238 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1c909b3dbae7961faba9e5a0d2f7b6da8b9a95ab844ed7f63b95ddc1c204ac6" Mar 12 18:48:17.072881 master-0 kubenswrapper[29097]: I0312 18:48:17.072822 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-create-d4xb7"] Mar 12 18:48:17.073978 master-0 kubenswrapper[29097]: E0312 18:48:17.073954 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b3a2fa-455e-45a6-a3b4-9763b68a8faa" containerName="init" Mar 12 18:48:17.074027 master-0 kubenswrapper[29097]: I0312 18:48:17.073979 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b3a2fa-455e-45a6-a3b4-9763b68a8faa" containerName="init" Mar 12 18:48:17.074064 master-0 kubenswrapper[29097]: E0312 18:48:17.074037 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b3a2fa-455e-45a6-a3b4-9763b68a8faa" containerName="ironic-db-sync" Mar 12 
18:48:17.074064 master-0 kubenswrapper[29097]: I0312 18:48:17.074045 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b3a2fa-455e-45a6-a3b4-9763b68a8faa" containerName="ironic-db-sync" Mar 12 18:48:17.077815 master-0 kubenswrapper[29097]: I0312 18:48:17.077775 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="64b3a2fa-455e-45a6-a3b4-9763b68a8faa" containerName="ironic-db-sync" Mar 12 18:48:17.081254 master-0 kubenswrapper[29097]: I0312 18:48:17.081230 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-d4xb7" Mar 12 18:48:17.119674 master-0 kubenswrapper[29097]: I0312 18:48:17.118224 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-d4xb7"] Mar 12 18:48:17.146401 master-0 kubenswrapper[29097]: I0312 18:48:17.138351 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz6qt\" (UniqueName: \"kubernetes.io/projected/07e88358-2e50-4143-beec-5f4a0698dcca-kube-api-access-dz6qt\") pod \"ironic-inspector-db-create-d4xb7\" (UID: \"07e88358-2e50-4143-beec-5f4a0698dcca\") " pod="openstack/ironic-inspector-db-create-d4xb7" Mar 12 18:48:17.146401 master-0 kubenswrapper[29097]: I0312 18:48:17.138474 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e88358-2e50-4143-beec-5f4a0698dcca-operator-scripts\") pod \"ironic-inspector-db-create-d4xb7\" (UID: \"07e88358-2e50-4143-beec-5f4a0698dcca\") " pod="openstack/ironic-inspector-db-create-d4xb7" Mar 12 18:48:17.232185 master-0 kubenswrapper[29097]: I0312 18:48:17.227894 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9c57cd77c-r4fqr"] Mar 12 18:48:17.232185 master-0 kubenswrapper[29097]: I0312 18:48:17.229855 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-9c57cd77c-r4fqr"] Mar 12 18:48:17.232185 master-0 kubenswrapper[29097]: I0312 18:48:17.229943 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" Mar 12 18:48:17.241821 master-0 kubenswrapper[29097]: I0312 18:48:17.240209 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz6qt\" (UniqueName: \"kubernetes.io/projected/07e88358-2e50-4143-beec-5f4a0698dcca-kube-api-access-dz6qt\") pod \"ironic-inspector-db-create-d4xb7\" (UID: \"07e88358-2e50-4143-beec-5f4a0698dcca\") " pod="openstack/ironic-inspector-db-create-d4xb7" Mar 12 18:48:17.241821 master-0 kubenswrapper[29097]: I0312 18:48:17.240275 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e88358-2e50-4143-beec-5f4a0698dcca-operator-scripts\") pod \"ironic-inspector-db-create-d4xb7\" (UID: \"07e88358-2e50-4143-beec-5f4a0698dcca\") " pod="openstack/ironic-inspector-db-create-d4xb7" Mar 12 18:48:17.241821 master-0 kubenswrapper[29097]: I0312 18:48:17.241012 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e88358-2e50-4143-beec-5f4a0698dcca-operator-scripts\") pod \"ironic-inspector-db-create-d4xb7\" (UID: \"07e88358-2e50-4143-beec-5f4a0698dcca\") " pod="openstack/ironic-inspector-db-create-d4xb7" Mar 12 18:48:17.257845 master-0 kubenswrapper[29097]: I0312 18:48:17.257795 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-cea2-account-create-update-8lqsg"] Mar 12 18:48:17.260069 master-0 kubenswrapper[29097]: I0312 18:48:17.260039 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-cea2-account-create-update-8lqsg" Mar 12 18:48:17.270294 master-0 kubenswrapper[29097]: I0312 18:48:17.265545 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret" Mar 12 18:48:17.270294 master-0 kubenswrapper[29097]: I0312 18:48:17.268374 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-neutron-agent-7cb69d965b-d79tc"] Mar 12 18:48:17.270294 master-0 kubenswrapper[29097]: I0312 18:48:17.269849 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" Mar 12 18:48:17.272701 master-0 kubenswrapper[29097]: I0312 18:48:17.272663 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data" Mar 12 18:48:17.304782 master-0 kubenswrapper[29097]: I0312 18:48:17.304663 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz6qt\" (UniqueName: \"kubernetes.io/projected/07e88358-2e50-4143-beec-5f4a0698dcca-kube-api-access-dz6qt\") pod \"ironic-inspector-db-create-d4xb7\" (UID: \"07e88358-2e50-4143-beec-5f4a0698dcca\") " pod="openstack/ironic-inspector-db-create-d4xb7" Mar 12 18:48:17.347205 master-0 kubenswrapper[29097]: I0312 18:48:17.347014 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-7cb69d965b-d79tc"] Mar 12 18:48:17.369609 master-0 kubenswrapper[29097]: I0312 18:48:17.368892 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-cea2-account-create-update-8lqsg"] Mar 12 18:48:17.433734 master-0 kubenswrapper[29097]: I0312 18:48:17.432970 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-55ccfbf469-qsbxt"] Mar 12 18:48:17.441242 master-0 kubenswrapper[29097]: I0312 18:48:17.438156 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.441242 master-0 kubenswrapper[29097]: I0312 18:48:17.440947 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Mar 12 18:48:17.441242 master-0 kubenswrapper[29097]: I0312 18:48:17.441089 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data" Mar 12 18:48:17.441493 master-0 kubenswrapper[29097]: I0312 18:48:17.441266 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport" Mar 12 18:48:17.441493 master-0 kubenswrapper[29097]: I0312 18:48:17.441413 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 12 18:48:17.441767 master-0 kubenswrapper[29097]: I0312 18:48:17.441736 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts" Mar 12 18:48:17.443210 master-0 kubenswrapper[29097]: I0312 18:48:17.443178 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47h8x\" (UniqueName: \"kubernetes.io/projected/87a19dc7-5415-4d3d-a22e-9e2524a67e38-kube-api-access-47h8x\") pod \"ironic-neutron-agent-7cb69d965b-d79tc\" (UID: \"87a19dc7-5415-4d3d-a22e-9e2524a67e38\") " pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" Mar 12 18:48:17.443286 master-0 kubenswrapper[29097]: I0312 18:48:17.443223 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmhr2\" (UniqueName: \"kubernetes.io/projected/5d607db3-da18-458f-b364-187dc0ddc676-kube-api-access-dmhr2\") pod \"ironic-inspector-cea2-account-create-update-8lqsg\" (UID: \"5d607db3-da18-458f-b364-187dc0ddc676\") " pod="openstack/ironic-inspector-cea2-account-create-update-8lqsg" Mar 12 18:48:17.443286 master-0 kubenswrapper[29097]: I0312 18:48:17.443244 29097 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a19dc7-5415-4d3d-a22e-9e2524a67e38-combined-ca-bundle\") pod \"ironic-neutron-agent-7cb69d965b-d79tc\" (UID: \"87a19dc7-5415-4d3d-a22e-9e2524a67e38\") " pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" Mar 12 18:48:17.443286 master-0 kubenswrapper[29097]: I0312 18:48:17.443265 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bqz8\" (UniqueName: \"kubernetes.io/projected/1c556c93-e908-407c-9f50-ad291a0bb179-kube-api-access-5bqz8\") pod \"dnsmasq-dns-9c57cd77c-r4fqr\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" Mar 12 18:48:17.443381 master-0 kubenswrapper[29097]: I0312 18:48:17.443299 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-ovsdbserver-sb\") pod \"dnsmasq-dns-9c57cd77c-r4fqr\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" Mar 12 18:48:17.443416 master-0 kubenswrapper[29097]: I0312 18:48:17.443389 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/87a19dc7-5415-4d3d-a22e-9e2524a67e38-config\") pod \"ironic-neutron-agent-7cb69d965b-d79tc\" (UID: \"87a19dc7-5415-4d3d-a22e-9e2524a67e38\") " pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" Mar 12 18:48:17.443450 master-0 kubenswrapper[29097]: I0312 18:48:17.443435 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-dns-swift-storage-0\") pod \"dnsmasq-dns-9c57cd77c-r4fqr\" (UID: 
\"1c556c93-e908-407c-9f50-ad291a0bb179\") " pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" Mar 12 18:48:17.444249 master-0 kubenswrapper[29097]: I0312 18:48:17.443551 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-config\") pod \"dnsmasq-dns-9c57cd77c-r4fqr\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" Mar 12 18:48:17.444249 master-0 kubenswrapper[29097]: I0312 18:48:17.443610 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-ovsdbserver-nb\") pod \"dnsmasq-dns-9c57cd77c-r4fqr\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" Mar 12 18:48:17.444249 master-0 kubenswrapper[29097]: I0312 18:48:17.443682 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d607db3-da18-458f-b364-187dc0ddc676-operator-scripts\") pod \"ironic-inspector-cea2-account-create-update-8lqsg\" (UID: \"5d607db3-da18-458f-b364-187dc0ddc676\") " pod="openstack/ironic-inspector-cea2-account-create-update-8lqsg" Mar 12 18:48:17.444249 master-0 kubenswrapper[29097]: I0312 18:48:17.443704 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-dns-svc\") pod \"dnsmasq-dns-9c57cd77c-r4fqr\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" Mar 12 18:48:17.456154 master-0 kubenswrapper[29097]: I0312 18:48:17.456104 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-d4xb7" Mar 12 18:48:17.472498 master-0 kubenswrapper[29097]: I0312 18:48:17.472448 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-55ccfbf469-qsbxt"] Mar 12 18:48:17.546988 master-0 kubenswrapper[29097]: I0312 18:48:17.546910 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d607db3-da18-458f-b364-187dc0ddc676-operator-scripts\") pod \"ironic-inspector-cea2-account-create-update-8lqsg\" (UID: \"5d607db3-da18-458f-b364-187dc0ddc676\") " pod="openstack/ironic-inspector-cea2-account-create-update-8lqsg" Mar 12 18:48:17.546988 master-0 kubenswrapper[29097]: I0312 18:48:17.546973 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-dns-svc\") pod \"dnsmasq-dns-9c57cd77c-r4fqr\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" Mar 12 18:48:17.547235 master-0 kubenswrapper[29097]: I0312 18:48:17.547006 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhmlp\" (UniqueName: \"kubernetes.io/projected/1b2854ab-857a-4029-b03e-470c4452693e-kube-api-access-fhmlp\") pod \"ironic-55ccfbf469-qsbxt\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.547235 master-0 kubenswrapper[29097]: I0312 18:48:17.547053 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47h8x\" (UniqueName: \"kubernetes.io/projected/87a19dc7-5415-4d3d-a22e-9e2524a67e38-kube-api-access-47h8x\") pod \"ironic-neutron-agent-7cb69d965b-d79tc\" (UID: \"87a19dc7-5415-4d3d-a22e-9e2524a67e38\") " pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" Mar 12 18:48:17.547235 master-0 kubenswrapper[29097]: 
I0312 18:48:17.547075 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-config-data\") pod \"ironic-55ccfbf469-qsbxt\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.547235 master-0 kubenswrapper[29097]: I0312 18:48:17.547099 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmhr2\" (UniqueName: \"kubernetes.io/projected/5d607db3-da18-458f-b364-187dc0ddc676-kube-api-access-dmhr2\") pod \"ironic-inspector-cea2-account-create-update-8lqsg\" (UID: \"5d607db3-da18-458f-b364-187dc0ddc676\") " pod="openstack/ironic-inspector-cea2-account-create-update-8lqsg" Mar 12 18:48:17.547235 master-0 kubenswrapper[29097]: I0312 18:48:17.547115 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-config-data-custom\") pod \"ironic-55ccfbf469-qsbxt\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.547235 master-0 kubenswrapper[29097]: I0312 18:48:17.547136 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a19dc7-5415-4d3d-a22e-9e2524a67e38-combined-ca-bundle\") pod \"ironic-neutron-agent-7cb69d965b-d79tc\" (UID: \"87a19dc7-5415-4d3d-a22e-9e2524a67e38\") " pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" Mar 12 18:48:17.547418 master-0 kubenswrapper[29097]: I0312 18:48:17.547220 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b2854ab-857a-4029-b03e-470c4452693e-logs\") pod \"ironic-55ccfbf469-qsbxt\" (UID: 
\"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.547418 master-0 kubenswrapper[29097]: I0312 18:48:17.547275 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bqz8\" (UniqueName: \"kubernetes.io/projected/1c556c93-e908-407c-9f50-ad291a0bb179-kube-api-access-5bqz8\") pod \"dnsmasq-dns-9c57cd77c-r4fqr\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" Mar 12 18:48:17.547418 master-0 kubenswrapper[29097]: I0312 18:48:17.547295 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-ovsdbserver-sb\") pod \"dnsmasq-dns-9c57cd77c-r4fqr\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" Mar 12 18:48:17.547524 master-0 kubenswrapper[29097]: I0312 18:48:17.547423 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/87a19dc7-5415-4d3d-a22e-9e2524a67e38-config\") pod \"ironic-neutron-agent-7cb69d965b-d79tc\" (UID: \"87a19dc7-5415-4d3d-a22e-9e2524a67e38\") " pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" Mar 12 18:48:17.547524 master-0 kubenswrapper[29097]: I0312 18:48:17.547479 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1b2854ab-857a-4029-b03e-470c4452693e-config-data-merged\") pod \"ironic-55ccfbf469-qsbxt\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.547613 master-0 kubenswrapper[29097]: I0312 18:48:17.547532 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-dns-swift-storage-0\") pod \"dnsmasq-dns-9c57cd77c-r4fqr\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" Mar 12 18:48:17.547613 master-0 kubenswrapper[29097]: I0312 18:48:17.547595 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-config\") pod \"dnsmasq-dns-9c57cd77c-r4fqr\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" Mar 12 18:48:17.547675 master-0 kubenswrapper[29097]: I0312 18:48:17.547615 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-scripts\") pod \"ironic-55ccfbf469-qsbxt\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.547717 master-0 kubenswrapper[29097]: I0312 18:48:17.547679 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1b2854ab-857a-4029-b03e-470c4452693e-etc-podinfo\") pod \"ironic-55ccfbf469-qsbxt\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.547717 master-0 kubenswrapper[29097]: I0312 18:48:17.547704 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-combined-ca-bundle\") pod \"ironic-55ccfbf469-qsbxt\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.547780 master-0 kubenswrapper[29097]: I0312 18:48:17.547769 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-ovsdbserver-nb\") pod \"dnsmasq-dns-9c57cd77c-r4fqr\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" Mar 12 18:48:17.548781 master-0 kubenswrapper[29097]: I0312 18:48:17.548544 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-ovsdbserver-nb\") pod \"dnsmasq-dns-9c57cd77c-r4fqr\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" Mar 12 18:48:17.548781 master-0 kubenswrapper[29097]: I0312 18:48:17.548640 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d607db3-da18-458f-b364-187dc0ddc676-operator-scripts\") pod \"ironic-inspector-cea2-account-create-update-8lqsg\" (UID: \"5d607db3-da18-458f-b364-187dc0ddc676\") " pod="openstack/ironic-inspector-cea2-account-create-update-8lqsg" Mar 12 18:48:17.548896 master-0 kubenswrapper[29097]: I0312 18:48:17.548848 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-ovsdbserver-sb\") pod \"dnsmasq-dns-9c57cd77c-r4fqr\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" Mar 12 18:48:17.549499 master-0 kubenswrapper[29097]: I0312 18:48:17.549473 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-dns-svc\") pod \"dnsmasq-dns-9c57cd77c-r4fqr\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" Mar 12 18:48:17.552491 master-0 kubenswrapper[29097]: I0312 18:48:17.550252 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-dns-swift-storage-0\") pod \"dnsmasq-dns-9c57cd77c-r4fqr\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" Mar 12 18:48:17.552491 master-0 kubenswrapper[29097]: I0312 18:48:17.550408 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-config\") pod \"dnsmasq-dns-9c57cd77c-r4fqr\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" Mar 12 18:48:17.552491 master-0 kubenswrapper[29097]: I0312 18:48:17.552362 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a19dc7-5415-4d3d-a22e-9e2524a67e38-combined-ca-bundle\") pod \"ironic-neutron-agent-7cb69d965b-d79tc\" (UID: \"87a19dc7-5415-4d3d-a22e-9e2524a67e38\") " pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" Mar 12 18:48:17.564264 master-0 kubenswrapper[29097]: I0312 18:48:17.563337 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/87a19dc7-5415-4d3d-a22e-9e2524a67e38-config\") pod \"ironic-neutron-agent-7cb69d965b-d79tc\" (UID: \"87a19dc7-5415-4d3d-a22e-9e2524a67e38\") " pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" Mar 12 18:48:17.567001 master-0 kubenswrapper[29097]: I0312 18:48:17.566964 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmhr2\" (UniqueName: \"kubernetes.io/projected/5d607db3-da18-458f-b364-187dc0ddc676-kube-api-access-dmhr2\") pod \"ironic-inspector-cea2-account-create-update-8lqsg\" (UID: \"5d607db3-da18-458f-b364-187dc0ddc676\") " pod="openstack/ironic-inspector-cea2-account-create-update-8lqsg" Mar 12 18:48:17.568002 master-0 kubenswrapper[29097]: I0312 18:48:17.567969 29097 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47h8x\" (UniqueName: \"kubernetes.io/projected/87a19dc7-5415-4d3d-a22e-9e2524a67e38-kube-api-access-47h8x\") pod \"ironic-neutron-agent-7cb69d965b-d79tc\" (UID: \"87a19dc7-5415-4d3d-a22e-9e2524a67e38\") " pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" Mar 12 18:48:17.570397 master-0 kubenswrapper[29097]: I0312 18:48:17.570293 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bqz8\" (UniqueName: \"kubernetes.io/projected/1c556c93-e908-407c-9f50-ad291a0bb179-kube-api-access-5bqz8\") pod \"dnsmasq-dns-9c57cd77c-r4fqr\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" Mar 12 18:48:17.605559 master-0 kubenswrapper[29097]: I0312 18:48:17.605244 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" Mar 12 18:48:17.651981 master-0 kubenswrapper[29097]: I0312 18:48:17.649875 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-config-data\") pod \"ironic-55ccfbf469-qsbxt\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.651981 master-0 kubenswrapper[29097]: I0312 18:48:17.649926 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-config-data-custom\") pod \"ironic-55ccfbf469-qsbxt\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.651981 master-0 kubenswrapper[29097]: I0312 18:48:17.649946 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b2854ab-857a-4029-b03e-470c4452693e-logs\") pod 
\"ironic-55ccfbf469-qsbxt\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.651981 master-0 kubenswrapper[29097]: I0312 18:48:17.650015 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1b2854ab-857a-4029-b03e-470c4452693e-config-data-merged\") pod \"ironic-55ccfbf469-qsbxt\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.651981 master-0 kubenswrapper[29097]: I0312 18:48:17.650048 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-scripts\") pod \"ironic-55ccfbf469-qsbxt\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.651981 master-0 kubenswrapper[29097]: I0312 18:48:17.650065 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1b2854ab-857a-4029-b03e-470c4452693e-etc-podinfo\") pod \"ironic-55ccfbf469-qsbxt\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.651981 master-0 kubenswrapper[29097]: I0312 18:48:17.650087 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-combined-ca-bundle\") pod \"ironic-55ccfbf469-qsbxt\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.651981 master-0 kubenswrapper[29097]: I0312 18:48:17.650175 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhmlp\" (UniqueName: \"kubernetes.io/projected/1b2854ab-857a-4029-b03e-470c4452693e-kube-api-access-fhmlp\") pod \"ironic-55ccfbf469-qsbxt\" 
(UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.651981 master-0 kubenswrapper[29097]: I0312 18:48:17.651254 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1b2854ab-857a-4029-b03e-470c4452693e-config-data-merged\") pod \"ironic-55ccfbf469-qsbxt\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.652643 master-0 kubenswrapper[29097]: I0312 18:48:17.652626 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b2854ab-857a-4029-b03e-470c4452693e-logs\") pod \"ironic-55ccfbf469-qsbxt\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.657394 master-0 kubenswrapper[29097]: I0312 18:48:17.654073 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-config-data-custom\") pod \"ironic-55ccfbf469-qsbxt\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.657394 master-0 kubenswrapper[29097]: I0312 18:48:17.654881 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-config-data\") pod \"ironic-55ccfbf469-qsbxt\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.657871 master-0 kubenswrapper[29097]: I0312 18:48:17.657852 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1b2854ab-857a-4029-b03e-470c4452693e-etc-podinfo\") pod \"ironic-55ccfbf469-qsbxt\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 
12 18:48:17.660635 master-0 kubenswrapper[29097]: I0312 18:48:17.659999 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-combined-ca-bundle\") pod \"ironic-55ccfbf469-qsbxt\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.662696 master-0 kubenswrapper[29097]: I0312 18:48:17.662493 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-scripts\") pod \"ironic-55ccfbf469-qsbxt\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.682862 master-0 kubenswrapper[29097]: I0312 18:48:17.682814 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhmlp\" (UniqueName: \"kubernetes.io/projected/1b2854ab-857a-4029-b03e-470c4452693e-kube-api-access-fhmlp\") pod \"ironic-55ccfbf469-qsbxt\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") " pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:17.688841 master-0 kubenswrapper[29097]: I0312 18:48:17.688620 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-cea2-account-create-update-8lqsg" Mar 12 18:48:17.697243 master-0 kubenswrapper[29097]: I0312 18:48:17.696399 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" Mar 12 18:48:17.775840 master-0 kubenswrapper[29097]: I0312 18:48:17.775777 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-55ccfbf469-qsbxt" Mar 12 18:48:18.099978 master-0 kubenswrapper[29097]: I0312 18:48:18.099924 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-d4xb7"] Mar 12 18:48:18.248246 master-0 kubenswrapper[29097]: I0312 18:48:18.248182 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9c57cd77c-r4fqr"] Mar 12 18:48:19.037397 master-0 kubenswrapper[29097]: I0312 18:48:19.037355 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-d4xb7" event={"ID":"07e88358-2e50-4143-beec-5f4a0698dcca","Type":"ContainerStarted","Data":"ad72d21227927379fd94951774435abc4d004dc80afd66abf63e9b37eda934c3"} Mar 12 18:48:19.037603 master-0 kubenswrapper[29097]: I0312 18:48:19.037585 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" event={"ID":"1c556c93-e908-407c-9f50-ad291a0bb179","Type":"ContainerStarted","Data":"923f43950678978efcdccf938e10af43e7388f73ffce85a4c92fc076b93b5629"} Mar 12 18:48:19.041472 master-0 kubenswrapper[29097]: I0312 18:48:19.041448 29097 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 18:48:19.078002 master-0 kubenswrapper[29097]: I0312 18:48:19.077963 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-55ccfbf469-qsbxt"] Mar 12 18:48:19.119343 master-0 kubenswrapper[29097]: I0312 18:48:19.117150 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-cea2-account-create-update-8lqsg"] Mar 12 18:48:19.141325 master-0 kubenswrapper[29097]: I0312 18:48:19.140612 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-7cb69d965b-d79tc"] Mar 12 18:48:19.313640 master-0 kubenswrapper[29097]: I0312 18:48:19.313591 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"] Mar 12 
18:48:19.319676 master-0 kubenswrapper[29097]: I0312 18:48:19.319648 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Mar 12 18:48:19.323483 master-0 kubenswrapper[29097]: I0312 18:48:19.323264 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data" Mar 12 18:48:19.323483 master-0 kubenswrapper[29097]: I0312 18:48:19.323457 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts" Mar 12 18:48:19.403012 master-0 kubenswrapper[29097]: I0312 18:48:19.402916 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Mar 12 18:48:19.498927 master-0 kubenswrapper[29097]: I0312 18:48:19.497188 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmjjs\" (UniqueName: \"kubernetes.io/projected/7fb8fbc7-949d-4526-8456-fbf8277cee2f-kube-api-access-pmjjs\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0" Mar 12 18:48:19.498927 master-0 kubenswrapper[29097]: I0312 18:48:19.497309 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7fb8fbc7-949d-4526-8456-fbf8277cee2f-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0" Mar 12 18:48:19.498927 master-0 kubenswrapper[29097]: I0312 18:48:19.497368 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fb8fbc7-949d-4526-8456-fbf8277cee2f-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0" Mar 12 18:48:19.498927 master-0 
kubenswrapper[29097]: I0312 18:48:19.497388 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7fb8fbc7-949d-4526-8456-fbf8277cee2f-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0"
Mar 12 18:48:19.498927 master-0 kubenswrapper[29097]: I0312 18:48:19.497456 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fb8fbc7-949d-4526-8456-fbf8277cee2f-config-data\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0"
Mar 12 18:48:19.498927 master-0 kubenswrapper[29097]: I0312 18:48:19.497479 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c33be201-c751-466a-b6c7-9d1932d839d6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ca740de5-7885-4133-806a-587c0c8c0400\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0"
Mar 12 18:48:19.498927 master-0 kubenswrapper[29097]: I0312 18:48:19.497536 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fb8fbc7-949d-4526-8456-fbf8277cee2f-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0"
Mar 12 18:48:19.498927 master-0 kubenswrapper[29097]: I0312 18:48:19.497729 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fb8fbc7-949d-4526-8456-fbf8277cee2f-scripts\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0"
Mar 12 18:48:19.601574 master-0 kubenswrapper[29097]: I0312 18:48:19.599189 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fb8fbc7-949d-4526-8456-fbf8277cee2f-scripts\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0"
Mar 12 18:48:19.601574 master-0 kubenswrapper[29097]: I0312 18:48:19.599351 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmjjs\" (UniqueName: \"kubernetes.io/projected/7fb8fbc7-949d-4526-8456-fbf8277cee2f-kube-api-access-pmjjs\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0"
Mar 12 18:48:19.601574 master-0 kubenswrapper[29097]: I0312 18:48:19.599828 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7fb8fbc7-949d-4526-8456-fbf8277cee2f-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0"
Mar 12 18:48:19.601574 master-0 kubenswrapper[29097]: I0312 18:48:19.600221 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7fb8fbc7-949d-4526-8456-fbf8277cee2f-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0"
Mar 12 18:48:19.601574 master-0 kubenswrapper[29097]: I0312 18:48:19.600278 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fb8fbc7-949d-4526-8456-fbf8277cee2f-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0"
Mar 12 18:48:19.601574 master-0 kubenswrapper[29097]: I0312 18:48:19.600296 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7fb8fbc7-949d-4526-8456-fbf8277cee2f-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0"
Mar 12 18:48:19.601574 master-0 kubenswrapper[29097]: I0312 18:48:19.600906 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fb8fbc7-949d-4526-8456-fbf8277cee2f-config-data\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0"
Mar 12 18:48:19.601574 master-0 kubenswrapper[29097]: I0312 18:48:19.600940 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c33be201-c751-466a-b6c7-9d1932d839d6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ca740de5-7885-4133-806a-587c0c8c0400\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0"
Mar 12 18:48:19.601574 master-0 kubenswrapper[29097]: I0312 18:48:19.600970 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fb8fbc7-949d-4526-8456-fbf8277cee2f-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0"
Mar 12 18:48:19.604027 master-0 kubenswrapper[29097]: I0312 18:48:19.603981 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fb8fbc7-949d-4526-8456-fbf8277cee2f-scripts\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0"
Mar 12 18:48:19.604142 master-0 kubenswrapper[29097]: I0312 18:48:19.604007 29097 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 12 18:48:19.604142 master-0 kubenswrapper[29097]: I0312 18:48:19.604075 29097 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c33be201-c751-466a-b6c7-9d1932d839d6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ca740de5-7885-4133-806a-587c0c8c0400\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/05c43a04ed87efea0a588b3aba2ae41575d23e016a5c73901c46d91be7273a41/globalmount\"" pod="openstack/ironic-conductor-0"
Mar 12 18:48:19.604361 master-0 kubenswrapper[29097]: I0312 18:48:19.604332 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7fb8fbc7-949d-4526-8456-fbf8277cee2f-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0"
Mar 12 18:48:19.604612 master-0 kubenswrapper[29097]: I0312 18:48:19.604460 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fb8fbc7-949d-4526-8456-fbf8277cee2f-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0"
Mar 12 18:48:19.606220 master-0 kubenswrapper[29097]: I0312 18:48:19.606184 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7fb8fbc7-949d-4526-8456-fbf8277cee2f-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0"
Mar 12 18:48:19.615606 master-0 kubenswrapper[29097]: I0312 18:48:19.615557 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fb8fbc7-949d-4526-8456-fbf8277cee2f-config-data\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0"
Mar 12 18:48:19.617988 master-0 kubenswrapper[29097]: I0312 18:48:19.617944 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmjjs\" (UniqueName: \"kubernetes.io/projected/7fb8fbc7-949d-4526-8456-fbf8277cee2f-kube-api-access-pmjjs\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0"
Mar 12 18:48:20.056710 master-0 kubenswrapper[29097]: I0312 18:48:20.054026 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" event={"ID":"87a19dc7-5415-4d3d-a22e-9e2524a67e38","Type":"ContainerStarted","Data":"51c6ed6db947ead7e48da11e4f62708197d0b2f048105085183a10c043b513cf"}
Mar 12 18:48:20.063532 master-0 kubenswrapper[29097]: I0312 18:48:20.061343 29097 generic.go:334] "Generic (PLEG): container finished" podID="1c556c93-e908-407c-9f50-ad291a0bb179" containerID="9bcea6245874411139f2ad2f808a35d8864b7cda959005f74cb0124b883933e3" exitCode=0
Mar 12 18:48:20.063532 master-0 kubenswrapper[29097]: I0312 18:48:20.061427 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" event={"ID":"1c556c93-e908-407c-9f50-ad291a0bb179","Type":"ContainerDied","Data":"9bcea6245874411139f2ad2f808a35d8864b7cda959005f74cb0124b883933e3"}
Mar 12 18:48:20.077541 master-0 kubenswrapper[29097]: I0312 18:48:20.072949 29097 generic.go:334] "Generic (PLEG): container finished" podID="5d607db3-da18-458f-b364-187dc0ddc676" containerID="baae62a5708af0f8a709d427d2e5302c794f168a68ae02a6d6c59479f013c0d6" exitCode=0
Mar 12 18:48:20.077541 master-0 kubenswrapper[29097]: I0312 18:48:20.073143 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-cea2-account-create-update-8lqsg" event={"ID":"5d607db3-da18-458f-b364-187dc0ddc676","Type":"ContainerDied","Data":"baae62a5708af0f8a709d427d2e5302c794f168a68ae02a6d6c59479f013c0d6"}
Mar 12 18:48:20.077541 master-0 kubenswrapper[29097]: I0312 18:48:20.073169 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-cea2-account-create-update-8lqsg" event={"ID":"5d607db3-da18-458f-b364-187dc0ddc676","Type":"ContainerStarted","Data":"02c42f22ae94a7b34caa9cdbb99fa368a85a060a02e3d99282770ef3431b2383"}
Mar 12 18:48:20.077541 master-0 kubenswrapper[29097]: I0312 18:48:20.077459 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-55ccfbf469-qsbxt" event={"ID":"1b2854ab-857a-4029-b03e-470c4452693e","Type":"ContainerStarted","Data":"44cd5045eef6be158a58220799de6c214609df152467126f9f0ac639ed8c2b88"}
Mar 12 18:48:20.083526 master-0 kubenswrapper[29097]: I0312 18:48:20.081157 29097 generic.go:334] "Generic (PLEG): container finished" podID="07e88358-2e50-4143-beec-5f4a0698dcca" containerID="b2d639c08ac1416fee22672c8139a385ca968043a1be1aa301316a25e479ef21" exitCode=0
Mar 12 18:48:20.083526 master-0 kubenswrapper[29097]: I0312 18:48:20.081200 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-d4xb7" event={"ID":"07e88358-2e50-4143-beec-5f4a0698dcca","Type":"ContainerDied","Data":"b2d639c08ac1416fee22672c8139a385ca968043a1be1aa301316a25e479ef21"}
Mar 12 18:48:20.408537 master-0 kubenswrapper[29097]: I0312 18:48:20.408391 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Mar 12 18:48:20.425039 master-0 kubenswrapper[29097]: I0312 18:48:20.424972 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 12 18:48:20.437891 master-0 kubenswrapper[29097]: I0312 18:48:20.435109 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Mar 12 18:48:20.442938 master-0 kubenswrapper[29097]: I0312 18:48:20.439227 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Mar 12 18:48:20.442938 master-0 kubenswrapper[29097]: I0312 18:48:20.439481 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Mar 12 18:48:20.540169 master-0 kubenswrapper[29097]: I0312 18:48:20.538813 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c9a54b83-be30-4fdb-a23c-1ad2ce020453-openstack-config-secret\") pod \"openstackclient\" (UID: \"c9a54b83-be30-4fdb-a23c-1ad2ce020453\") " pod="openstack/openstackclient"
Mar 12 18:48:20.540424 master-0 kubenswrapper[29097]: I0312 18:48:20.540401 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c9a54b83-be30-4fdb-a23c-1ad2ce020453-openstack-config\") pod \"openstackclient\" (UID: \"c9a54b83-be30-4fdb-a23c-1ad2ce020453\") " pod="openstack/openstackclient"
Mar 12 18:48:20.540590 master-0 kubenswrapper[29097]: I0312 18:48:20.540574 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a54b83-be30-4fdb-a23c-1ad2ce020453-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c9a54b83-be30-4fdb-a23c-1ad2ce020453\") " pod="openstack/openstackclient"
Mar 12 18:48:20.540818 master-0 kubenswrapper[29097]: I0312 18:48:20.540801 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l9fw\" (UniqueName: \"kubernetes.io/projected/c9a54b83-be30-4fdb-a23c-1ad2ce020453-kube-api-access-6l9fw\") pod \"openstackclient\" (UID: \"c9a54b83-be30-4fdb-a23c-1ad2ce020453\") " pod="openstack/openstackclient"
Mar 12 18:48:20.649364 master-0 kubenswrapper[29097]: I0312 18:48:20.649295 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c9a54b83-be30-4fdb-a23c-1ad2ce020453-openstack-config-secret\") pod \"openstackclient\" (UID: \"c9a54b83-be30-4fdb-a23c-1ad2ce020453\") " pod="openstack/openstackclient"
Mar 12 18:48:20.651706 master-0 kubenswrapper[29097]: I0312 18:48:20.649386 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c9a54b83-be30-4fdb-a23c-1ad2ce020453-openstack-config\") pod \"openstackclient\" (UID: \"c9a54b83-be30-4fdb-a23c-1ad2ce020453\") " pod="openstack/openstackclient"
Mar 12 18:48:20.651706 master-0 kubenswrapper[29097]: I0312 18:48:20.650417 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c9a54b83-be30-4fdb-a23c-1ad2ce020453-openstack-config\") pod \"openstackclient\" (UID: \"c9a54b83-be30-4fdb-a23c-1ad2ce020453\") " pod="openstack/openstackclient"
Mar 12 18:48:20.651706 master-0 kubenswrapper[29097]: I0312 18:48:20.650439 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a54b83-be30-4fdb-a23c-1ad2ce020453-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c9a54b83-be30-4fdb-a23c-1ad2ce020453\") " pod="openstack/openstackclient"
Mar 12 18:48:20.651706 master-0 kubenswrapper[29097]: I0312 18:48:20.650680 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l9fw\" (UniqueName: \"kubernetes.io/projected/c9a54b83-be30-4fdb-a23c-1ad2ce020453-kube-api-access-6l9fw\") pod \"openstackclient\" (UID: \"c9a54b83-be30-4fdb-a23c-1ad2ce020453\") " pod="openstack/openstackclient"
Mar 12 18:48:20.660280 master-0 kubenswrapper[29097]: I0312 18:48:20.660240 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a54b83-be30-4fdb-a23c-1ad2ce020453-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c9a54b83-be30-4fdb-a23c-1ad2ce020453\") " pod="openstack/openstackclient"
Mar 12 18:48:20.673110 master-0 kubenswrapper[29097]: I0312 18:48:20.672456 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l9fw\" (UniqueName: \"kubernetes.io/projected/c9a54b83-be30-4fdb-a23c-1ad2ce020453-kube-api-access-6l9fw\") pod \"openstackclient\" (UID: \"c9a54b83-be30-4fdb-a23c-1ad2ce020453\") " pod="openstack/openstackclient"
Mar 12 18:48:20.680968 master-0 kubenswrapper[29097]: I0312 18:48:20.680497 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c9a54b83-be30-4fdb-a23c-1ad2ce020453-openstack-config-secret\") pod \"openstackclient\" (UID: \"c9a54b83-be30-4fdb-a23c-1ad2ce020453\") " pod="openstack/openstackclient"
Mar 12 18:48:20.778397 master-0 kubenswrapper[29097]: I0312 18:48:20.778348 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 12 18:48:21.034243 master-0 kubenswrapper[29097]: I0312 18:48:21.034192 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c33be201-c751-466a-b6c7-9d1932d839d6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ca740de5-7885-4133-806a-587c0c8c0400\") pod \"ironic-conductor-0\" (UID: \"7fb8fbc7-949d-4526-8456-fbf8277cee2f\") " pod="openstack/ironic-conductor-0"
Mar 12 18:48:21.099386 master-0 kubenswrapper[29097]: I0312 18:48:21.099323 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" event={"ID":"1c556c93-e908-407c-9f50-ad291a0bb179","Type":"ContainerStarted","Data":"84420a33b4aea2f3c66a83786c8826a0bdc5ecffdcccccef53e00875780f9062"}
Mar 12 18:48:21.100408 master-0 kubenswrapper[29097]: I0312 18:48:21.100381 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr"
Mar 12 18:48:21.240050 master-0 kubenswrapper[29097]: E0312 18:48:21.238029 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547
Mar 12 18:48:21.319370 master-0 kubenswrapper[29097]: I0312 18:48:21.319089 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0"
Mar 12 18:48:21.781696 master-0 kubenswrapper[29097]: I0312 18:48:21.781616 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" podStartSLOduration=4.781597805 podStartE2EDuration="4.781597805s" podCreationTimestamp="2026-03-12 18:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:48:21.134809244 +0000 UTC m=+1140.688789341" watchObservedRunningTime="2026-03-12 18:48:21.781597805 +0000 UTC m=+1141.335577902"
Mar 12 18:48:21.793534 master-0 kubenswrapper[29097]: I0312 18:48:21.791642 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-6b755b479c-nl884"]
Mar 12 18:48:21.794927 master-0 kubenswrapper[29097]: I0312 18:48:21.793889 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:21.802530 master-0 kubenswrapper[29097]: I0312 18:48:21.799594 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-public-svc"
Mar 12 18:48:21.818529 master-0 kubenswrapper[29097]: I0312 18:48:21.815286 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc"
Mar 12 18:48:21.852533 master-0 kubenswrapper[29097]: I0312 18:48:21.841762 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-6b755b479c-nl884"]
Mar 12 18:48:21.905533 master-0 kubenswrapper[29097]: I0312 18:48:21.901856 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/734222f4-1b3b-4cb5-9ad3-029a54640f81-combined-ca-bundle\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:21.905533 master-0 kubenswrapper[29097]: I0312 18:48:21.902008 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/734222f4-1b3b-4cb5-9ad3-029a54640f81-logs\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:21.905533 master-0 kubenswrapper[29097]: I0312 18:48:21.902034 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/734222f4-1b3b-4cb5-9ad3-029a54640f81-etc-podinfo\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:21.905533 master-0 kubenswrapper[29097]: I0312 18:48:21.902099 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/734222f4-1b3b-4cb5-9ad3-029a54640f81-internal-tls-certs\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:21.905533 master-0 kubenswrapper[29097]: I0312 18:48:21.902164 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/734222f4-1b3b-4cb5-9ad3-029a54640f81-config-data-merged\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:21.905533 master-0 kubenswrapper[29097]: I0312 18:48:21.902193 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/734222f4-1b3b-4cb5-9ad3-029a54640f81-public-tls-certs\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:21.905533 master-0 kubenswrapper[29097]: I0312 18:48:21.902214 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7bcm\" (UniqueName: \"kubernetes.io/projected/734222f4-1b3b-4cb5-9ad3-029a54640f81-kube-api-access-n7bcm\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:21.905533 master-0 kubenswrapper[29097]: I0312 18:48:21.902233 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/734222f4-1b3b-4cb5-9ad3-029a54640f81-scripts\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:21.905533 master-0 kubenswrapper[29097]: I0312 18:48:21.902263 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/734222f4-1b3b-4cb5-9ad3-029a54640f81-config-data-custom\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:21.905533 master-0 kubenswrapper[29097]: I0312 18:48:21.902282 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/734222f4-1b3b-4cb5-9ad3-029a54640f81-config-data\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:22.006471 master-0 kubenswrapper[29097]: I0312 18:48:22.006414 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/734222f4-1b3b-4cb5-9ad3-029a54640f81-config-data-merged\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:22.006701 master-0 kubenswrapper[29097]: I0312 18:48:22.006496 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/734222f4-1b3b-4cb5-9ad3-029a54640f81-public-tls-certs\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:22.006701 master-0 kubenswrapper[29097]: I0312 18:48:22.006537 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7bcm\" (UniqueName: \"kubernetes.io/projected/734222f4-1b3b-4cb5-9ad3-029a54640f81-kube-api-access-n7bcm\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:22.006701 master-0 kubenswrapper[29097]: I0312 18:48:22.006560 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/734222f4-1b3b-4cb5-9ad3-029a54640f81-scripts\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:22.006701 master-0 kubenswrapper[29097]: I0312 18:48:22.006592 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/734222f4-1b3b-4cb5-9ad3-029a54640f81-config-data-custom\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:22.006701 master-0 kubenswrapper[29097]: I0312 18:48:22.006609 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/734222f4-1b3b-4cb5-9ad3-029a54640f81-config-data\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:22.006701 master-0 kubenswrapper[29097]: I0312 18:48:22.006688 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/734222f4-1b3b-4cb5-9ad3-029a54640f81-combined-ca-bundle\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:22.006915 master-0 kubenswrapper[29097]: I0312 18:48:22.006803 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/734222f4-1b3b-4cb5-9ad3-029a54640f81-logs\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:22.006915 master-0 kubenswrapper[29097]: I0312 18:48:22.006835 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/734222f4-1b3b-4cb5-9ad3-029a54640f81-etc-podinfo\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:22.006915 master-0 kubenswrapper[29097]: I0312 18:48:22.006862 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/734222f4-1b3b-4cb5-9ad3-029a54640f81-internal-tls-certs\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:22.009055 master-0 kubenswrapper[29097]: I0312 18:48:22.009000 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/734222f4-1b3b-4cb5-9ad3-029a54640f81-logs\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:22.010862 master-0 kubenswrapper[29097]: I0312 18:48:22.010825 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/734222f4-1b3b-4cb5-9ad3-029a54640f81-config-data-merged\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:22.028428 master-0 kubenswrapper[29097]: I0312 18:48:22.024606 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7bcm\" (UniqueName: \"kubernetes.io/projected/734222f4-1b3b-4cb5-9ad3-029a54640f81-kube-api-access-n7bcm\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:22.028428 master-0 kubenswrapper[29097]: I0312 18:48:22.025002 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/734222f4-1b3b-4cb5-9ad3-029a54640f81-etc-podinfo\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:22.028428 master-0 kubenswrapper[29097]: I0312 18:48:22.027971 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/734222f4-1b3b-4cb5-9ad3-029a54640f81-internal-tls-certs\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:22.032438 master-0 kubenswrapper[29097]: I0312 18:48:22.029243 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/734222f4-1b3b-4cb5-9ad3-029a54640f81-config-data-custom\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:22.032438 master-0 kubenswrapper[29097]: I0312 18:48:22.032187 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/734222f4-1b3b-4cb5-9ad3-029a54640f81-scripts\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:22.034463 master-0 kubenswrapper[29097]: I0312 18:48:22.033184 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/734222f4-1b3b-4cb5-9ad3-029a54640f81-config-data\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:22.048537 master-0 kubenswrapper[29097]: I0312 18:48:22.042496 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/734222f4-1b3b-4cb5-9ad3-029a54640f81-combined-ca-bundle\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:22.048537 master-0 kubenswrapper[29097]: I0312 18:48:22.046172 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/734222f4-1b3b-4cb5-9ad3-029a54640f81-public-tls-certs\") pod \"ironic-6b755b479c-nl884\" (UID: \"734222f4-1b3b-4cb5-9ad3-029a54640f81\") " pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:22.157332 master-0 kubenswrapper[29097]: I0312 18:48:22.157269 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-6b755b479c-nl884"
Mar 12 18:48:22.464859 master-0 kubenswrapper[29097]: I0312 18:48:22.464819 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-d4xb7"
Mar 12 18:48:22.473431 master-0 kubenswrapper[29097]: I0312 18:48:22.473335 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-cea2-account-create-update-8lqsg"
Mar 12 18:48:22.624914 master-0 kubenswrapper[29097]: I0312 18:48:22.624790 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d607db3-da18-458f-b364-187dc0ddc676-operator-scripts\") pod \"5d607db3-da18-458f-b364-187dc0ddc676\" (UID: \"5d607db3-da18-458f-b364-187dc0ddc676\") "
Mar 12 18:48:22.625313 master-0 kubenswrapper[29097]: I0312 18:48:22.625261 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz6qt\" (UniqueName: \"kubernetes.io/projected/07e88358-2e50-4143-beec-5f4a0698dcca-kube-api-access-dz6qt\") pod \"07e88358-2e50-4143-beec-5f4a0698dcca\" (UID: \"07e88358-2e50-4143-beec-5f4a0698dcca\") "
Mar 12 18:48:22.625465 master-0 kubenswrapper[29097]: I0312 18:48:22.625448 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmhr2\" (UniqueName: \"kubernetes.io/projected/5d607db3-da18-458f-b364-187dc0ddc676-kube-api-access-dmhr2\") pod \"5d607db3-da18-458f-b364-187dc0ddc676\" (UID: \"5d607db3-da18-458f-b364-187dc0ddc676\") "
Mar 12 18:48:22.625810 master-0 kubenswrapper[29097]: I0312 18:48:22.625680 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e88358-2e50-4143-beec-5f4a0698dcca-operator-scripts\") pod \"07e88358-2e50-4143-beec-5f4a0698dcca\" (UID: \"07e88358-2e50-4143-beec-5f4a0698dcca\") "
Mar 12 18:48:22.632763 master-0 kubenswrapper[29097]: I0312 18:48:22.632008 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d607db3-da18-458f-b364-187dc0ddc676-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d607db3-da18-458f-b364-187dc0ddc676" (UID: "5d607db3-da18-458f-b364-187dc0ddc676"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:48:22.632763 master-0 kubenswrapper[29097]: I0312 18:48:22.632022 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07e88358-2e50-4143-beec-5f4a0698dcca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07e88358-2e50-4143-beec-5f4a0698dcca" (UID: "07e88358-2e50-4143-beec-5f4a0698dcca"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:48:22.643677 master-0 kubenswrapper[29097]: I0312 18:48:22.643619 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e88358-2e50-4143-beec-5f4a0698dcca-kube-api-access-dz6qt" (OuterVolumeSpecName: "kube-api-access-dz6qt") pod "07e88358-2e50-4143-beec-5f4a0698dcca" (UID: "07e88358-2e50-4143-beec-5f4a0698dcca"). InnerVolumeSpecName "kube-api-access-dz6qt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:48:22.646930 master-0 kubenswrapper[29097]: I0312 18:48:22.646566 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7b649cbbbb-tkhcf"]
Mar 12 18:48:22.646930 master-0 kubenswrapper[29097]: I0312 18:48:22.646796 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d607db3-da18-458f-b364-187dc0ddc676-kube-api-access-dmhr2" (OuterVolumeSpecName: "kube-api-access-dmhr2") pod "5d607db3-da18-458f-b364-187dc0ddc676" (UID: "5d607db3-da18-458f-b364-187dc0ddc676"). InnerVolumeSpecName "kube-api-access-dmhr2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:48:22.662260 master-0 kubenswrapper[29097]: E0312 18:48:22.647691 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d607db3-da18-458f-b364-187dc0ddc676" containerName="mariadb-account-create-update"
Mar 12 18:48:22.662260 master-0 kubenswrapper[29097]: I0312 18:48:22.647714 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d607db3-da18-458f-b364-187dc0ddc676" containerName="mariadb-account-create-update"
Mar 12 18:48:22.662260 master-0 kubenswrapper[29097]: E0312 18:48:22.647750 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e88358-2e50-4143-beec-5f4a0698dcca" containerName="mariadb-database-create"
Mar 12 18:48:22.662260 master-0 kubenswrapper[29097]: I0312 18:48:22.647756 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e88358-2e50-4143-beec-5f4a0698dcca" containerName="mariadb-database-create"
Mar 12 18:48:22.662260 master-0 kubenswrapper[29097]: I0312 18:48:22.647971 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d607db3-da18-458f-b364-187dc0ddc676" containerName="mariadb-account-create-update"
Mar 12 18:48:22.662260 master-0 kubenswrapper[29097]: I0312 18:48:22.648008 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e88358-2e50-4143-beec-5f4a0698dcca" containerName="mariadb-database-create"
Mar 12 18:48:22.662260 master-0 kubenswrapper[29097]: I0312 18:48:22.652085 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7b649cbbbb-tkhcf"
Mar 12 18:48:22.662260 master-0 kubenswrapper[29097]: I0312 18:48:22.656528 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Mar 12 18:48:22.662260 master-0 kubenswrapper[29097]: I0312 18:48:22.657464 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Mar 12 18:48:22.662260 master-0 kubenswrapper[29097]: I0312 18:48:22.657706 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Mar 12 18:48:22.677129 master-0 kubenswrapper[29097]: I0312 18:48:22.676796 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7b649cbbbb-tkhcf"]
Mar 12 18:48:22.733635 master-0 kubenswrapper[29097]: I0312 18:48:22.733598 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d607db3-da18-458f-b364-187dc0ddc676-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:22.733956 master-0 kubenswrapper[29097]: I0312 18:48:22.733941 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz6qt\" (UniqueName: \"kubernetes.io/projected/07e88358-2e50-4143-beec-5f4a0698dcca-kube-api-access-dz6qt\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:22.734054 master-0 kubenswrapper[29097]: I0312 18:48:22.734042 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmhr2\" (UniqueName: \"kubernetes.io/projected/5d607db3-da18-458f-b364-187dc0ddc676-kube-api-access-dmhr2\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:22.734141 master-0 kubenswrapper[29097]: I0312 18:48:22.734130 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e88358-2e50-4143-beec-5f4a0698dcca-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:22.835891
master-0 kubenswrapper[29097]: I0312 18:48:22.835827 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfw4p\" (UniqueName: \"kubernetes.io/projected/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-kube-api-access-zfw4p\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.836433 master-0 kubenswrapper[29097]: I0312 18:48:22.835909 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-internal-tls-certs\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.836433 master-0 kubenswrapper[29097]: I0312 18:48:22.835952 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-etc-swift\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.836433 master-0 kubenswrapper[29097]: I0312 18:48:22.835999 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-run-httpd\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.836433 master-0 kubenswrapper[29097]: I0312 18:48:22.836128 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-public-tls-certs\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: 
\"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.836433 master-0 kubenswrapper[29097]: I0312 18:48:22.836161 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-config-data\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.836433 master-0 kubenswrapper[29097]: I0312 18:48:22.836211 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-combined-ca-bundle\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.836433 master-0 kubenswrapper[29097]: I0312 18:48:22.836310 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-log-httpd\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.940130 master-0 kubenswrapper[29097]: I0312 18:48:22.939972 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-public-tls-certs\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.940130 master-0 kubenswrapper[29097]: I0312 18:48:22.940102 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-config-data\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.940390 master-0 kubenswrapper[29097]: I0312 18:48:22.940164 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-combined-ca-bundle\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.940390 master-0 kubenswrapper[29097]: I0312 18:48:22.940308 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-log-httpd\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.942128 master-0 kubenswrapper[29097]: I0312 18:48:22.941119 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfw4p\" (UniqueName: \"kubernetes.io/projected/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-kube-api-access-zfw4p\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.942128 master-0 kubenswrapper[29097]: I0312 18:48:22.941247 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-internal-tls-certs\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.942128 master-0 kubenswrapper[29097]: I0312 18:48:22.941323 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-etc-swift\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.942128 master-0 kubenswrapper[29097]: I0312 18:48:22.941370 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-run-httpd\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.942128 master-0 kubenswrapper[29097]: I0312 18:48:22.941556 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-log-httpd\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.943760 master-0 kubenswrapper[29097]: I0312 18:48:22.943726 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-run-httpd\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.944625 master-0 kubenswrapper[29097]: I0312 18:48:22.944431 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-public-tls-certs\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.954569 master-0 kubenswrapper[29097]: I0312 18:48:22.949169 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-combined-ca-bundle\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.954569 master-0 kubenswrapper[29097]: I0312 18:48:22.949332 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-internal-tls-certs\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.954569 master-0 kubenswrapper[29097]: I0312 18:48:22.950752 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-etc-swift\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.954569 master-0 kubenswrapper[29097]: I0312 18:48:22.951423 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-config-data\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.959593 master-0 kubenswrapper[29097]: I0312 18:48:22.959307 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfw4p\" (UniqueName: \"kubernetes.io/projected/c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da-kube-api-access-zfw4p\") pod \"swift-proxy-7b649cbbbb-tkhcf\" (UID: \"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da\") " pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:22.998836 master-0 kubenswrapper[29097]: I0312 18:48:22.998778 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:23.144323 master-0 kubenswrapper[29097]: I0312 18:48:23.144209 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-cea2-account-create-update-8lqsg" event={"ID":"5d607db3-da18-458f-b364-187dc0ddc676","Type":"ContainerDied","Data":"02c42f22ae94a7b34caa9cdbb99fa368a85a060a02e3d99282770ef3431b2383"} Mar 12 18:48:23.144323 master-0 kubenswrapper[29097]: I0312 18:48:23.144257 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02c42f22ae94a7b34caa9cdbb99fa368a85a060a02e3d99282770ef3431b2383" Mar 12 18:48:23.144323 master-0 kubenswrapper[29097]: I0312 18:48:23.144325 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-cea2-account-create-update-8lqsg" Mar 12 18:48:23.147575 master-0 kubenswrapper[29097]: I0312 18:48:23.145968 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 12 18:48:23.160838 master-0 kubenswrapper[29097]: I0312 18:48:23.160805 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport" Mar 12 18:48:23.168900 master-0 kubenswrapper[29097]: I0312 18:48:23.168868 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-d4xb7" Mar 12 18:48:23.169334 master-0 kubenswrapper[29097]: I0312 18:48:23.169311 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-d4xb7" event={"ID":"07e88358-2e50-4143-beec-5f4a0698dcca","Type":"ContainerDied","Data":"ad72d21227927379fd94951774435abc4d004dc80afd66abf63e9b37eda934c3"} Mar 12 18:48:23.169417 master-0 kubenswrapper[29097]: I0312 18:48:23.169339 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad72d21227927379fd94951774435abc4d004dc80afd66abf63e9b37eda934c3" Mar 12 18:48:23.391688 master-0 kubenswrapper[29097]: I0312 18:48:23.391615 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8655bff577-lrzbz" Mar 12 18:48:23.491138 master-0 kubenswrapper[29097]: I0312 18:48:23.490668 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-ffbf57c88-5pgzn"] Mar 12 18:48:23.491138 master-0 kubenswrapper[29097]: I0312 18:48:23.491087 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-ffbf57c88-5pgzn" podUID="185eee46-61f6-4cb6-8bed-9d63f1d448cc" containerName="neutron-httpd" containerID="cri-o://478103112f5e27d1ea066ae34a0d3308c8a174e07663a9b84a472fc5f0e2f9a2" gracePeriod=30 Mar 12 18:48:23.491334 master-0 kubenswrapper[29097]: I0312 18:48:23.491278 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-ffbf57c88-5pgzn" podUID="185eee46-61f6-4cb6-8bed-9d63f1d448cc" containerName="neutron-api" containerID="cri-o://7a00d292c2fc00c24eed8e5a385b815ce49f8dc2e1a7b9536068c0e3ac017e0c" gracePeriod=30 Mar 12 18:48:23.704935 master-0 kubenswrapper[29097]: I0312 18:48:23.704695 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-6b755b479c-nl884"] Mar 12 18:48:23.857534 master-0 kubenswrapper[29097]: I0312 18:48:23.852053 29097 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 12 18:48:23.891946 master-0 kubenswrapper[29097]: I0312 18:48:23.861761 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7b649cbbbb-tkhcf"] Mar 12 18:48:24.180094 master-0 kubenswrapper[29097]: I0312 18:48:24.178402 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Mar 12 18:48:24.196867 master-0 kubenswrapper[29097]: I0312 18:48:24.196813 29097 generic.go:334] "Generic (PLEG): container finished" podID="1b2854ab-857a-4029-b03e-470c4452693e" containerID="b1b3f7b56278d3dc5db232f790596113a3c5ac1eeec6727f8c1ca9278a4cb759" exitCode=0 Mar 12 18:48:24.196995 master-0 kubenswrapper[29097]: I0312 18:48:24.196914 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-55ccfbf469-qsbxt" event={"ID":"1b2854ab-857a-4029-b03e-470c4452693e","Type":"ContainerDied","Data":"b1b3f7b56278d3dc5db232f790596113a3c5ac1eeec6727f8c1ca9278a4cb759"} Mar 12 18:48:24.207901 master-0 kubenswrapper[29097]: I0312 18:48:24.207855 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" event={"ID":"87a19dc7-5415-4d3d-a22e-9e2524a67e38","Type":"ContainerStarted","Data":"1bf4107df195448536c691333d2c9fb8c90be7fb80d6a5739f9adbca0f4a5df7"} Mar 12 18:48:24.208037 master-0 kubenswrapper[29097]: I0312 18:48:24.208023 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" Mar 12 18:48:24.214448 master-0 kubenswrapper[29097]: I0312 18:48:24.214406 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c9a54b83-be30-4fdb-a23c-1ad2ce020453","Type":"ContainerStarted","Data":"db4c488fbae31f4b13707b66d349d72f0f779507ecd817229a260f0dcdae031e"} Mar 12 18:48:24.240933 master-0 kubenswrapper[29097]: I0312 18:48:24.240854 29097 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/swift-proxy-7b649cbbbb-tkhcf" event={"ID":"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da","Type":"ContainerStarted","Data":"a2f3e822864829d40bd386b07874f29598ff1f0f04426e13f757aecded2e5166"} Mar 12 18:48:24.251155 master-0 kubenswrapper[29097]: I0312 18:48:24.251053 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" podStartSLOduration=3.25509211 podStartE2EDuration="7.251032445s" podCreationTimestamp="2026-03-12 18:48:17 +0000 UTC" firstStartedPulling="2026-03-12 18:48:19.128977132 +0000 UTC m=+1138.682957229" lastFinishedPulling="2026-03-12 18:48:23.124917467 +0000 UTC m=+1142.678897564" observedRunningTime="2026-03-12 18:48:24.248386699 +0000 UTC m=+1143.802366796" watchObservedRunningTime="2026-03-12 18:48:24.251032445 +0000 UTC m=+1143.805012552" Mar 12 18:48:24.273936 master-0 kubenswrapper[29097]: I0312 18:48:24.273610 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6b755b479c-nl884" event={"ID":"734222f4-1b3b-4cb5-9ad3-029a54640f81","Type":"ContainerStarted","Data":"56bf33ac70f84f9380f888b01eac5d4a4c957cc532ed6a34def7bfba1826dcda"} Mar 12 18:48:24.273936 master-0 kubenswrapper[29097]: I0312 18:48:24.273660 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6b755b479c-nl884" event={"ID":"734222f4-1b3b-4cb5-9ad3-029a54640f81","Type":"ContainerStarted","Data":"380fcb1501093acecd51c8b79711cdbd2e87c13d031d210ddb61cca36037188e"} Mar 12 18:48:24.282569 master-0 kubenswrapper[29097]: I0312 18:48:24.281216 29097 generic.go:334] "Generic (PLEG): container finished" podID="185eee46-61f6-4cb6-8bed-9d63f1d448cc" containerID="478103112f5e27d1ea066ae34a0d3308c8a174e07663a9b84a472fc5f0e2f9a2" exitCode=0 Mar 12 18:48:24.282569 master-0 kubenswrapper[29097]: I0312 18:48:24.281265 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ffbf57c88-5pgzn" 
event={"ID":"185eee46-61f6-4cb6-8bed-9d63f1d448cc","Type":"ContainerDied","Data":"478103112f5e27d1ea066ae34a0d3308c8a174e07663a9b84a472fc5f0e2f9a2"} Mar 12 18:48:25.331432 master-0 kubenswrapper[29097]: I0312 18:48:25.330420 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"7fb8fbc7-949d-4526-8456-fbf8277cee2f","Type":"ContainerStarted","Data":"1e2949dfc903ac60fba6982a7abf82438da2594db1dd78f6d431a8c4f99a4564"} Mar 12 18:48:25.331432 master-0 kubenswrapper[29097]: I0312 18:48:25.330487 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"7fb8fbc7-949d-4526-8456-fbf8277cee2f","Type":"ContainerStarted","Data":"5cbf77fddbd87d325568c0107f80506b04678df6fac1c5311f728a789dbca6ec"} Mar 12 18:48:25.336537 master-0 kubenswrapper[29097]: I0312 18:48:25.335486 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7b649cbbbb-tkhcf" event={"ID":"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da","Type":"ContainerStarted","Data":"a5f009de7853b44e3a042dd8b157cb29a421d7426e8bbc3e49ebbbeed2182fcd"} Mar 12 18:48:25.336537 master-0 kubenswrapper[29097]: I0312 18:48:25.335559 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7b649cbbbb-tkhcf" event={"ID":"c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da","Type":"ContainerStarted","Data":"5237d65a6ea1adb93a3e1009ba276a10f945eefb29e43ccbce011adff9042811"} Mar 12 18:48:25.336537 master-0 kubenswrapper[29097]: I0312 18:48:25.335648 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:25.336537 master-0 kubenswrapper[29097]: I0312 18:48:25.335679 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:25.340538 master-0 kubenswrapper[29097]: I0312 18:48:25.337274 29097 generic.go:334] "Generic (PLEG): container finished" 
podID="734222f4-1b3b-4cb5-9ad3-029a54640f81" containerID="56bf33ac70f84f9380f888b01eac5d4a4c957cc532ed6a34def7bfba1826dcda" exitCode=0 Mar 12 18:48:25.340538 master-0 kubenswrapper[29097]: I0312 18:48:25.337330 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6b755b479c-nl884" event={"ID":"734222f4-1b3b-4cb5-9ad3-029a54640f81","Type":"ContainerDied","Data":"56bf33ac70f84f9380f888b01eac5d4a4c957cc532ed6a34def7bfba1826dcda"} Mar 12 18:48:25.357538 master-0 kubenswrapper[29097]: I0312 18:48:25.354348 29097 generic.go:334] "Generic (PLEG): container finished" podID="1b2854ab-857a-4029-b03e-470c4452693e" containerID="eeaca7ce516ddc8cff2d11765ec201f580d126a180716a8a341edf0ec4c78942" exitCode=1 Mar 12 18:48:25.357538 master-0 kubenswrapper[29097]: I0312 18:48:25.354531 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-55ccfbf469-qsbxt" event={"ID":"1b2854ab-857a-4029-b03e-470c4452693e","Type":"ContainerDied","Data":"eeaca7ce516ddc8cff2d11765ec201f580d126a180716a8a341edf0ec4c78942"} Mar 12 18:48:25.357538 master-0 kubenswrapper[29097]: I0312 18:48:25.354590 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-55ccfbf469-qsbxt" event={"ID":"1b2854ab-857a-4029-b03e-470c4452693e","Type":"ContainerStarted","Data":"1d4d3e82ce5a17f17c898eb5b4ebbffec1ae77f40a3c1074178fa2a9cb667403"} Mar 12 18:48:25.357538 master-0 kubenswrapper[29097]: I0312 18:48:25.355439 29097 scope.go:117] "RemoveContainer" containerID="eeaca7ce516ddc8cff2d11765ec201f580d126a180716a8a341edf0ec4c78942" Mar 12 18:48:25.464689 master-0 kubenswrapper[29097]: I0312 18:48:25.461695 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7b649cbbbb-tkhcf" podStartSLOduration=3.461675915 podStartE2EDuration="3.461675915s" podCreationTimestamp="2026-03-12 18:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-12 18:48:25.430262428 +0000 UTC m=+1144.984242535" watchObservedRunningTime="2026-03-12 18:48:25.461675915 +0000 UTC m=+1145.015656012" Mar 12 18:48:26.402988 master-0 kubenswrapper[29097]: I0312 18:48:26.402939 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6b755b479c-nl884" event={"ID":"734222f4-1b3b-4cb5-9ad3-029a54640f81","Type":"ContainerStarted","Data":"b2661b500aea3ea7d5999b6f5ce7868011c760f84349f41b4e881cb4d136b521"} Mar 12 18:48:26.407534 master-0 kubenswrapper[29097]: I0312 18:48:26.403992 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-6b755b479c-nl884" Mar 12 18:48:26.407534 master-0 kubenswrapper[29097]: I0312 18:48:26.404038 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6b755b479c-nl884" event={"ID":"734222f4-1b3b-4cb5-9ad3-029a54640f81","Type":"ContainerStarted","Data":"507159a2bbcedf74d68d673c73dcbe82b643692240a23dcbd1e0d07a691177a7"} Mar 12 18:48:26.421141 master-0 kubenswrapper[29097]: I0312 18:48:26.421063 29097 generic.go:334] "Generic (PLEG): container finished" podID="1b2854ab-857a-4029-b03e-470c4452693e" containerID="93a537970596b95a91a6486be6009923ac1cd4bce9add1b04addc18f1df3685e" exitCode=1 Mar 12 18:48:26.421677 master-0 kubenswrapper[29097]: I0312 18:48:26.421623 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-55ccfbf469-qsbxt" event={"ID":"1b2854ab-857a-4029-b03e-470c4452693e","Type":"ContainerDied","Data":"93a537970596b95a91a6486be6009923ac1cd4bce9add1b04addc18f1df3685e"} Mar 12 18:48:26.422116 master-0 kubenswrapper[29097]: I0312 18:48:26.421696 29097 scope.go:117] "RemoveContainer" containerID="eeaca7ce516ddc8cff2d11765ec201f580d126a180716a8a341edf0ec4c78942" Mar 12 18:48:26.422953 master-0 kubenswrapper[29097]: I0312 18:48:26.422924 29097 scope.go:117] "RemoveContainer" containerID="93a537970596b95a91a6486be6009923ac1cd4bce9add1b04addc18f1df3685e" Mar 12 18:48:26.423330 
master-0 kubenswrapper[29097]: E0312 18:48:26.423299 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-55ccfbf469-qsbxt_openstack(1b2854ab-857a-4029-b03e-470c4452693e)\"" pod="openstack/ironic-55ccfbf469-qsbxt" podUID="1b2854ab-857a-4029-b03e-470c4452693e" Mar 12 18:48:26.700113 master-0 kubenswrapper[29097]: I0312 18:48:26.699974 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-6b755b479c-nl884" podStartSLOduration=5.699953849 podStartE2EDuration="5.699953849s" podCreationTimestamp="2026-03-12 18:48:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:48:26.69269896 +0000 UTC m=+1146.246679077" watchObservedRunningTime="2026-03-12 18:48:26.699953849 +0000 UTC m=+1146.253933956" Mar 12 18:48:27.434059 master-0 kubenswrapper[29097]: I0312 18:48:27.434005 29097 scope.go:117] "RemoveContainer" containerID="93a537970596b95a91a6486be6009923ac1cd4bce9add1b04addc18f1df3685e" Mar 12 18:48:27.434610 master-0 kubenswrapper[29097]: E0312 18:48:27.434274 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-55ccfbf469-qsbxt_openstack(1b2854ab-857a-4029-b03e-470c4452693e)\"" pod="openstack/ironic-55ccfbf469-qsbxt" podUID="1b2854ab-857a-4029-b03e-470c4452693e" Mar 12 18:48:27.436736 master-0 kubenswrapper[29097]: I0312 18:48:27.436687 29097 generic.go:334] "Generic (PLEG): container finished" podID="87a19dc7-5415-4d3d-a22e-9e2524a67e38" containerID="1bf4107df195448536c691333d2c9fb8c90be7fb80d6a5739f9adbca0f4a5df7" exitCode=1 Mar 12 18:48:27.436836 master-0 kubenswrapper[29097]: I0312 18:48:27.436794 29097 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" event={"ID":"87a19dc7-5415-4d3d-a22e-9e2524a67e38","Type":"ContainerDied","Data":"1bf4107df195448536c691333d2c9fb8c90be7fb80d6a5739f9adbca0f4a5df7"}
Mar 12 18:48:27.437352 master-0 kubenswrapper[29097]: I0312 18:48:27.437317 29097 scope.go:117] "RemoveContainer" containerID="1bf4107df195448536c691333d2c9fb8c90be7fb80d6a5739f9adbca0f4a5df7"
Mar 12 18:48:27.610824 master-0 kubenswrapper[29097]: I0312 18:48:27.610756 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr"
Mar 12 18:48:27.697488 master-0 kubenswrapper[29097]: I0312 18:48:27.697452 29097 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc"
Mar 12 18:48:27.704423 master-0 kubenswrapper[29097]: I0312 18:48:27.703120 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848b9c6b49-l9j7w"]
Mar 12 18:48:27.704423 master-0 kubenswrapper[29097]: I0312 18:48:27.703377 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" podUID="030364f8-b8e4-43b2-9597-5ca376e5f1a6" containerName="dnsmasq-dns" containerID="cri-o://d182439ec51902c12039297a878281f6f761bf7d7413d6aed470ccfae7ba8632" gracePeriod=10
Mar 12 18:48:27.776932 master-0 kubenswrapper[29097]: I0312 18:48:27.776690 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-55ccfbf469-qsbxt"
Mar 12 18:48:27.776932 master-0 kubenswrapper[29097]: I0312 18:48:27.776756 29097 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-55ccfbf469-qsbxt"
Mar 12 18:48:28.350479 master-0 kubenswrapper[29097]: I0312 18:48:28.350429 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w"
Mar 12 18:48:28.438809 master-0 kubenswrapper[29097]: I0312 18:48:28.438127 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-ovsdbserver-nb\") pod \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") "
Mar 12 18:48:28.438809 master-0 kubenswrapper[29097]: I0312 18:48:28.438220 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-dns-svc\") pod \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") "
Mar 12 18:48:28.438809 master-0 kubenswrapper[29097]: I0312 18:48:28.438342 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-dns-swift-storage-0\") pod \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") "
Mar 12 18:48:28.446964 master-0 kubenswrapper[29097]: I0312 18:48:28.446898 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-ovsdbserver-sb\") pod \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") "
Mar 12 18:48:28.447215 master-0 kubenswrapper[29097]: I0312 18:48:28.447041 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-974mw\" (UniqueName: \"kubernetes.io/projected/030364f8-b8e4-43b2-9597-5ca376e5f1a6-kube-api-access-974mw\") pod \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") "
Mar 12 18:48:28.447215 master-0 kubenswrapper[29097]: I0312 18:48:28.447153 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-config\") pod \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\" (UID: \"030364f8-b8e4-43b2-9597-5ca376e5f1a6\") "
Mar 12 18:48:28.452640 master-0 kubenswrapper[29097]: I0312 18:48:28.452591 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/030364f8-b8e4-43b2-9597-5ca376e5f1a6-kube-api-access-974mw" (OuterVolumeSpecName: "kube-api-access-974mw") pod "030364f8-b8e4-43b2-9597-5ca376e5f1a6" (UID: "030364f8-b8e4-43b2-9597-5ca376e5f1a6"). InnerVolumeSpecName "kube-api-access-974mw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:48:28.491258 master-0 kubenswrapper[29097]: I0312 18:48:28.491072 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "030364f8-b8e4-43b2-9597-5ca376e5f1a6" (UID: "030364f8-b8e4-43b2-9597-5ca376e5f1a6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:48:28.498300 master-0 kubenswrapper[29097]: I0312 18:48:28.497247 29097 generic.go:334] "Generic (PLEG): container finished" podID="030364f8-b8e4-43b2-9597-5ca376e5f1a6" containerID="d182439ec51902c12039297a878281f6f761bf7d7413d6aed470ccfae7ba8632" exitCode=0
Mar 12 18:48:28.498300 master-0 kubenswrapper[29097]: I0312 18:48:28.497321 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" event={"ID":"030364f8-b8e4-43b2-9597-5ca376e5f1a6","Type":"ContainerDied","Data":"d182439ec51902c12039297a878281f6f761bf7d7413d6aed470ccfae7ba8632"}
Mar 12 18:48:28.498300 master-0 kubenswrapper[29097]: I0312 18:48:28.497347 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" event={"ID":"030364f8-b8e4-43b2-9597-5ca376e5f1a6","Type":"ContainerDied","Data":"36d472a0f3653d05a5cbb507b1824c23da21bc519e01a36c3f944970e976b553"}
Mar 12 18:48:28.498300 master-0 kubenswrapper[29097]: I0312 18:48:28.497364 29097 scope.go:117] "RemoveContainer" containerID="d182439ec51902c12039297a878281f6f761bf7d7413d6aed470ccfae7ba8632"
Mar 12 18:48:28.498300 master-0 kubenswrapper[29097]: I0312 18:48:28.497472 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w"
Mar 12 18:48:28.505246 master-0 kubenswrapper[29097]: I0312 18:48:28.505192 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "030364f8-b8e4-43b2-9597-5ca376e5f1a6" (UID: "030364f8-b8e4-43b2-9597-5ca376e5f1a6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:48:28.507297 master-0 kubenswrapper[29097]: I0312 18:48:28.507114 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" event={"ID":"87a19dc7-5415-4d3d-a22e-9e2524a67e38","Type":"ContainerStarted","Data":"90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8"}
Mar 12 18:48:28.508576 master-0 kubenswrapper[29097]: I0312 18:48:28.508305 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc"
Mar 12 18:48:28.514260 master-0 kubenswrapper[29097]: I0312 18:48:28.514224 29097 generic.go:334] "Generic (PLEG): container finished" podID="7fb8fbc7-949d-4526-8456-fbf8277cee2f" containerID="1e2949dfc903ac60fba6982a7abf82438da2594db1dd78f6d431a8c4f99a4564" exitCode=0
Mar 12 18:48:28.515724 master-0 kubenswrapper[29097]: I0312 18:48:28.515704 29097 scope.go:117] "RemoveContainer" containerID="93a537970596b95a91a6486be6009923ac1cd4bce9add1b04addc18f1df3685e"
Mar 12 18:48:28.516143 master-0 kubenswrapper[29097]: E0312 18:48:28.516114 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-55ccfbf469-qsbxt_openstack(1b2854ab-857a-4029-b03e-470c4452693e)\"" pod="openstack/ironic-55ccfbf469-qsbxt" podUID="1b2854ab-857a-4029-b03e-470c4452693e"
Mar 12 18:48:28.516441 master-0 kubenswrapper[29097]: I0312 18:48:28.516418 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"7fb8fbc7-949d-4526-8456-fbf8277cee2f","Type":"ContainerDied","Data":"1e2949dfc903ac60fba6982a7abf82438da2594db1dd78f6d431a8c4f99a4564"}
Mar 12 18:48:28.550580 master-0 kubenswrapper[29097]: I0312 18:48:28.547454 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "030364f8-b8e4-43b2-9597-5ca376e5f1a6" (UID: "030364f8-b8e4-43b2-9597-5ca376e5f1a6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:48:28.561603 master-0 kubenswrapper[29097]: I0312 18:48:28.552367 29097 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:28.561603 master-0 kubenswrapper[29097]: I0312 18:48:28.552402 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-974mw\" (UniqueName: \"kubernetes.io/projected/030364f8-b8e4-43b2-9597-5ca376e5f1a6-kube-api-access-974mw\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:28.561603 master-0 kubenswrapper[29097]: I0312 18:48:28.552411 29097 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:28.561603 master-0 kubenswrapper[29097]: I0312 18:48:28.552419 29097 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:28.604599 master-0 kubenswrapper[29097]: I0312 18:48:28.604493 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-config" (OuterVolumeSpecName: "config") pod "030364f8-b8e4-43b2-9597-5ca376e5f1a6" (UID: "030364f8-b8e4-43b2-9597-5ca376e5f1a6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:48:28.622436 master-0 kubenswrapper[29097]: I0312 18:48:28.622374 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "030364f8-b8e4-43b2-9597-5ca376e5f1a6" (UID: "030364f8-b8e4-43b2-9597-5ca376e5f1a6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:48:28.656039 master-0 kubenswrapper[29097]: I0312 18:48:28.655985 29097 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:28.656039 master-0 kubenswrapper[29097]: I0312 18:48:28.656023 29097 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/030364f8-b8e4-43b2-9597-5ca376e5f1a6-config\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:28.745676 master-0 kubenswrapper[29097]: I0312 18:48:28.744549 29097 scope.go:117] "RemoveContainer" containerID="eaf553f5c68e673c3e90222fc6c2b13f76305b342109dd80b6a2578d3f8b9d93"
Mar 12 18:48:28.787986 master-0 kubenswrapper[29097]: I0312 18:48:28.787900 29097 scope.go:117] "RemoveContainer" containerID="d182439ec51902c12039297a878281f6f761bf7d7413d6aed470ccfae7ba8632"
Mar 12 18:48:28.788293 master-0 kubenswrapper[29097]: E0312 18:48:28.788247 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d182439ec51902c12039297a878281f6f761bf7d7413d6aed470ccfae7ba8632\": container with ID starting with d182439ec51902c12039297a878281f6f761bf7d7413d6aed470ccfae7ba8632 not found: ID does not exist" containerID="d182439ec51902c12039297a878281f6f761bf7d7413d6aed470ccfae7ba8632"
Mar 12 18:48:28.788340 master-0 kubenswrapper[29097]: I0312 18:48:28.788282 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d182439ec51902c12039297a878281f6f761bf7d7413d6aed470ccfae7ba8632"} err="failed to get container status \"d182439ec51902c12039297a878281f6f761bf7d7413d6aed470ccfae7ba8632\": rpc error: code = NotFound desc = could not find container \"d182439ec51902c12039297a878281f6f761bf7d7413d6aed470ccfae7ba8632\": container with ID starting with d182439ec51902c12039297a878281f6f761bf7d7413d6aed470ccfae7ba8632 not found: ID does not exist"
Mar 12 18:48:28.788340 master-0 kubenswrapper[29097]: I0312 18:48:28.788302 29097 scope.go:117] "RemoveContainer" containerID="eaf553f5c68e673c3e90222fc6c2b13f76305b342109dd80b6a2578d3f8b9d93"
Mar 12 18:48:28.788727 master-0 kubenswrapper[29097]: E0312 18:48:28.788697 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaf553f5c68e673c3e90222fc6c2b13f76305b342109dd80b6a2578d3f8b9d93\": container with ID starting with eaf553f5c68e673c3e90222fc6c2b13f76305b342109dd80b6a2578d3f8b9d93 not found: ID does not exist" containerID="eaf553f5c68e673c3e90222fc6c2b13f76305b342109dd80b6a2578d3f8b9d93"
Mar 12 18:48:28.788772 master-0 kubenswrapper[29097]: I0312 18:48:28.788725 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaf553f5c68e673c3e90222fc6c2b13f76305b342109dd80b6a2578d3f8b9d93"} err="failed to get container status \"eaf553f5c68e673c3e90222fc6c2b13f76305b342109dd80b6a2578d3f8b9d93\": rpc error: code = NotFound desc = could not find container \"eaf553f5c68e673c3e90222fc6c2b13f76305b342109dd80b6a2578d3f8b9d93\": container with ID starting with eaf553f5c68e673c3e90222fc6c2b13f76305b342109dd80b6a2578d3f8b9d93 not found: ID does not exist"
Mar 12 18:48:28.871710 master-0 kubenswrapper[29097]: I0312 18:48:28.871630 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848b9c6b49-l9j7w"]
Mar 12 18:48:28.882013 master-0 kubenswrapper[29097]: I0312 18:48:28.881948 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848b9c6b49-l9j7w"]
Mar 12 18:48:29.129382 master-0 kubenswrapper[29097]: I0312 18:48:29.129302 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ffbf57c88-5pgzn"
Mar 12 18:48:29.167114 master-0 kubenswrapper[29097]: I0312 18:48:29.167055 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp9ms\" (UniqueName: \"kubernetes.io/projected/185eee46-61f6-4cb6-8bed-9d63f1d448cc-kube-api-access-kp9ms\") pod \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\" (UID: \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\") "
Mar 12 18:48:29.167355 master-0 kubenswrapper[29097]: I0312 18:48:29.167175 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-config\") pod \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\" (UID: \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\") "
Mar 12 18:48:29.167355 master-0 kubenswrapper[29097]: I0312 18:48:29.167208 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-httpd-config\") pod \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\" (UID: \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\") "
Mar 12 18:48:29.167453 master-0 kubenswrapper[29097]: I0312 18:48:29.167370 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-combined-ca-bundle\") pod \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\" (UID: \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\") "
Mar 12 18:48:29.167453 master-0 kubenswrapper[29097]: I0312 18:48:29.167430 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-ovndb-tls-certs\") pod \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\" (UID: \"185eee46-61f6-4cb6-8bed-9d63f1d448cc\") "
Mar 12 18:48:29.171780 master-0 kubenswrapper[29097]: I0312 18:48:29.171732 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "185eee46-61f6-4cb6-8bed-9d63f1d448cc" (UID: "185eee46-61f6-4cb6-8bed-9d63f1d448cc"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:29.173480 master-0 kubenswrapper[29097]: I0312 18:48:29.173434 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/185eee46-61f6-4cb6-8bed-9d63f1d448cc-kube-api-access-kp9ms" (OuterVolumeSpecName: "kube-api-access-kp9ms") pod "185eee46-61f6-4cb6-8bed-9d63f1d448cc" (UID: "185eee46-61f6-4cb6-8bed-9d63f1d448cc"). InnerVolumeSpecName "kube-api-access-kp9ms". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:48:29.260072 master-0 kubenswrapper[29097]: I0312 18:48:29.260019 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "185eee46-61f6-4cb6-8bed-9d63f1d448cc" (UID: "185eee46-61f6-4cb6-8bed-9d63f1d448cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:29.269997 master-0 kubenswrapper[29097]: I0312 18:48:29.269935 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:29.269997 master-0 kubenswrapper[29097]: I0312 18:48:29.269966 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp9ms\" (UniqueName: \"kubernetes.io/projected/185eee46-61f6-4cb6-8bed-9d63f1d448cc-kube-api-access-kp9ms\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:29.269997 master-0 kubenswrapper[29097]: I0312 18:48:29.269976 29097 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-httpd-config\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:29.275162 master-0 kubenswrapper[29097]: I0312 18:48:29.274920 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-config" (OuterVolumeSpecName: "config") pod "185eee46-61f6-4cb6-8bed-9d63f1d448cc" (UID: "185eee46-61f6-4cb6-8bed-9d63f1d448cc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:29.293612 master-0 kubenswrapper[29097]: I0312 18:48:29.293010 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "185eee46-61f6-4cb6-8bed-9d63f1d448cc" (UID: "185eee46-61f6-4cb6-8bed-9d63f1d448cc"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:29.372136 master-0 kubenswrapper[29097]: I0312 18:48:29.372035 29097 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-config\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:29.372136 master-0 kubenswrapper[29097]: I0312 18:48:29.372070 29097 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/185eee46-61f6-4cb6-8bed-9d63f1d448cc-ovndb-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:29.540184 master-0 kubenswrapper[29097]: I0312 18:48:29.540141 29097 generic.go:334] "Generic (PLEG): container finished" podID="185eee46-61f6-4cb6-8bed-9d63f1d448cc" containerID="7a00d292c2fc00c24eed8e5a385b815ce49f8dc2e1a7b9536068c0e3ac017e0c" exitCode=0
Mar 12 18:48:29.540891 master-0 kubenswrapper[29097]: I0312 18:48:29.540218 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ffbf57c88-5pgzn" event={"ID":"185eee46-61f6-4cb6-8bed-9d63f1d448cc","Type":"ContainerDied","Data":"7a00d292c2fc00c24eed8e5a385b815ce49f8dc2e1a7b9536068c0e3ac017e0c"}
Mar 12 18:48:29.540891 master-0 kubenswrapper[29097]: I0312 18:48:29.540247 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ffbf57c88-5pgzn"
Mar 12 18:48:29.540891 master-0 kubenswrapper[29097]: I0312 18:48:29.540253 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ffbf57c88-5pgzn" event={"ID":"185eee46-61f6-4cb6-8bed-9d63f1d448cc","Type":"ContainerDied","Data":"ca69568f14594891a3b272a51be2d6bb51e3ca5a768c2a3c5791d106872ef000"}
Mar 12 18:48:29.540891 master-0 kubenswrapper[29097]: I0312 18:48:29.540266 29097 scope.go:117] "RemoveContainer" containerID="478103112f5e27d1ea066ae34a0d3308c8a174e07663a9b84a472fc5f0e2f9a2"
Mar 12 18:48:29.595921 master-0 kubenswrapper[29097]: I0312 18:48:29.595832 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-ffbf57c88-5pgzn"]
Mar 12 18:48:29.606549 master-0 kubenswrapper[29097]: I0312 18:48:29.606496 29097 scope.go:117] "RemoveContainer" containerID="7a00d292c2fc00c24eed8e5a385b815ce49f8dc2e1a7b9536068c0e3ac017e0c"
Mar 12 18:48:29.609747 master-0 kubenswrapper[29097]: I0312 18:48:29.609380 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-ffbf57c88-5pgzn"]
Mar 12 18:48:29.635866 master-0 kubenswrapper[29097]: I0312 18:48:29.635805 29097 scope.go:117] "RemoveContainer" containerID="478103112f5e27d1ea066ae34a0d3308c8a174e07663a9b84a472fc5f0e2f9a2"
Mar 12 18:48:29.636671 master-0 kubenswrapper[29097]: E0312 18:48:29.636638 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"478103112f5e27d1ea066ae34a0d3308c8a174e07663a9b84a472fc5f0e2f9a2\": container with ID starting with 478103112f5e27d1ea066ae34a0d3308c8a174e07663a9b84a472fc5f0e2f9a2 not found: ID does not exist" containerID="478103112f5e27d1ea066ae34a0d3308c8a174e07663a9b84a472fc5f0e2f9a2"
Mar 12 18:48:29.636803 master-0 kubenswrapper[29097]: I0312 18:48:29.636675 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"478103112f5e27d1ea066ae34a0d3308c8a174e07663a9b84a472fc5f0e2f9a2"} err="failed to get container status \"478103112f5e27d1ea066ae34a0d3308c8a174e07663a9b84a472fc5f0e2f9a2\": rpc error: code = NotFound desc = could not find container \"478103112f5e27d1ea066ae34a0d3308c8a174e07663a9b84a472fc5f0e2f9a2\": container with ID starting with 478103112f5e27d1ea066ae34a0d3308c8a174e07663a9b84a472fc5f0e2f9a2 not found: ID does not exist"
Mar 12 18:48:29.636803 master-0 kubenswrapper[29097]: I0312 18:48:29.636697 29097 scope.go:117] "RemoveContainer" containerID="7a00d292c2fc00c24eed8e5a385b815ce49f8dc2e1a7b9536068c0e3ac017e0c"
Mar 12 18:48:29.636986 master-0 kubenswrapper[29097]: E0312 18:48:29.636960 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a00d292c2fc00c24eed8e5a385b815ce49f8dc2e1a7b9536068c0e3ac017e0c\": container with ID starting with 7a00d292c2fc00c24eed8e5a385b815ce49f8dc2e1a7b9536068c0e3ac017e0c not found: ID does not exist" containerID="7a00d292c2fc00c24eed8e5a385b815ce49f8dc2e1a7b9536068c0e3ac017e0c"
Mar 12 18:48:29.637067 master-0 kubenswrapper[29097]: I0312 18:48:29.636983 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a00d292c2fc00c24eed8e5a385b815ce49f8dc2e1a7b9536068c0e3ac017e0c"} err="failed to get container status \"7a00d292c2fc00c24eed8e5a385b815ce49f8dc2e1a7b9536068c0e3ac017e0c\": rpc error: code = NotFound desc = could not find container \"7a00d292c2fc00c24eed8e5a385b815ce49f8dc2e1a7b9536068c0e3ac017e0c\": container with ID starting with 7a00d292c2fc00c24eed8e5a385b815ce49f8dc2e1a7b9536068c0e3ac017e0c not found: ID does not exist"
Mar 12 18:48:30.738614 master-0 kubenswrapper[29097]: I0312 18:48:30.737711 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="030364f8-b8e4-43b2-9597-5ca376e5f1a6" path="/var/lib/kubelet/pods/030364f8-b8e4-43b2-9597-5ca376e5f1a6/volumes"
Mar 12 18:48:30.744601 master-0 kubenswrapper[29097]: I0312 18:48:30.743284 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="185eee46-61f6-4cb6-8bed-9d63f1d448cc" path="/var/lib/kubelet/pods/185eee46-61f6-4cb6-8bed-9d63f1d448cc/volumes"
Mar 12 18:48:32.117472 master-0 kubenswrapper[29097]: I0312 18:48:32.116901 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-sync-pq2f7"]
Mar 12 18:48:32.117472 master-0 kubenswrapper[29097]: E0312 18:48:32.117463 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="185eee46-61f6-4cb6-8bed-9d63f1d448cc" containerName="neutron-api"
Mar 12 18:48:32.117472 master-0 kubenswrapper[29097]: I0312 18:48:32.117479 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="185eee46-61f6-4cb6-8bed-9d63f1d448cc" containerName="neutron-api"
Mar 12 18:48:32.118056 master-0 kubenswrapper[29097]: E0312 18:48:32.117539 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030364f8-b8e4-43b2-9597-5ca376e5f1a6" containerName="dnsmasq-dns"
Mar 12 18:48:32.118056 master-0 kubenswrapper[29097]: I0312 18:48:32.117549 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="030364f8-b8e4-43b2-9597-5ca376e5f1a6" containerName="dnsmasq-dns"
Mar 12 18:48:32.118056 master-0 kubenswrapper[29097]: E0312 18:48:32.117565 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030364f8-b8e4-43b2-9597-5ca376e5f1a6" containerName="init"
Mar 12 18:48:32.118056 master-0 kubenswrapper[29097]: I0312 18:48:32.117570 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="030364f8-b8e4-43b2-9597-5ca376e5f1a6" containerName="init"
Mar 12 18:48:32.118056 master-0 kubenswrapper[29097]: E0312 18:48:32.117605 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="185eee46-61f6-4cb6-8bed-9d63f1d448cc" containerName="neutron-httpd"
Mar 12 18:48:32.118056 master-0 kubenswrapper[29097]: I0312 18:48:32.117611 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="185eee46-61f6-4cb6-8bed-9d63f1d448cc" containerName="neutron-httpd"
Mar 12 18:48:32.118056 master-0 kubenswrapper[29097]: I0312 18:48:32.117814 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="185eee46-61f6-4cb6-8bed-9d63f1d448cc" containerName="neutron-httpd"
Mar 12 18:48:32.118056 master-0 kubenswrapper[29097]: I0312 18:48:32.117843 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="030364f8-b8e4-43b2-9597-5ca376e5f1a6" containerName="dnsmasq-dns"
Mar 12 18:48:32.118056 master-0 kubenswrapper[29097]: I0312 18:48:32.117863 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="185eee46-61f6-4cb6-8bed-9d63f1d448cc" containerName="neutron-api"
Mar 12 18:48:32.118620 master-0 kubenswrapper[29097]: I0312 18:48:32.118595 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-pq2f7"
Mar 12 18:48:32.121175 master-0 kubenswrapper[29097]: I0312 18:48:32.121107 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data"
Mar 12 18:48:32.123360 master-0 kubenswrapper[29097]: I0312 18:48:32.122203 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts"
Mar 12 18:48:32.130471 master-0 kubenswrapper[29097]: I0312 18:48:32.130421 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-pq2f7"]
Mar 12 18:48:32.330860 master-0 kubenswrapper[29097]: I0312 18:48:32.330787 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/15059f12-694b-4ece-9321-0d18e7c95c04-etc-podinfo\") pod \"ironic-inspector-db-sync-pq2f7\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " pod="openstack/ironic-inspector-db-sync-pq2f7"
Mar 12 18:48:32.330860 master-0 kubenswrapper[29097]: I0312 18:48:32.330854 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15059f12-694b-4ece-9321-0d18e7c95c04-combined-ca-bundle\") pod \"ironic-inspector-db-sync-pq2f7\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " pod="openstack/ironic-inspector-db-sync-pq2f7"
Mar 12 18:48:32.331106 master-0 kubenswrapper[29097]: I0312 18:48:32.330874 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjrpb\" (UniqueName: \"kubernetes.io/projected/15059f12-694b-4ece-9321-0d18e7c95c04-kube-api-access-cjrpb\") pod \"ironic-inspector-db-sync-pq2f7\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " pod="openstack/ironic-inspector-db-sync-pq2f7"
Mar 12 18:48:32.331303 master-0 kubenswrapper[29097]: I0312 18:48:32.331254 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15059f12-694b-4ece-9321-0d18e7c95c04-scripts\") pod \"ironic-inspector-db-sync-pq2f7\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " pod="openstack/ironic-inspector-db-sync-pq2f7"
Mar 12 18:48:32.331563 master-0 kubenswrapper[29097]: I0312 18:48:32.331504 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/15059f12-694b-4ece-9321-0d18e7c95c04-config\") pod \"ironic-inspector-db-sync-pq2f7\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " pod="openstack/ironic-inspector-db-sync-pq2f7"
Mar 12 18:48:32.331603 master-0 kubenswrapper[29097]: I0312 18:48:32.331561 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/15059f12-694b-4ece-9321-0d18e7c95c04-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-pq2f7\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " pod="openstack/ironic-inspector-db-sync-pq2f7"
Mar 12 18:48:32.331661 master-0 kubenswrapper[29097]: I0312 18:48:32.331638 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/15059f12-694b-4ece-9321-0d18e7c95c04-var-lib-ironic\") pod \"ironic-inspector-db-sync-pq2f7\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " pod="openstack/ironic-inspector-db-sync-pq2f7"
Mar 12 18:48:32.683455 master-0 kubenswrapper[29097]: I0312 18:48:32.683385 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/15059f12-694b-4ece-9321-0d18e7c95c04-config\") pod \"ironic-inspector-db-sync-pq2f7\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " pod="openstack/ironic-inspector-db-sync-pq2f7"
Mar 12 18:48:32.683455 master-0 kubenswrapper[29097]: I0312 18:48:32.683447 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/15059f12-694b-4ece-9321-0d18e7c95c04-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-pq2f7\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " pod="openstack/ironic-inspector-db-sync-pq2f7"
Mar 12 18:48:32.683764 master-0 kubenswrapper[29097]: I0312 18:48:32.683483 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/15059f12-694b-4ece-9321-0d18e7c95c04-var-lib-ironic\") pod \"ironic-inspector-db-sync-pq2f7\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " pod="openstack/ironic-inspector-db-sync-pq2f7"
Mar 12 18:48:32.683764 master-0 kubenswrapper[29097]: I0312 18:48:32.683546 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/15059f12-694b-4ece-9321-0d18e7c95c04-etc-podinfo\") pod \"ironic-inspector-db-sync-pq2f7\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " pod="openstack/ironic-inspector-db-sync-pq2f7"
Mar 12 18:48:32.683764 master-0 kubenswrapper[29097]: I0312 18:48:32.683568 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15059f12-694b-4ece-9321-0d18e7c95c04-combined-ca-bundle\") pod \"ironic-inspector-db-sync-pq2f7\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " pod="openstack/ironic-inspector-db-sync-pq2f7"
Mar 12 18:48:32.683764 master-0 kubenswrapper[29097]: I0312 18:48:32.683583 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjrpb\" (UniqueName: \"kubernetes.io/projected/15059f12-694b-4ece-9321-0d18e7c95c04-kube-api-access-cjrpb\") pod \"ironic-inspector-db-sync-pq2f7\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " pod="openstack/ironic-inspector-db-sync-pq2f7"
Mar 12 18:48:32.683764 master-0 kubenswrapper[29097]: I0312 18:48:32.683648 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15059f12-694b-4ece-9321-0d18e7c95c04-scripts\") pod \"ironic-inspector-db-sync-pq2f7\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " pod="openstack/ironic-inspector-db-sync-pq2f7"
Mar 12 18:48:32.689836 master-0 kubenswrapper[29097]: I0312 18:48:32.689799 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/15059f12-694b-4ece-9321-0d18e7c95c04-config\") pod \"ironic-inspector-db-sync-pq2f7\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " pod="openstack/ironic-inspector-db-sync-pq2f7"
Mar 12 18:48:32.690070 master-0 kubenswrapper[29097]: I0312 18:48:32.690037 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/15059f12-694b-4ece-9321-0d18e7c95c04-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-pq2f7\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " pod="openstack/ironic-inspector-db-sync-pq2f7"
Mar 12 18:48:32.690286 master-0 kubenswrapper[29097]: I0312 18:48:32.690260 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/15059f12-694b-4ece-9321-0d18e7c95c04-var-lib-ironic\") pod \"ironic-inspector-db-sync-pq2f7\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " pod="openstack/ironic-inspector-db-sync-pq2f7"
Mar 12 18:48:32.724747 master-0 kubenswrapper[29097]: E0312 18:48:32.724493 29097 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8 is running failed: container process not found" containerID="90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8" cmd=["/bin/true"]
Mar 12 18:48:32.724747 master-0 kubenswrapper[29097]: E0312 18:48:32.724718 29097 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8 is running failed: container process not found" containerID="90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8" cmd=["/bin/true"]
Mar 12 18:48:32.725009 master-0 kubenswrapper[29097]: E0312 18:48:32.724820 29097 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8 is running failed: container process not found" containerID="90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8" cmd=["/bin/true"]
Mar 12 18:48:32.725355 master-0 kubenswrapper[29097]: E0312 18:48:32.725253 29097 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8 is running failed: container process not found" containerID="90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8" cmd=["/bin/true"]
Mar 12 18:48:32.725355 master-0 kubenswrapper[29097]: E0312 18:48:32.725334 29097 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8 is running failed: container process not found" containerID="90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8" cmd=["/bin/true"]
Mar 12 18:48:32.725496 master-0 kubenswrapper[29097]: E0312 18:48:32.725359 29097 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8 is running failed: container process not found" probeType="Liveness" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" podUID="87a19dc7-5415-4d3d-a22e-9e2524a67e38" containerName="ironic-neutron-agent"
Mar 12 18:48:32.725772 master-0 kubenswrapper[29097]: E0312 18:48:32.725741 29097 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8 is running failed: container process not found" containerID="90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8" cmd=["/bin/true"]
Mar 12 18:48:32.725863 master-0 kubenswrapper[29097]: E0312 18:48:32.725773 29097
prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8 is running failed: container process not found" probeType="Readiness" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" podUID="87a19dc7-5415-4d3d-a22e-9e2524a67e38" containerName="ironic-neutron-agent" Mar 12 18:48:32.726921 master-0 kubenswrapper[29097]: I0312 18:48:32.726885 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15059f12-694b-4ece-9321-0d18e7c95c04-scripts\") pod \"ironic-inspector-db-sync-pq2f7\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " pod="openstack/ironic-inspector-db-sync-pq2f7" Mar 12 18:48:32.727027 master-0 kubenswrapper[29097]: I0312 18:48:32.726946 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/15059f12-694b-4ece-9321-0d18e7c95c04-etc-podinfo\") pod \"ironic-inspector-db-sync-pq2f7\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " pod="openstack/ironic-inspector-db-sync-pq2f7" Mar 12 18:48:32.741406 master-0 kubenswrapper[29097]: I0312 18:48:32.741311 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15059f12-694b-4ece-9321-0d18e7c95c04-combined-ca-bundle\") pod \"ironic-inspector-db-sync-pq2f7\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " pod="openstack/ironic-inspector-db-sync-pq2f7" Mar 12 18:48:32.766902 master-0 kubenswrapper[29097]: I0312 18:48:32.765939 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjrpb\" (UniqueName: \"kubernetes.io/projected/15059f12-694b-4ece-9321-0d18e7c95c04-kube-api-access-cjrpb\") pod \"ironic-inspector-db-sync-pq2f7\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " pod="openstack/ironic-inspector-db-sync-pq2f7" Mar 12 
18:48:32.844683 master-0 kubenswrapper[29097]: I0312 18:48:32.844398 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-pq2f7" Mar 12 18:48:33.005479 master-0 kubenswrapper[29097]: I0312 18:48:33.005361 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:33.099064 master-0 kubenswrapper[29097]: I0312 18:48:33.098995 29097 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-848b9c6b49-l9j7w" podUID="030364f8-b8e4-43b2-9597-5ca376e5f1a6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.226:5353: i/o timeout" Mar 12 18:48:33.253196 master-0 kubenswrapper[29097]: I0312 18:48:33.253143 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7b649cbbbb-tkhcf" Mar 12 18:48:34.128558 master-0 kubenswrapper[29097]: I0312 18:48:34.127234 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-6b755b479c-nl884" Mar 12 18:48:34.368298 master-0 kubenswrapper[29097]: I0312 18:48:34.368246 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-55ccfbf469-qsbxt"] Mar 12 18:48:34.369206 master-0 kubenswrapper[29097]: I0312 18:48:34.369147 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-55ccfbf469-qsbxt" podUID="1b2854ab-857a-4029-b03e-470c4452693e" containerName="ironic-api-log" containerID="cri-o://1d4d3e82ce5a17f17c898eb5b4ebbffec1ae77f40a3c1074178fa2a9cb667403" gracePeriod=60 Mar 12 18:48:35.435701 master-0 kubenswrapper[29097]: I0312 18:48:35.435587 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-fkj7f"] Mar 12 18:48:35.437990 master-0 kubenswrapper[29097]: I0312 18:48:35.437966 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-fkj7f" Mar 12 18:48:35.455865 master-0 kubenswrapper[29097]: I0312 18:48:35.455817 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-fkj7f"] Mar 12 18:48:35.553266 master-0 kubenswrapper[29097]: I0312 18:48:35.552800 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-tx4kq"] Mar 12 18:48:35.554617 master-0 kubenswrapper[29097]: I0312 18:48:35.554570 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-tx4kq" Mar 12 18:48:35.570388 master-0 kubenswrapper[29097]: I0312 18:48:35.569653 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-tx4kq"] Mar 12 18:48:35.582465 master-0 kubenswrapper[29097]: I0312 18:48:35.582410 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9671-account-create-update-hzxlx"] Mar 12 18:48:35.585029 master-0 kubenswrapper[29097]: I0312 18:48:35.584971 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40cf0854-144e-474c-a1ad-e588b7df2c68-operator-scripts\") pod \"nova-api-db-create-fkj7f\" (UID: \"40cf0854-144e-474c-a1ad-e588b7df2c68\") " pod="openstack/nova-api-db-create-fkj7f" Mar 12 18:48:35.585253 master-0 kubenswrapper[29097]: I0312 18:48:35.585222 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmv4r\" (UniqueName: \"kubernetes.io/projected/40cf0854-144e-474c-a1ad-e588b7df2c68-kube-api-access-rmv4r\") pod \"nova-api-db-create-fkj7f\" (UID: \"40cf0854-144e-474c-a1ad-e588b7df2c68\") " pod="openstack/nova-api-db-create-fkj7f" Mar 12 18:48:35.585323 master-0 kubenswrapper[29097]: I0312 18:48:35.585290 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5q6vf\" (UniqueName: \"kubernetes.io/projected/1b23c624-7392-41d0-a909-67f71b5e16ce-kube-api-access-5q6vf\") pod \"nova-cell0-db-create-tx4kq\" (UID: \"1b23c624-7392-41d0-a909-67f71b5e16ce\") " pod="openstack/nova-cell0-db-create-tx4kq" Mar 12 18:48:35.585376 master-0 kubenswrapper[29097]: I0312 18:48:35.585334 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b23c624-7392-41d0-a909-67f71b5e16ce-operator-scripts\") pod \"nova-cell0-db-create-tx4kq\" (UID: \"1b23c624-7392-41d0-a909-67f71b5e16ce\") " pod="openstack/nova-cell0-db-create-tx4kq" Mar 12 18:48:35.606125 master-0 kubenswrapper[29097]: I0312 18:48:35.606064 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9671-account-create-update-hzxlx" Mar 12 18:48:35.610062 master-0 kubenswrapper[29097]: I0312 18:48:35.609738 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 12 18:48:35.629588 master-0 kubenswrapper[29097]: I0312 18:48:35.628639 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9671-account-create-update-hzxlx"] Mar 12 18:48:35.667556 master-0 kubenswrapper[29097]: I0312 18:48:35.666492 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-trlwx"] Mar 12 18:48:35.669203 master-0 kubenswrapper[29097]: I0312 18:48:35.668946 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-trlwx" Mar 12 18:48:35.687423 master-0 kubenswrapper[29097]: I0312 18:48:35.687210 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q6vf\" (UniqueName: \"kubernetes.io/projected/1b23c624-7392-41d0-a909-67f71b5e16ce-kube-api-access-5q6vf\") pod \"nova-cell0-db-create-tx4kq\" (UID: \"1b23c624-7392-41d0-a909-67f71b5e16ce\") " pod="openstack/nova-cell0-db-create-tx4kq" Mar 12 18:48:35.687423 master-0 kubenswrapper[29097]: I0312 18:48:35.687287 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b23c624-7392-41d0-a909-67f71b5e16ce-operator-scripts\") pod \"nova-cell0-db-create-tx4kq\" (UID: \"1b23c624-7392-41d0-a909-67f71b5e16ce\") " pod="openstack/nova-cell0-db-create-tx4kq" Mar 12 18:48:35.687423 master-0 kubenswrapper[29097]: I0312 18:48:35.687356 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40cf0854-144e-474c-a1ad-e588b7df2c68-operator-scripts\") pod \"nova-api-db-create-fkj7f\" (UID: \"40cf0854-144e-474c-a1ad-e588b7df2c68\") " pod="openstack/nova-api-db-create-fkj7f" Mar 12 18:48:35.688057 master-0 kubenswrapper[29097]: I0312 18:48:35.687569 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmv4r\" (UniqueName: \"kubernetes.io/projected/40cf0854-144e-474c-a1ad-e588b7df2c68-kube-api-access-rmv4r\") pod \"nova-api-db-create-fkj7f\" (UID: \"40cf0854-144e-474c-a1ad-e588b7df2c68\") " pod="openstack/nova-api-db-create-fkj7f" Mar 12 18:48:35.699205 master-0 kubenswrapper[29097]: I0312 18:48:35.694385 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40cf0854-144e-474c-a1ad-e588b7df2c68-operator-scripts\") pod \"nova-api-db-create-fkj7f\" (UID: 
\"40cf0854-144e-474c-a1ad-e588b7df2c68\") " pod="openstack/nova-api-db-create-fkj7f" Mar 12 18:48:35.699205 master-0 kubenswrapper[29097]: I0312 18:48:35.694445 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-trlwx"] Mar 12 18:48:35.699205 master-0 kubenswrapper[29097]: I0312 18:48:35.697815 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b23c624-7392-41d0-a909-67f71b5e16ce-operator-scripts\") pod \"nova-cell0-db-create-tx4kq\" (UID: \"1b23c624-7392-41d0-a909-67f71b5e16ce\") " pod="openstack/nova-cell0-db-create-tx4kq" Mar 12 18:48:35.710990 master-0 kubenswrapper[29097]: I0312 18:48:35.710926 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmv4r\" (UniqueName: \"kubernetes.io/projected/40cf0854-144e-474c-a1ad-e588b7df2c68-kube-api-access-rmv4r\") pod \"nova-api-db-create-fkj7f\" (UID: \"40cf0854-144e-474c-a1ad-e588b7df2c68\") " pod="openstack/nova-api-db-create-fkj7f" Mar 12 18:48:35.714937 master-0 kubenswrapper[29097]: I0312 18:48:35.714861 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q6vf\" (UniqueName: \"kubernetes.io/projected/1b23c624-7392-41d0-a909-67f71b5e16ce-kube-api-access-5q6vf\") pod \"nova-cell0-db-create-tx4kq\" (UID: \"1b23c624-7392-41d0-a909-67f71b5e16ce\") " pod="openstack/nova-cell0-db-create-tx4kq" Mar 12 18:48:35.759540 master-0 kubenswrapper[29097]: I0312 18:48:35.758731 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-703e-account-create-update-wl26r"] Mar 12 18:48:35.764358 master-0 kubenswrapper[29097]: I0312 18:48:35.760214 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-703e-account-create-update-wl26r" Mar 12 18:48:35.776689 master-0 kubenswrapper[29097]: I0312 18:48:35.775713 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-fkj7f" Mar 12 18:48:35.776689 master-0 kubenswrapper[29097]: I0312 18:48:35.776658 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 12 18:48:35.783553 master-0 kubenswrapper[29097]: I0312 18:48:35.781636 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-703e-account-create-update-wl26r"] Mar 12 18:48:35.791498 master-0 kubenswrapper[29097]: I0312 18:48:35.791429 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9855cba-3217-439f-8c5e-32b3064a3330-operator-scripts\") pod \"nova-api-9671-account-create-update-hzxlx\" (UID: \"a9855cba-3217-439f-8c5e-32b3064a3330\") " pod="openstack/nova-api-9671-account-create-update-hzxlx" Mar 12 18:48:35.792571 master-0 kubenswrapper[29097]: I0312 18:48:35.792531 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8136b672-acef-4cb2-8316-358d20c26489-operator-scripts\") pod \"nova-cell1-db-create-trlwx\" (UID: \"8136b672-acef-4cb2-8316-358d20c26489\") " pod="openstack/nova-cell1-db-create-trlwx" Mar 12 18:48:35.792698 master-0 kubenswrapper[29097]: I0312 18:48:35.792631 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pznt\" (UniqueName: \"kubernetes.io/projected/8136b672-acef-4cb2-8316-358d20c26489-kube-api-access-2pznt\") pod \"nova-cell1-db-create-trlwx\" (UID: \"8136b672-acef-4cb2-8316-358d20c26489\") " pod="openstack/nova-cell1-db-create-trlwx" Mar 12 18:48:35.792806 master-0 kubenswrapper[29097]: I0312 18:48:35.792777 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx7ft\" (UniqueName: 
\"kubernetes.io/projected/a9855cba-3217-439f-8c5e-32b3064a3330-kube-api-access-gx7ft\") pod \"nova-api-9671-account-create-update-hzxlx\" (UID: \"a9855cba-3217-439f-8c5e-32b3064a3330\") " pod="openstack/nova-api-9671-account-create-update-hzxlx" Mar 12 18:48:35.895541 master-0 kubenswrapper[29097]: I0312 18:48:35.895396 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx7ft\" (UniqueName: \"kubernetes.io/projected/a9855cba-3217-439f-8c5e-32b3064a3330-kube-api-access-gx7ft\") pod \"nova-api-9671-account-create-update-hzxlx\" (UID: \"a9855cba-3217-439f-8c5e-32b3064a3330\") " pod="openstack/nova-api-9671-account-create-update-hzxlx" Mar 12 18:48:35.896276 master-0 kubenswrapper[29097]: I0312 18:48:35.896172 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g86ln\" (UniqueName: \"kubernetes.io/projected/62db1d8c-39c2-47ea-bac4-e9a0d7febb99-kube-api-access-g86ln\") pod \"nova-cell0-703e-account-create-update-wl26r\" (UID: \"62db1d8c-39c2-47ea-bac4-e9a0d7febb99\") " pod="openstack/nova-cell0-703e-account-create-update-wl26r" Mar 12 18:48:35.896347 master-0 kubenswrapper[29097]: I0312 18:48:35.896289 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9855cba-3217-439f-8c5e-32b3064a3330-operator-scripts\") pod \"nova-api-9671-account-create-update-hzxlx\" (UID: \"a9855cba-3217-439f-8c5e-32b3064a3330\") " pod="openstack/nova-api-9671-account-create-update-hzxlx" Mar 12 18:48:35.898574 master-0 kubenswrapper[29097]: I0312 18:48:35.898373 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9855cba-3217-439f-8c5e-32b3064a3330-operator-scripts\") pod \"nova-api-9671-account-create-update-hzxlx\" (UID: \"a9855cba-3217-439f-8c5e-32b3064a3330\") " 
pod="openstack/nova-api-9671-account-create-update-hzxlx" Mar 12 18:48:35.900789 master-0 kubenswrapper[29097]: I0312 18:48:35.899274 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8136b672-acef-4cb2-8316-358d20c26489-operator-scripts\") pod \"nova-cell1-db-create-trlwx\" (UID: \"8136b672-acef-4cb2-8316-358d20c26489\") " pod="openstack/nova-cell1-db-create-trlwx" Mar 12 18:48:35.900789 master-0 kubenswrapper[29097]: I0312 18:48:35.899849 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62db1d8c-39c2-47ea-bac4-e9a0d7febb99-operator-scripts\") pod \"nova-cell0-703e-account-create-update-wl26r\" (UID: \"62db1d8c-39c2-47ea-bac4-e9a0d7febb99\") " pod="openstack/nova-cell0-703e-account-create-update-wl26r" Mar 12 18:48:35.900789 master-0 kubenswrapper[29097]: I0312 18:48:35.899965 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pznt\" (UniqueName: \"kubernetes.io/projected/8136b672-acef-4cb2-8316-358d20c26489-kube-api-access-2pznt\") pod \"nova-cell1-db-create-trlwx\" (UID: \"8136b672-acef-4cb2-8316-358d20c26489\") " pod="openstack/nova-cell1-db-create-trlwx" Mar 12 18:48:35.904966 master-0 kubenswrapper[29097]: I0312 18:48:35.904915 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8136b672-acef-4cb2-8316-358d20c26489-operator-scripts\") pod \"nova-cell1-db-create-trlwx\" (UID: \"8136b672-acef-4cb2-8316-358d20c26489\") " pod="openstack/nova-cell1-db-create-trlwx" Mar 12 18:48:35.912749 master-0 kubenswrapper[29097]: I0312 18:48:35.912034 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-tx4kq" Mar 12 18:48:35.926383 master-0 kubenswrapper[29097]: I0312 18:48:35.925337 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx7ft\" (UniqueName: \"kubernetes.io/projected/a9855cba-3217-439f-8c5e-32b3064a3330-kube-api-access-gx7ft\") pod \"nova-api-9671-account-create-update-hzxlx\" (UID: \"a9855cba-3217-439f-8c5e-32b3064a3330\") " pod="openstack/nova-api-9671-account-create-update-hzxlx" Mar 12 18:48:35.926709 master-0 kubenswrapper[29097]: I0312 18:48:35.926580 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pznt\" (UniqueName: \"kubernetes.io/projected/8136b672-acef-4cb2-8316-358d20c26489-kube-api-access-2pznt\") pod \"nova-cell1-db-create-trlwx\" (UID: \"8136b672-acef-4cb2-8316-358d20c26489\") " pod="openstack/nova-cell1-db-create-trlwx" Mar 12 18:48:35.953009 master-0 kubenswrapper[29097]: I0312 18:48:35.952885 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9671-account-create-update-hzxlx" Mar 12 18:48:35.959047 master-0 kubenswrapper[29097]: I0312 18:48:35.958759 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-67d1-account-create-update-pllmv"] Mar 12 18:48:35.964378 master-0 kubenswrapper[29097]: I0312 18:48:35.961849 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-67d1-account-create-update-pllmv" Mar 12 18:48:35.976051 master-0 kubenswrapper[29097]: I0312 18:48:35.967151 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 12 18:48:35.986857 master-0 kubenswrapper[29097]: I0312 18:48:35.981627 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-67d1-account-create-update-pllmv"] Mar 12 18:48:36.001604 master-0 kubenswrapper[29097]: I0312 18:48:35.992828 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-trlwx" Mar 12 18:48:36.002337 master-0 kubenswrapper[29097]: I0312 18:48:36.002295 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxxnm\" (UniqueName: \"kubernetes.io/projected/88b711ca-1327-4ce4-83c1-c5bf5b42cc5a-kube-api-access-mxxnm\") pod \"nova-cell1-67d1-account-create-update-pllmv\" (UID: \"88b711ca-1327-4ce4-83c1-c5bf5b42cc5a\") " pod="openstack/nova-cell1-67d1-account-create-update-pllmv" Mar 12 18:48:36.002409 master-0 kubenswrapper[29097]: I0312 18:48:36.002386 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g86ln\" (UniqueName: \"kubernetes.io/projected/62db1d8c-39c2-47ea-bac4-e9a0d7febb99-kube-api-access-g86ln\") pod \"nova-cell0-703e-account-create-update-wl26r\" (UID: \"62db1d8c-39c2-47ea-bac4-e9a0d7febb99\") " pod="openstack/nova-cell0-703e-account-create-update-wl26r" Mar 12 18:48:36.003104 master-0 kubenswrapper[29097]: I0312 18:48:36.003073 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62db1d8c-39c2-47ea-bac4-e9a0d7febb99-operator-scripts\") pod \"nova-cell0-703e-account-create-update-wl26r\" (UID: \"62db1d8c-39c2-47ea-bac4-e9a0d7febb99\") " 
pod="openstack/nova-cell0-703e-account-create-update-wl26r" Mar 12 18:48:36.003170 master-0 kubenswrapper[29097]: I0312 18:48:36.003148 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b711ca-1327-4ce4-83c1-c5bf5b42cc5a-operator-scripts\") pod \"nova-cell1-67d1-account-create-update-pllmv\" (UID: \"88b711ca-1327-4ce4-83c1-c5bf5b42cc5a\") " pod="openstack/nova-cell1-67d1-account-create-update-pllmv" Mar 12 18:48:36.012642 master-0 kubenswrapper[29097]: I0312 18:48:36.005500 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62db1d8c-39c2-47ea-bac4-e9a0d7febb99-operator-scripts\") pod \"nova-cell0-703e-account-create-update-wl26r\" (UID: \"62db1d8c-39c2-47ea-bac4-e9a0d7febb99\") " pod="openstack/nova-cell0-703e-account-create-update-wl26r" Mar 12 18:48:36.026129 master-0 kubenswrapper[29097]: I0312 18:48:36.025552 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g86ln\" (UniqueName: \"kubernetes.io/projected/62db1d8c-39c2-47ea-bac4-e9a0d7febb99-kube-api-access-g86ln\") pod \"nova-cell0-703e-account-create-update-wl26r\" (UID: \"62db1d8c-39c2-47ea-bac4-e9a0d7febb99\") " pod="openstack/nova-cell0-703e-account-create-update-wl26r" Mar 12 18:48:36.106227 master-0 kubenswrapper[29097]: I0312 18:48:36.106177 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b711ca-1327-4ce4-83c1-c5bf5b42cc5a-operator-scripts\") pod \"nova-cell1-67d1-account-create-update-pllmv\" (UID: \"88b711ca-1327-4ce4-83c1-c5bf5b42cc5a\") " pod="openstack/nova-cell1-67d1-account-create-update-pllmv" Mar 12 18:48:36.106501 master-0 kubenswrapper[29097]: I0312 18:48:36.106234 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mxxnm\" (UniqueName: \"kubernetes.io/projected/88b711ca-1327-4ce4-83c1-c5bf5b42cc5a-kube-api-access-mxxnm\") pod \"nova-cell1-67d1-account-create-update-pllmv\" (UID: \"88b711ca-1327-4ce4-83c1-c5bf5b42cc5a\") " pod="openstack/nova-cell1-67d1-account-create-update-pllmv" Mar 12 18:48:36.107582 master-0 kubenswrapper[29097]: I0312 18:48:36.107277 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b711ca-1327-4ce4-83c1-c5bf5b42cc5a-operator-scripts\") pod \"nova-cell1-67d1-account-create-update-pllmv\" (UID: \"88b711ca-1327-4ce4-83c1-c5bf5b42cc5a\") " pod="openstack/nova-cell1-67d1-account-create-update-pllmv" Mar 12 18:48:36.114857 master-0 kubenswrapper[29097]: I0312 18:48:36.113844 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-703e-account-create-update-wl26r" Mar 12 18:48:36.124777 master-0 kubenswrapper[29097]: I0312 18:48:36.124502 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxxnm\" (UniqueName: \"kubernetes.io/projected/88b711ca-1327-4ce4-83c1-c5bf5b42cc5a-kube-api-access-mxxnm\") pod \"nova-cell1-67d1-account-create-update-pllmv\" (UID: \"88b711ca-1327-4ce4-83c1-c5bf5b42cc5a\") " pod="openstack/nova-cell1-67d1-account-create-update-pllmv" Mar 12 18:48:36.371366 master-0 kubenswrapper[29097]: I0312 18:48:36.371289 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-67d1-account-create-update-pllmv" Mar 12 18:48:37.698567 master-0 kubenswrapper[29097]: E0312 18:48:37.698493 29097 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8 is running failed: container process not found" containerID="90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8" cmd=["/bin/true"] Mar 12 18:48:37.699388 master-0 kubenswrapper[29097]: E0312 18:48:37.698506 29097 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8 is running failed: container process not found" containerID="90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8" cmd=["/bin/true"] Mar 12 18:48:37.699501 master-0 kubenswrapper[29097]: E0312 18:48:37.699450 29097 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8 is running failed: container process not found" containerID="90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8" cmd=["/bin/true"] Mar 12 18:48:37.699890 master-0 kubenswrapper[29097]: E0312 18:48:37.699833 29097 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8 is running failed: container process not found" containerID="90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8" cmd=["/bin/true"] Mar 12 18:48:37.700131 master-0 kubenswrapper[29097]: E0312 18:48:37.699982 29097 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of 90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8 is running failed: container process not found" containerID="90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8" cmd=["/bin/true"]
Mar 12 18:48:37.700131 master-0 kubenswrapper[29097]: E0312 18:48:37.700055 29097 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8 is running failed: container process not found" probeType="Liveness" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" podUID="87a19dc7-5415-4d3d-a22e-9e2524a67e38" containerName="ironic-neutron-agent"
Mar 12 18:48:37.700244 master-0 kubenswrapper[29097]: E0312 18:48:37.700170 29097 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8 is running failed: container process not found" containerID="90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8" cmd=["/bin/true"]
Mar 12 18:48:37.700244 master-0 kubenswrapper[29097]: E0312 18:48:37.700209 29097 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8 is running failed: container process not found" probeType="Readiness" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" podUID="87a19dc7-5415-4d3d-a22e-9e2524a67e38" containerName="ironic-neutron-agent"
Mar 12 18:48:37.821142 master-0 kubenswrapper[29097]: I0312 18:48:37.821037 29097 generic.go:334] "Generic (PLEG): container finished" podID="1b2854ab-857a-4029-b03e-470c4452693e" containerID="1d4d3e82ce5a17f17c898eb5b4ebbffec1ae77f40a3c1074178fa2a9cb667403" exitCode=143
Mar 12 18:48:37.821142 master-0 kubenswrapper[29097]: I0312 18:48:37.821091 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-55ccfbf469-qsbxt" event={"ID":"1b2854ab-857a-4029-b03e-470c4452693e","Type":"ContainerDied","Data":"1d4d3e82ce5a17f17c898eb5b4ebbffec1ae77f40a3c1074178fa2a9cb667403"}
Mar 12 18:48:41.058639 master-0 kubenswrapper[29097]: I0312 18:48:41.058490 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-55ccfbf469-qsbxt"
Mar 12 18:48:41.126293 master-0 kubenswrapper[29097]: I0312 18:48:41.126187 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhmlp\" (UniqueName: \"kubernetes.io/projected/1b2854ab-857a-4029-b03e-470c4452693e-kube-api-access-fhmlp\") pod \"1b2854ab-857a-4029-b03e-470c4452693e\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") "
Mar 12 18:48:41.126293 master-0 kubenswrapper[29097]: I0312 18:48:41.126251 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1b2854ab-857a-4029-b03e-470c4452693e-etc-podinfo\") pod \"1b2854ab-857a-4029-b03e-470c4452693e\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") "
Mar 12 18:48:41.126293 master-0 kubenswrapper[29097]: I0312 18:48:41.126309 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-combined-ca-bundle\") pod \"1b2854ab-857a-4029-b03e-470c4452693e\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") "
Mar 12 18:48:41.126667 master-0 kubenswrapper[29097]: I0312 18:48:41.126327 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-config-data-custom\") pod \"1b2854ab-857a-4029-b03e-470c4452693e\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") "
Mar 12 18:48:41.126667 master-0 kubenswrapper[29097]: I0312 18:48:41.126356 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b2854ab-857a-4029-b03e-470c4452693e-logs\") pod \"1b2854ab-857a-4029-b03e-470c4452693e\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") "
Mar 12 18:48:41.126667 master-0 kubenswrapper[29097]: I0312 18:48:41.126424 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-scripts\") pod \"1b2854ab-857a-4029-b03e-470c4452693e\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") "
Mar 12 18:48:41.126667 master-0 kubenswrapper[29097]: I0312 18:48:41.126446 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-config-data\") pod \"1b2854ab-857a-4029-b03e-470c4452693e\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") "
Mar 12 18:48:41.126667 master-0 kubenswrapper[29097]: I0312 18:48:41.126495 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1b2854ab-857a-4029-b03e-470c4452693e-config-data-merged\") pod \"1b2854ab-857a-4029-b03e-470c4452693e\" (UID: \"1b2854ab-857a-4029-b03e-470c4452693e\") "
Mar 12 18:48:41.128201 master-0 kubenswrapper[29097]: I0312 18:48:41.127802 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b2854ab-857a-4029-b03e-470c4452693e-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "1b2854ab-857a-4029-b03e-470c4452693e" (UID: "1b2854ab-857a-4029-b03e-470c4452693e"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 18:48:41.129905 master-0 kubenswrapper[29097]: I0312 18:48:41.129858 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b2854ab-857a-4029-b03e-470c4452693e-logs" (OuterVolumeSpecName: "logs") pod "1b2854ab-857a-4029-b03e-470c4452693e" (UID: "1b2854ab-857a-4029-b03e-470c4452693e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 18:48:41.131649 master-0 kubenswrapper[29097]: I0312 18:48:41.131569 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b2854ab-857a-4029-b03e-470c4452693e-kube-api-access-fhmlp" (OuterVolumeSpecName: "kube-api-access-fhmlp") pod "1b2854ab-857a-4029-b03e-470c4452693e" (UID: "1b2854ab-857a-4029-b03e-470c4452693e"). InnerVolumeSpecName "kube-api-access-fhmlp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:48:41.151236 master-0 kubenswrapper[29097]: I0312 18:48:41.151095 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1b2854ab-857a-4029-b03e-470c4452693e" (UID: "1b2854ab-857a-4029-b03e-470c4452693e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:41.151621 master-0 kubenswrapper[29097]: I0312 18:48:41.151582 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-scripts" (OuterVolumeSpecName: "scripts") pod "1b2854ab-857a-4029-b03e-470c4452693e" (UID: "1b2854ab-857a-4029-b03e-470c4452693e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:41.179939 master-0 kubenswrapper[29097]: I0312 18:48:41.179881 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1b2854ab-857a-4029-b03e-470c4452693e-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "1b2854ab-857a-4029-b03e-470c4452693e" (UID: "1b2854ab-857a-4029-b03e-470c4452693e"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Mar 12 18:48:41.203757 master-0 kubenswrapper[29097]: I0312 18:48:41.203688 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-config-data" (OuterVolumeSpecName: "config-data") pod "1b2854ab-857a-4029-b03e-470c4452693e" (UID: "1b2854ab-857a-4029-b03e-470c4452693e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:41.236858 master-0 kubenswrapper[29097]: I0312 18:48:41.228974 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:41.236858 master-0 kubenswrapper[29097]: I0312 18:48:41.229011 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:41.236858 master-0 kubenswrapper[29097]: I0312 18:48:41.229024 29097 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1b2854ab-857a-4029-b03e-470c4452693e-config-data-merged\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:41.236858 master-0 kubenswrapper[29097]: I0312 18:48:41.229032 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhmlp\" (UniqueName: \"kubernetes.io/projected/1b2854ab-857a-4029-b03e-470c4452693e-kube-api-access-fhmlp\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:41.236858 master-0 kubenswrapper[29097]: I0312 18:48:41.229041 29097 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1b2854ab-857a-4029-b03e-470c4452693e-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:41.236858 master-0 kubenswrapper[29097]: I0312 18:48:41.229049 29097 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-config-data-custom\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:41.236858 master-0 kubenswrapper[29097]: I0312 18:48:41.229057 29097 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b2854ab-857a-4029-b03e-470c4452693e-logs\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:41.271597 master-0 kubenswrapper[29097]: I0312 18:48:41.270209 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b2854ab-857a-4029-b03e-470c4452693e" (UID: "1b2854ab-857a-4029-b03e-470c4452693e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:41.353424 master-0 kubenswrapper[29097]: I0312 18:48:41.338400 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2854ab-857a-4029-b03e-470c4452693e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:41.401077 master-0 kubenswrapper[29097]: I0312 18:48:41.401025 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5664b69d46-kq48m"
Mar 12 18:48:41.406006 master-0 kubenswrapper[29097]: I0312 18:48:41.404426 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5664b69d46-kq48m"
Mar 12 18:48:41.554711 master-0 kubenswrapper[29097]: I0312 18:48:41.548057 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6b99b5d8f4-vzr8p"]
Mar 12 18:48:41.554711 master-0 kubenswrapper[29097]: I0312 18:48:41.548336 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6b99b5d8f4-vzr8p" podUID="2a88a474-9ce7-41df-9e1b-b1d78fe4c476" containerName="placement-log" containerID="cri-o://c52fc3dc2c25aed0273af45409d12c36842ca54255b24b4af72e10290786131a" gracePeriod=30
Mar 12 18:48:41.554711 master-0 kubenswrapper[29097]: I0312 18:48:41.548797 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6b99b5d8f4-vzr8p" podUID="2a88a474-9ce7-41df-9e1b-b1d78fe4c476" containerName="placement-api" containerID="cri-o://d4e28b6422d75d9d0d2d71ea5a2c1bdb041052b43c3dee32cd9728e1a9f073e1" gracePeriod=30
Mar 12 18:48:41.881903 master-0 kubenswrapper[29097]: I0312 18:48:41.881854 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-55ccfbf469-qsbxt" event={"ID":"1b2854ab-857a-4029-b03e-470c4452693e","Type":"ContainerDied","Data":"44cd5045eef6be158a58220799de6c214609df152467126f9f0ac639ed8c2b88"}
Mar 12 18:48:41.882213 master-0 kubenswrapper[29097]: I0312 18:48:41.882195 29097 scope.go:117] "RemoveContainer" containerID="93a537970596b95a91a6486be6009923ac1cd4bce9add1b04addc18f1df3685e"
Mar 12 18:48:41.882440 master-0 kubenswrapper[29097]: I0312 18:48:41.882421 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-55ccfbf469-qsbxt"
Mar 12 18:48:41.888959 master-0 kubenswrapper[29097]: I0312 18:48:41.888910 29097 generic.go:334] "Generic (PLEG): container finished" podID="2a88a474-9ce7-41df-9e1b-b1d78fe4c476" containerID="c52fc3dc2c25aed0273af45409d12c36842ca54255b24b4af72e10290786131a" exitCode=143
Mar 12 18:48:41.889083 master-0 kubenswrapper[29097]: I0312 18:48:41.889014 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b99b5d8f4-vzr8p" event={"ID":"2a88a474-9ce7-41df-9e1b-b1d78fe4c476","Type":"ContainerDied","Data":"c52fc3dc2c25aed0273af45409d12c36842ca54255b24b4af72e10290786131a"}
Mar 12 18:48:41.890605 master-0 kubenswrapper[29097]: I0312 18:48:41.890576 29097 generic.go:334] "Generic (PLEG): container finished" podID="87a19dc7-5415-4d3d-a22e-9e2524a67e38" containerID="90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8" exitCode=1
Mar 12 18:48:41.891367 master-0 kubenswrapper[29097]: I0312 18:48:41.891305 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" event={"ID":"87a19dc7-5415-4d3d-a22e-9e2524a67e38","Type":"ContainerDied","Data":"90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8"}
Mar 12 18:48:41.892659 master-0 kubenswrapper[29097]: I0312 18:48:41.892569 29097 scope.go:117] "RemoveContainer" containerID="90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8"
Mar 12 18:48:41.948300 master-0 kubenswrapper[29097]: I0312 18:48:41.948249 29097 scope.go:117] "RemoveContainer" containerID="1d4d3e82ce5a17f17c898eb5b4ebbffec1ae77f40a3c1074178fa2a9cb667403"
Mar 12 18:48:42.001344 master-0 kubenswrapper[29097]: E0312 18:48:42.000871 29097 kuberuntime_gc.go:389] "Failed to remove container log dead symlink" err="remove /var/log/containers/ironic-55ccfbf469-qsbxt_openstack_ironic-api-log-1d4d3e82ce5a17f17c898eb5b4ebbffec1ae77f40a3c1074178fa2a9cb667403.log: no such file or directory" path="/var/log/containers/ironic-55ccfbf469-qsbxt_openstack_ironic-api-log-1d4d3e82ce5a17f17c898eb5b4ebbffec1ae77f40a3c1074178fa2a9cb667403.log"
Mar 12 18:48:42.019084 master-0 kubenswrapper[29097]: I0312 18:48:42.019013 29097 scope.go:117] "RemoveContainer" containerID="b1b3f7b56278d3dc5db232f790596113a3c5ac1eeec6727f8c1ca9278a4cb759"
Mar 12 18:48:42.080313 master-0 kubenswrapper[29097]: I0312 18:48:42.080275 29097 scope.go:117] "RemoveContainer" containerID="1bf4107df195448536c691333d2c9fb8c90be7fb80d6a5739f9adbca0f4a5df7"
Mar 12 18:48:42.551566 master-0 kubenswrapper[29097]: I0312 18:48:42.551086 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-703e-account-create-update-wl26r"]
Mar 12 18:48:42.583366 master-0 kubenswrapper[29097]: I0312 18:48:42.582503 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-67d1-account-create-update-pllmv"]
Mar 12 18:48:42.594571 master-0 kubenswrapper[29097]: I0312 18:48:42.594266 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-fkj7f"]
Mar 12 18:48:42.672201 master-0 kubenswrapper[29097]: I0312 18:48:42.672148 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-pq2f7"]
Mar 12 18:48:42.713485 master-0 kubenswrapper[29097]: I0312 18:48:42.711734 29097 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc"
Mar 12 18:48:42.816660 master-0 kubenswrapper[29097]: I0312 18:48:42.791279 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-trlwx"]
Mar 12 18:48:42.816660 master-0 kubenswrapper[29097]: I0312 18:48:42.803891 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-tx4kq"]
Mar 12 18:48:42.897735 master-0 kubenswrapper[29097]: I0312 18:48:42.897611 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-55ccfbf469-qsbxt"]
Mar 12 18:48:42.984989 master-0 kubenswrapper[29097]: I0312 18:48:42.970620 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9671-account-create-update-hzxlx"]
Mar 12 18:48:43.048952 master-0 kubenswrapper[29097]: I0312 18:48:43.048200 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-55ccfbf469-qsbxt"]
Mar 12 18:48:43.059631 master-0 kubenswrapper[29097]: I0312 18:48:43.059573 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tx4kq" event={"ID":"1b23c624-7392-41d0-a909-67f71b5e16ce","Type":"ContainerStarted","Data":"e80ef4e09727d54021f5d89a165f2b924d94bd9da6f83e5eb068d23720413866"}
Mar 12 18:48:43.076963 master-0 kubenswrapper[29097]: I0312 18:48:43.076699 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-67d1-account-create-update-pllmv" event={"ID":"88b711ca-1327-4ce4-83c1-c5bf5b42cc5a","Type":"ContainerStarted","Data":"8b4db7ac9c6f5cf6f54614aec01ee36477ee990b6716ca019ebfe83f826531de"}
Mar 12 18:48:43.079615 master-0 kubenswrapper[29097]: I0312 18:48:43.079579 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c9a54b83-be30-4fdb-a23c-1ad2ce020453","Type":"ContainerStarted","Data":"38441558dd8f03ae3a2c5d761262fb24be694d5231103048feba135b8c8627f1"}
Mar 12 18:48:43.086991 master-0 kubenswrapper[29097]: I0312 18:48:43.086941 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fkj7f" event={"ID":"40cf0854-144e-474c-a1ad-e588b7df2c68","Type":"ContainerStarted","Data":"35d69179bfb51522b9e625661da610476d5315913eb88086f93b8e0c346e3cda"}
Mar 12 18:48:43.091637 master-0 kubenswrapper[29097]: I0312 18:48:43.091537 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-pq2f7" event={"ID":"15059f12-694b-4ece-9321-0d18e7c95c04","Type":"ContainerStarted","Data":"a55c0b2fc5de4998bde7c4263cbe72222f8be86ebbfd70794e32c56efd858c47"}
Mar 12 18:48:43.100931 master-0 kubenswrapper[29097]: I0312 18:48:43.100750 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9671-account-create-update-hzxlx" event={"ID":"a9855cba-3217-439f-8c5e-32b3064a3330","Type":"ContainerStarted","Data":"ec6e096befd3e0c4abc4bde614e7f9bffa14d8545046307689c0243cc52abe02"}
Mar 12 18:48:43.102428 master-0 kubenswrapper[29097]: I0312 18:48:43.102399 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-trlwx" event={"ID":"8136b672-acef-4cb2-8316-358d20c26489","Type":"ContainerStarted","Data":"ebd9646724dbf9defeaa50f59db5a635076ad2f063b971666666e1309dc4242f"}
Mar 12 18:48:43.103378 master-0 kubenswrapper[29097]: I0312 18:48:43.103345 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-703e-account-create-update-wl26r" event={"ID":"62db1d8c-39c2-47ea-bac4-e9a0d7febb99","Type":"ContainerStarted","Data":"28922dcec81e97fda23898b583a8c7cb53b310a10484b23b5552f11b37b3f680"}
Mar 12 18:48:43.112293 master-0 kubenswrapper[29097]: I0312 18:48:43.112225 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=5.76466748 podStartE2EDuration="23.112210669s" podCreationTimestamp="2026-03-12 18:48:20 +0000 UTC" firstStartedPulling="2026-03-12 18:48:23.898412681 +0000 UTC m=+1143.452392768" lastFinishedPulling="2026-03-12 18:48:41.24595586 +0000 UTC m=+1160.799935957" observedRunningTime="2026-03-12 18:48:43.105617356 +0000 UTC m=+1162.659597453" watchObservedRunningTime="2026-03-12 18:48:43.112210669 +0000 UTC m=+1162.666190756"
Mar 12 18:48:43.145032 master-0 kubenswrapper[29097]: I0312 18:48:43.144878 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" event={"ID":"87a19dc7-5415-4d3d-a22e-9e2524a67e38","Type":"ContainerStarted","Data":"b8aaa63468c9fd987f471541a3a63ebc1775a5b66d6068f313f8da7cd0af22e7"}
Mar 12 18:48:43.148002 master-0 kubenswrapper[29097]: I0312 18:48:43.147913 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc"
Mar 12 18:48:44.176789 master-0 kubenswrapper[29097]: I0312 18:48:44.176735 29097 generic.go:334] "Generic (PLEG): container finished" podID="40cf0854-144e-474c-a1ad-e588b7df2c68" containerID="375f45edf51df9c280f4073b7e1e36dcdef6e230b24c80cd7f0b7ddaa870f519" exitCode=0
Mar 12 18:48:44.177342 master-0 kubenswrapper[29097]: I0312 18:48:44.176834 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fkj7f" event={"ID":"40cf0854-144e-474c-a1ad-e588b7df2c68","Type":"ContainerDied","Data":"375f45edf51df9c280f4073b7e1e36dcdef6e230b24c80cd7f0b7ddaa870f519"}
Mar 12 18:48:44.188418 master-0 kubenswrapper[29097]: I0312 18:48:44.185590 29097 generic.go:334] "Generic (PLEG): container finished" podID="a9855cba-3217-439f-8c5e-32b3064a3330" containerID="4bf5df41e1d86492934fa83b79dc48963d75c0170075e617d8840a95808df6b8" exitCode=0
Mar 12 18:48:44.190164 master-0 kubenswrapper[29097]: I0312 18:48:44.185628 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9671-account-create-update-hzxlx" event={"ID":"a9855cba-3217-439f-8c5e-32b3064a3330","Type":"ContainerDied","Data":"4bf5df41e1d86492934fa83b79dc48963d75c0170075e617d8840a95808df6b8"}
Mar 12 18:48:44.197884 master-0 kubenswrapper[29097]: I0312 18:48:44.197158 29097 generic.go:334] "Generic (PLEG): container finished" podID="8136b672-acef-4cb2-8316-358d20c26489" containerID="9e9855b34390a438d3e7666818fbacb201478bab40e465d0f2225acef3783e3e" exitCode=0
Mar 12 18:48:44.197884 master-0 kubenswrapper[29097]: I0312 18:48:44.197237 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-trlwx" event={"ID":"8136b672-acef-4cb2-8316-358d20c26489","Type":"ContainerDied","Data":"9e9855b34390a438d3e7666818fbacb201478bab40e465d0f2225acef3783e3e"}
Mar 12 18:48:44.202716 master-0 kubenswrapper[29097]: I0312 18:48:44.202544 29097 generic.go:334] "Generic (PLEG): container finished" podID="62db1d8c-39c2-47ea-bac4-e9a0d7febb99" containerID="5e6380a9b35e480c7d01e847a864ba84f2ed02c8b3d4a6acf28ffeab8d571097" exitCode=0
Mar 12 18:48:44.202716 master-0 kubenswrapper[29097]: I0312 18:48:44.202605 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-703e-account-create-update-wl26r" event={"ID":"62db1d8c-39c2-47ea-bac4-e9a0d7febb99","Type":"ContainerDied","Data":"5e6380a9b35e480c7d01e847a864ba84f2ed02c8b3d4a6acf28ffeab8d571097"}
Mar 12 18:48:44.205763 master-0 kubenswrapper[29097]: I0312 18:48:44.205628 29097 generic.go:334] "Generic (PLEG): container finished" podID="1b23c624-7392-41d0-a909-67f71b5e16ce" containerID="5f53bfa25e7536fd4908e292b3a374bb79a94cc63ea5ca21252397fe0cb448c3" exitCode=0
Mar 12 18:48:44.206788 master-0 kubenswrapper[29097]: I0312 18:48:44.205745 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tx4kq" event={"ID":"1b23c624-7392-41d0-a909-67f71b5e16ce","Type":"ContainerDied","Data":"5f53bfa25e7536fd4908e292b3a374bb79a94cc63ea5ca21252397fe0cb448c3"}
Mar 12 18:48:44.208298 master-0 kubenswrapper[29097]: I0312 18:48:44.208266 29097 generic.go:334] "Generic (PLEG): container finished" podID="88b711ca-1327-4ce4-83c1-c5bf5b42cc5a" containerID="b0e3b06f6ad1ca72f2703de061ad962e54d5af2d1c5f16ecdbd50e7aa9a2e276" exitCode=0
Mar 12 18:48:44.209332 master-0 kubenswrapper[29097]: I0312 18:48:44.209300 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-67d1-account-create-update-pllmv" event={"ID":"88b711ca-1327-4ce4-83c1-c5bf5b42cc5a","Type":"ContainerDied","Data":"b0e3b06f6ad1ca72f2703de061ad962e54d5af2d1c5f16ecdbd50e7aa9a2e276"}
Mar 12 18:48:44.740531 master-0 kubenswrapper[29097]: I0312 18:48:44.740479 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b2854ab-857a-4029-b03e-470c4452693e" path="/var/lib/kubelet/pods/1b2854ab-857a-4029-b03e-470c4452693e/volumes"
Mar 12 18:48:45.230862 master-0 kubenswrapper[29097]: I0312 18:48:45.230040 29097 generic.go:334] "Generic (PLEG): container finished" podID="2a88a474-9ce7-41df-9e1b-b1d78fe4c476" containerID="d4e28b6422d75d9d0d2d71ea5a2c1bdb041052b43c3dee32cd9728e1a9f073e1" exitCode=0
Mar 12 18:48:45.230862 master-0 kubenswrapper[29097]: I0312 18:48:45.230285 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b99b5d8f4-vzr8p" event={"ID":"2a88a474-9ce7-41df-9e1b-b1d78fe4c476","Type":"ContainerDied","Data":"d4e28b6422d75d9d0d2d71ea5a2c1bdb041052b43c3dee32cd9728e1a9f073e1"}
Mar 12 18:48:45.726484 master-0 kubenswrapper[29097]: I0312 18:48:45.726334 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b99b5d8f4-vzr8p"
Mar 12 18:48:45.837569 master-0 kubenswrapper[29097]: I0312 18:48:45.837523 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-combined-ca-bundle\") pod \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") "
Mar 12 18:48:45.837931 master-0 kubenswrapper[29097]: I0312 18:48:45.837914 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-public-tls-certs\") pod \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") "
Mar 12 18:48:45.838167 master-0 kubenswrapper[29097]: I0312 18:48:45.838142 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-logs\") pod \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") "
Mar 12 18:48:45.838266 master-0 kubenswrapper[29097]: I0312 18:48:45.838251 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-internal-tls-certs\") pod \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") "
Mar 12 18:48:45.838380 master-0 kubenswrapper[29097]: I0312 18:48:45.838366 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-config-data\") pod \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") "
Mar 12 18:48:45.838668 master-0 kubenswrapper[29097]: I0312 18:48:45.838570 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlqsv\" (UniqueName: \"kubernetes.io/projected/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-kube-api-access-hlqsv\") pod \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") "
Mar 12 18:48:45.838915 master-0 kubenswrapper[29097]: I0312 18:48:45.838895 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-scripts\") pod \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\" (UID: \"2a88a474-9ce7-41df-9e1b-b1d78fe4c476\") "
Mar 12 18:48:45.839451 master-0 kubenswrapper[29097]: I0312 18:48:45.839410 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-logs" (OuterVolumeSpecName: "logs") pod "2a88a474-9ce7-41df-9e1b-b1d78fe4c476" (UID: "2a88a474-9ce7-41df-9e1b-b1d78fe4c476"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 18:48:45.840122 master-0 kubenswrapper[29097]: I0312 18:48:45.840103 29097 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-logs\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:45.848392 master-0 kubenswrapper[29097]: I0312 18:48:45.848356 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-scripts" (OuterVolumeSpecName: "scripts") pod "2a88a474-9ce7-41df-9e1b-b1d78fe4c476" (UID: "2a88a474-9ce7-41df-9e1b-b1d78fe4c476"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:45.851053 master-0 kubenswrapper[29097]: I0312 18:48:45.851025 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-kube-api-access-hlqsv" (OuterVolumeSpecName: "kube-api-access-hlqsv") pod "2a88a474-9ce7-41df-9e1b-b1d78fe4c476" (UID: "2a88a474-9ce7-41df-9e1b-b1d78fe4c476"). InnerVolumeSpecName "kube-api-access-hlqsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:48:45.946393 master-0 kubenswrapper[29097]: I0312 18:48:45.945746 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:45.946393 master-0 kubenswrapper[29097]: I0312 18:48:45.945780 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlqsv\" (UniqueName: \"kubernetes.io/projected/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-kube-api-access-hlqsv\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:46.062334 master-0 kubenswrapper[29097]: I0312 18:48:46.062266 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-config-data" (OuterVolumeSpecName: "config-data") pod "2a88a474-9ce7-41df-9e1b-b1d78fe4c476" (UID: "2a88a474-9ce7-41df-9e1b-b1d78fe4c476"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:46.072736 master-0 kubenswrapper[29097]: I0312 18:48:46.072683 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a88a474-9ce7-41df-9e1b-b1d78fe4c476" (UID: "2a88a474-9ce7-41df-9e1b-b1d78fe4c476"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:46.072973 master-0 kubenswrapper[29097]: I0312 18:48:46.072933 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2a88a474-9ce7-41df-9e1b-b1d78fe4c476" (UID: "2a88a474-9ce7-41df-9e1b-b1d78fe4c476"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:46.095963 master-0 kubenswrapper[29097]: I0312 18:48:46.095071 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2a88a474-9ce7-41df-9e1b-b1d78fe4c476" (UID: "2a88a474-9ce7-41df-9e1b-b1d78fe4c476"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:46.096255 master-0 kubenswrapper[29097]: I0312 18:48:46.096155 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-67d1-account-create-update-pllmv"
Mar 12 18:48:46.124043 master-0 kubenswrapper[29097]: I0312 18:48:46.123883 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-trlwx"
Mar 12 18:48:46.159771 master-0 kubenswrapper[29097]: I0312 18:48:46.159723 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b711ca-1327-4ce4-83c1-c5bf5b42cc5a-operator-scripts\") pod \"88b711ca-1327-4ce4-83c1-c5bf5b42cc5a\" (UID: \"88b711ca-1327-4ce4-83c1-c5bf5b42cc5a\") "
Mar 12 18:48:46.160413 master-0 kubenswrapper[29097]: I0312 18:48:46.160377 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88b711ca-1327-4ce4-83c1-c5bf5b42cc5a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88b711ca-1327-4ce4-83c1-c5bf5b42cc5a" (UID: "88b711ca-1327-4ce4-83c1-c5bf5b42cc5a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:48:46.160631 master-0 kubenswrapper[29097]: I0312 18:48:46.160589 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxxnm\" (UniqueName: \"kubernetes.io/projected/88b711ca-1327-4ce4-83c1-c5bf5b42cc5a-kube-api-access-mxxnm\") pod \"88b711ca-1327-4ce4-83c1-c5bf5b42cc5a\" (UID: \"88b711ca-1327-4ce4-83c1-c5bf5b42cc5a\") "
Mar 12 18:48:46.162018 master-0 kubenswrapper[29097]: I0312 18:48:46.161988 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:46.162157 master-0 kubenswrapper[29097]: I0312 18:48:46.162084 29097 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:46.162157 master-0 kubenswrapper[29097]: I0312 18:48:46.162101 29097 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:46.162157 master-0 kubenswrapper[29097]: I0312 18:48:46.162117 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b711ca-1327-4ce4-83c1-c5bf5b42cc5a-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:46.162157 master-0 kubenswrapper[29097]: I0312 18:48:46.162131 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a88a474-9ce7-41df-9e1b-b1d78fe4c476-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:46.164733 master-0 kubenswrapper[29097]: I0312 18:48:46.164643 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88b711ca-1327-4ce4-83c1-c5bf5b42cc5a-kube-api-access-mxxnm" (OuterVolumeSpecName: "kube-api-access-mxxnm") pod "88b711ca-1327-4ce4-83c1-c5bf5b42cc5a" (UID: "88b711ca-1327-4ce4-83c1-c5bf5b42cc5a"). InnerVolumeSpecName "kube-api-access-mxxnm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:48:46.264764 master-0 kubenswrapper[29097]: I0312 18:48:46.263993 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8136b672-acef-4cb2-8316-358d20c26489-operator-scripts\") pod \"8136b672-acef-4cb2-8316-358d20c26489\" (UID: \"8136b672-acef-4cb2-8316-358d20c26489\") "
Mar 12 18:48:46.264764 master-0 kubenswrapper[29097]: I0312 18:48:46.264070 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pznt\" (UniqueName: \"kubernetes.io/projected/8136b672-acef-4cb2-8316-358d20c26489-kube-api-access-2pznt\") pod \"8136b672-acef-4cb2-8316-358d20c26489\" (UID: \"8136b672-acef-4cb2-8316-358d20c26489\") "
Mar 12 18:48:46.264764 master-0 kubenswrapper[29097]: I0312 18:48:46.264635 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxxnm\" (UniqueName: \"kubernetes.io/projected/88b711ca-1327-4ce4-83c1-c5bf5b42cc5a-kube-api-access-mxxnm\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:46.266345 master-0 kubenswrapper[29097]: I0312 18:48:46.265302 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8136b672-acef-4cb2-8316-358d20c26489-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8136b672-acef-4cb2-8316-358d20c26489" (UID: "8136b672-acef-4cb2-8316-358d20c26489"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:48:46.272464 master-0 kubenswrapper[29097]: I0312 18:48:46.271943 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9671-account-create-update-hzxlx"
Mar 12 18:48:46.278888 master-0 kubenswrapper[29097]: I0312 18:48:46.278556 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-tx4kq"
Mar 12 18:48:46.281293 master-0 kubenswrapper[29097]: I0312 18:48:46.281244 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8136b672-acef-4cb2-8316-358d20c26489-kube-api-access-2pznt" (OuterVolumeSpecName: "kube-api-access-2pznt") pod "8136b672-acef-4cb2-8316-358d20c26489" (UID: "8136b672-acef-4cb2-8316-358d20c26489"). InnerVolumeSpecName "kube-api-access-2pznt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:48:46.291278 master-0 kubenswrapper[29097]: I0312 18:48:46.291219 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-pq2f7" event={"ID":"15059f12-694b-4ece-9321-0d18e7c95c04","Type":"ContainerStarted","Data":"a7597aacf9feb046c46db7226db8e45e2018f368fc2366dd3e037b08974b1823"}
Mar 12 18:48:46.294432 master-0 kubenswrapper[29097]: I0312 18:48:46.294395 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9671-account-create-update-hzxlx" event={"ID":"a9855cba-3217-439f-8c5e-32b3064a3330","Type":"ContainerDied","Data":"ec6e096befd3e0c4abc4bde614e7f9bffa14d8545046307689c0243cc52abe02"}
Mar 12 18:48:46.294432 master-0 kubenswrapper[29097]: I0312 18:48:46.294428 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec6e096befd3e0c4abc4bde614e7f9bffa14d8545046307689c0243cc52abe02"
Mar 12 18:48:46.294542 master-0 kubenswrapper[29097]: I0312 18:48:46.294473 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9671-account-create-update-hzxlx" Mar 12 18:48:46.302041 master-0 kubenswrapper[29097]: I0312 18:48:46.302004 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-trlwx" event={"ID":"8136b672-acef-4cb2-8316-358d20c26489","Type":"ContainerDied","Data":"ebd9646724dbf9defeaa50f59db5a635076ad2f063b971666666e1309dc4242f"} Mar 12 18:48:46.302041 master-0 kubenswrapper[29097]: I0312 18:48:46.302038 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebd9646724dbf9defeaa50f59db5a635076ad2f063b971666666e1309dc4242f" Mar 12 18:48:46.302133 master-0 kubenswrapper[29097]: I0312 18:48:46.302072 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-trlwx" Mar 12 18:48:46.325009 master-0 kubenswrapper[29097]: I0312 18:48:46.324950 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-703e-account-create-update-wl26r" event={"ID":"62db1d8c-39c2-47ea-bac4-e9a0d7febb99","Type":"ContainerDied","Data":"28922dcec81e97fda23898b583a8c7cb53b310a10484b23b5552f11b37b3f680"} Mar 12 18:48:46.325009 master-0 kubenswrapper[29097]: I0312 18:48:46.325006 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28922dcec81e97fda23898b583a8c7cb53b310a10484b23b5552f11b37b3f680" Mar 12 18:48:46.326331 master-0 kubenswrapper[29097]: I0312 18:48:46.326308 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-703e-account-create-update-wl26r" Mar 12 18:48:46.327555 master-0 kubenswrapper[29097]: I0312 18:48:46.327506 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-tx4kq" Mar 12 18:48:46.327635 master-0 kubenswrapper[29097]: I0312 18:48:46.327552 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tx4kq" event={"ID":"1b23c624-7392-41d0-a909-67f71b5e16ce","Type":"ContainerDied","Data":"e80ef4e09727d54021f5d89a165f2b924d94bd9da6f83e5eb068d23720413866"} Mar 12 18:48:46.327635 master-0 kubenswrapper[29097]: I0312 18:48:46.327594 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e80ef4e09727d54021f5d89a165f2b924d94bd9da6f83e5eb068d23720413866" Mar 12 18:48:46.329904 master-0 kubenswrapper[29097]: I0312 18:48:46.329879 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b99b5d8f4-vzr8p" event={"ID":"2a88a474-9ce7-41df-9e1b-b1d78fe4c476","Type":"ContainerDied","Data":"e3f0c0762369d201942b1271c0083c0ab314c0806695dd6f381c0d7433bc04f1"} Mar 12 18:48:46.329992 master-0 kubenswrapper[29097]: I0312 18:48:46.329921 29097 scope.go:117] "RemoveContainer" containerID="d4e28b6422d75d9d0d2d71ea5a2c1bdb041052b43c3dee32cd9728e1a9f073e1" Mar 12 18:48:46.329992 master-0 kubenswrapper[29097]: I0312 18:48:46.329887 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6b99b5d8f4-vzr8p" Mar 12 18:48:46.336272 master-0 kubenswrapper[29097]: I0312 18:48:46.336241 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-67d1-account-create-update-pllmv" event={"ID":"88b711ca-1327-4ce4-83c1-c5bf5b42cc5a","Type":"ContainerDied","Data":"8b4db7ac9c6f5cf6f54614aec01ee36477ee990b6716ca019ebfe83f826531de"} Mar 12 18:48:46.336814 master-0 kubenswrapper[29097]: I0312 18:48:46.336797 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b4db7ac9c6f5cf6f54614aec01ee36477ee990b6716ca019ebfe83f826531de" Mar 12 18:48:46.336913 master-0 kubenswrapper[29097]: I0312 18:48:46.336255 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-67d1-account-create-update-pllmv" Mar 12 18:48:46.337836 master-0 kubenswrapper[29097]: I0312 18:48:46.337815 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fkj7f" Mar 12 18:48:46.339931 master-0 kubenswrapper[29097]: I0312 18:48:46.339894 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fkj7f" event={"ID":"40cf0854-144e-474c-a1ad-e588b7df2c68","Type":"ContainerDied","Data":"35d69179bfb51522b9e625661da610476d5315913eb88086f93b8e0c346e3cda"} Mar 12 18:48:46.339998 master-0 kubenswrapper[29097]: I0312 18:48:46.339936 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35d69179bfb51522b9e625661da610476d5315913eb88086f93b8e0c346e3cda" Mar 12 18:48:46.367330 master-0 kubenswrapper[29097]: I0312 18:48:46.365754 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q6vf\" (UniqueName: \"kubernetes.io/projected/1b23c624-7392-41d0-a909-67f71b5e16ce-kube-api-access-5q6vf\") pod \"1b23c624-7392-41d0-a909-67f71b5e16ce\" (UID: \"1b23c624-7392-41d0-a909-67f71b5e16ce\") " Mar 
12 18:48:46.367330 master-0 kubenswrapper[29097]: I0312 18:48:46.365880 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b23c624-7392-41d0-a909-67f71b5e16ce-operator-scripts\") pod \"1b23c624-7392-41d0-a909-67f71b5e16ce\" (UID: \"1b23c624-7392-41d0-a909-67f71b5e16ce\") " Mar 12 18:48:46.367330 master-0 kubenswrapper[29097]: I0312 18:48:46.365920 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9855cba-3217-439f-8c5e-32b3064a3330-operator-scripts\") pod \"a9855cba-3217-439f-8c5e-32b3064a3330\" (UID: \"a9855cba-3217-439f-8c5e-32b3064a3330\") " Mar 12 18:48:46.367330 master-0 kubenswrapper[29097]: I0312 18:48:46.367327 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b23c624-7392-41d0-a909-67f71b5e16ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1b23c624-7392-41d0-a909-67f71b5e16ce" (UID: "1b23c624-7392-41d0-a909-67f71b5e16ce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:48:46.367656 master-0 kubenswrapper[29097]: I0312 18:48:46.367354 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx7ft\" (UniqueName: \"kubernetes.io/projected/a9855cba-3217-439f-8c5e-32b3064a3330-kube-api-access-gx7ft\") pod \"a9855cba-3217-439f-8c5e-32b3064a3330\" (UID: \"a9855cba-3217-439f-8c5e-32b3064a3330\") " Mar 12 18:48:46.367882 master-0 kubenswrapper[29097]: I0312 18:48:46.367845 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9855cba-3217-439f-8c5e-32b3064a3330-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9855cba-3217-439f-8c5e-32b3064a3330" (UID: "a9855cba-3217-439f-8c5e-32b3064a3330"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:48:46.368274 master-0 kubenswrapper[29097]: I0312 18:48:46.368228 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b23c624-7392-41d0-a909-67f71b5e16ce-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:46.368274 master-0 kubenswrapper[29097]: I0312 18:48:46.368270 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9855cba-3217-439f-8c5e-32b3064a3330-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:46.368345 master-0 kubenswrapper[29097]: I0312 18:48:46.368281 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8136b672-acef-4cb2-8316-358d20c26489-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:46.368345 master-0 kubenswrapper[29097]: I0312 18:48:46.368293 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pznt\" (UniqueName: \"kubernetes.io/projected/8136b672-acef-4cb2-8316-358d20c26489-kube-api-access-2pznt\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:46.399659 master-0 kubenswrapper[29097]: I0312 18:48:46.397859 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b23c624-7392-41d0-a909-67f71b5e16ce-kube-api-access-5q6vf" (OuterVolumeSpecName: "kube-api-access-5q6vf") pod "1b23c624-7392-41d0-a909-67f71b5e16ce" (UID: "1b23c624-7392-41d0-a909-67f71b5e16ce"). InnerVolumeSpecName "kube-api-access-5q6vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:48:46.401587 master-0 kubenswrapper[29097]: I0312 18:48:46.400562 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9855cba-3217-439f-8c5e-32b3064a3330-kube-api-access-gx7ft" (OuterVolumeSpecName: "kube-api-access-gx7ft") pod "a9855cba-3217-439f-8c5e-32b3064a3330" (UID: "a9855cba-3217-439f-8c5e-32b3064a3330"). InnerVolumeSpecName "kube-api-access-gx7ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:48:46.415321 master-0 kubenswrapper[29097]: I0312 18:48:46.414869 29097 scope.go:117] "RemoveContainer" containerID="c52fc3dc2c25aed0273af45409d12c36842ca54255b24b4af72e10290786131a" Mar 12 18:48:46.469536 master-0 kubenswrapper[29097]: I0312 18:48:46.469487 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40cf0854-144e-474c-a1ad-e588b7df2c68-operator-scripts\") pod \"40cf0854-144e-474c-a1ad-e588b7df2c68\" (UID: \"40cf0854-144e-474c-a1ad-e588b7df2c68\") " Mar 12 18:48:46.469698 master-0 kubenswrapper[29097]: I0312 18:48:46.469684 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62db1d8c-39c2-47ea-bac4-e9a0d7febb99-operator-scripts\") pod \"62db1d8c-39c2-47ea-bac4-e9a0d7febb99\" (UID: \"62db1d8c-39c2-47ea-bac4-e9a0d7febb99\") " Mar 12 18:48:46.469958 master-0 kubenswrapper[29097]: I0312 18:48:46.469943 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmv4r\" (UniqueName: \"kubernetes.io/projected/40cf0854-144e-474c-a1ad-e588b7df2c68-kube-api-access-rmv4r\") pod \"40cf0854-144e-474c-a1ad-e588b7df2c68\" (UID: \"40cf0854-144e-474c-a1ad-e588b7df2c68\") " Mar 12 18:48:46.470169 master-0 kubenswrapper[29097]: I0312 18:48:46.470114 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/40cf0854-144e-474c-a1ad-e588b7df2c68-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "40cf0854-144e-474c-a1ad-e588b7df2c68" (UID: "40cf0854-144e-474c-a1ad-e588b7df2c68"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:48:46.470243 master-0 kubenswrapper[29097]: I0312 18:48:46.470200 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62db1d8c-39c2-47ea-bac4-e9a0d7febb99-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62db1d8c-39c2-47ea-bac4-e9a0d7febb99" (UID: "62db1d8c-39c2-47ea-bac4-e9a0d7febb99"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:48:46.470310 master-0 kubenswrapper[29097]: I0312 18:48:46.470294 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g86ln\" (UniqueName: \"kubernetes.io/projected/62db1d8c-39c2-47ea-bac4-e9a0d7febb99-kube-api-access-g86ln\") pod \"62db1d8c-39c2-47ea-bac4-e9a0d7febb99\" (UID: \"62db1d8c-39c2-47ea-bac4-e9a0d7febb99\") " Mar 12 18:48:46.470836 master-0 kubenswrapper[29097]: I0312 18:48:46.470820 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx7ft\" (UniqueName: \"kubernetes.io/projected/a9855cba-3217-439f-8c5e-32b3064a3330-kube-api-access-gx7ft\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:46.470913 master-0 kubenswrapper[29097]: I0312 18:48:46.470901 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q6vf\" (UniqueName: \"kubernetes.io/projected/1b23c624-7392-41d0-a909-67f71b5e16ce-kube-api-access-5q6vf\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:46.470977 master-0 kubenswrapper[29097]: I0312 18:48:46.470967 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/40cf0854-144e-474c-a1ad-e588b7df2c68-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:46.471045 master-0 kubenswrapper[29097]: I0312 18:48:46.471035 29097 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62db1d8c-39c2-47ea-bac4-e9a0d7febb99-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:46.473093 master-0 kubenswrapper[29097]: I0312 18:48:46.473056 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62db1d8c-39c2-47ea-bac4-e9a0d7febb99-kube-api-access-g86ln" (OuterVolumeSpecName: "kube-api-access-g86ln") pod "62db1d8c-39c2-47ea-bac4-e9a0d7febb99" (UID: "62db1d8c-39c2-47ea-bac4-e9a0d7febb99"). InnerVolumeSpecName "kube-api-access-g86ln". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:48:46.473624 master-0 kubenswrapper[29097]: I0312 18:48:46.473578 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40cf0854-144e-474c-a1ad-e588b7df2c68-kube-api-access-rmv4r" (OuterVolumeSpecName: "kube-api-access-rmv4r") pod "40cf0854-144e-474c-a1ad-e588b7df2c68" (UID: "40cf0854-144e-474c-a1ad-e588b7df2c68"). InnerVolumeSpecName "kube-api-access-rmv4r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:48:46.574018 master-0 kubenswrapper[29097]: I0312 18:48:46.573955 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g86ln\" (UniqueName: \"kubernetes.io/projected/62db1d8c-39c2-47ea-bac4-e9a0d7febb99-kube-api-access-g86ln\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:46.574264 master-0 kubenswrapper[29097]: I0312 18:48:46.574069 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmv4r\" (UniqueName: \"kubernetes.io/projected/40cf0854-144e-474c-a1ad-e588b7df2c68-kube-api-access-rmv4r\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:47.012445 master-0 kubenswrapper[29097]: I0312 18:48:47.012296 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6b99b5d8f4-vzr8p"] Mar 12 18:48:47.220794 master-0 kubenswrapper[29097]: I0312 18:48:47.220648 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6b99b5d8f4-vzr8p"] Mar 12 18:48:47.299605 master-0 kubenswrapper[29097]: I0312 18:48:47.294613 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-sync-pq2f7" podStartSLOduration=12.697101707 podStartE2EDuration="15.294590706s" podCreationTimestamp="2026-03-12 18:48:32 +0000 UTC" firstStartedPulling="2026-03-12 18:48:42.616883445 +0000 UTC m=+1162.170863542" lastFinishedPulling="2026-03-12 18:48:45.214372444 +0000 UTC m=+1164.768352541" observedRunningTime="2026-03-12 18:48:47.277137064 +0000 UTC m=+1166.831117171" watchObservedRunningTime="2026-03-12 18:48:47.294590706 +0000 UTC m=+1166.848570803" Mar 12 18:48:47.391050 master-0 kubenswrapper[29097]: I0312 18:48:47.390996 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-703e-account-create-update-wl26r" Mar 12 18:48:47.392327 master-0 kubenswrapper[29097]: I0312 18:48:47.391966 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fkj7f" Mar 12 18:48:47.764269 master-0 kubenswrapper[29097]: I0312 18:48:47.764167 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" Mar 12 18:48:47.952164 master-0 kubenswrapper[29097]: I0312 18:48:47.952099 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-16afb-default-external-api-0"] Mar 12 18:48:47.952442 master-0 kubenswrapper[29097]: I0312 18:48:47.952411 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-16afb-default-external-api-0" podUID="c5395ae3-4fe1-4ada-85ee-30841c1ad513" containerName="glance-log" containerID="cri-o://cc2f30d353ab6bd817917a3b359d2e1b91746e6c161b42424dab9cef450ca08e" gracePeriod=30 Mar 12 18:48:47.955548 master-0 kubenswrapper[29097]: I0312 18:48:47.952560 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-16afb-default-external-api-0" podUID="c5395ae3-4fe1-4ada-85ee-30841c1ad513" containerName="glance-httpd" containerID="cri-o://f762a2f6987698bc02d141b61a07e1fd7ad2686796e1e73a12727367dc957574" gracePeriod=30 Mar 12 18:48:48.407273 master-0 kubenswrapper[29097]: I0312 18:48:48.407210 29097 generic.go:334] "Generic (PLEG): container finished" podID="c5395ae3-4fe1-4ada-85ee-30841c1ad513" containerID="cc2f30d353ab6bd817917a3b359d2e1b91746e6c161b42424dab9cef450ca08e" exitCode=143 Mar 12 18:48:48.407856 master-0 kubenswrapper[29097]: I0312 18:48:48.407295 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-external-api-0" 
event={"ID":"c5395ae3-4fe1-4ada-85ee-30841c1ad513","Type":"ContainerDied","Data":"cc2f30d353ab6bd817917a3b359d2e1b91746e6c161b42424dab9cef450ca08e"} Mar 12 18:48:48.410276 master-0 kubenswrapper[29097]: I0312 18:48:48.410234 29097 generic.go:334] "Generic (PLEG): container finished" podID="87a19dc7-5415-4d3d-a22e-9e2524a67e38" containerID="b8aaa63468c9fd987f471541a3a63ebc1775a5b66d6068f313f8da7cd0af22e7" exitCode=1 Mar 12 18:48:48.410398 master-0 kubenswrapper[29097]: I0312 18:48:48.410272 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" event={"ID":"87a19dc7-5415-4d3d-a22e-9e2524a67e38","Type":"ContainerDied","Data":"b8aaa63468c9fd987f471541a3a63ebc1775a5b66d6068f313f8da7cd0af22e7"} Mar 12 18:48:48.410501 master-0 kubenswrapper[29097]: I0312 18:48:48.410488 29097 scope.go:117] "RemoveContainer" containerID="90050cff7b5852880030f3c0a5a083928d00c5d3dc28a22a53ca2f32e03f9bd8" Mar 12 18:48:48.411256 master-0 kubenswrapper[29097]: I0312 18:48:48.411225 29097 scope.go:117] "RemoveContainer" containerID="b8aaa63468c9fd987f471541a3a63ebc1775a5b66d6068f313f8da7cd0af22e7" Mar 12 18:48:48.411552 master-0 kubenswrapper[29097]: E0312 18:48:48.411497 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-7cb69d965b-d79tc_openstack(87a19dc7-5415-4d3d-a22e-9e2524a67e38)\"" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" podUID="87a19dc7-5415-4d3d-a22e-9e2524a67e38" Mar 12 18:48:48.413607 master-0 kubenswrapper[29097]: I0312 18:48:48.413582 29097 generic.go:334] "Generic (PLEG): container finished" podID="15059f12-694b-4ece-9321-0d18e7c95c04" containerID="a7597aacf9feb046c46db7226db8e45e2018f368fc2366dd3e037b08974b1823" exitCode=0 Mar 12 18:48:48.413693 master-0 kubenswrapper[29097]: I0312 18:48:48.413679 29097 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-pq2f7" event={"ID":"15059f12-694b-4ece-9321-0d18e7c95c04","Type":"ContainerDied","Data":"a7597aacf9feb046c46db7226db8e45e2018f368fc2366dd3e037b08974b1823"} Mar 12 18:48:48.735385 master-0 kubenswrapper[29097]: I0312 18:48:48.735224 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a88a474-9ce7-41df-9e1b-b1d78fe4c476" path="/var/lib/kubelet/pods/2a88a474-9ce7-41df-9e1b-b1d78fe4c476/volumes" Mar 12 18:48:49.906459 master-0 kubenswrapper[29097]: I0312 18:48:49.906393 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-16afb-default-internal-api-0"] Mar 12 18:48:49.907108 master-0 kubenswrapper[29097]: I0312 18:48:49.906654 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-16afb-default-internal-api-0" podUID="a97f2967-17f2-42cc-91b6-37f26b1a6964" containerName="glance-log" containerID="cri-o://ae5de7d9bd60ff489b22e6c977fb1766db39f08cfc50c3c13d856d5ea7934c54" gracePeriod=30 Mar 12 18:48:49.907156 master-0 kubenswrapper[29097]: I0312 18:48:49.907103 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-16afb-default-internal-api-0" podUID="a97f2967-17f2-42cc-91b6-37f26b1a6964" containerName="glance-httpd" containerID="cri-o://8c43ebb930a5dde6a879f0f926d002f1286c3d2bef69ff7801797e8735bb677b" gracePeriod=30 Mar 12 18:48:52.697432 master-0 kubenswrapper[29097]: I0312 18:48:52.697310 29097 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" Mar 12 18:48:52.699035 master-0 kubenswrapper[29097]: I0312 18:48:52.698093 29097 scope.go:117] "RemoveContainer" containerID="b8aaa63468c9fd987f471541a3a63ebc1775a5b66d6068f313f8da7cd0af22e7" Mar 12 18:48:52.699035 master-0 kubenswrapper[29097]: E0312 18:48:52.698331 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-7cb69d965b-d79tc_openstack(87a19dc7-5415-4d3d-a22e-9e2524a67e38)\"" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" podUID="87a19dc7-5415-4d3d-a22e-9e2524a67e38" Mar 12 18:48:52.699035 master-0 kubenswrapper[29097]: I0312 18:48:52.698486 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" Mar 12 18:48:53.475332 master-0 kubenswrapper[29097]: I0312 18:48:53.475220 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-pq2f7" event={"ID":"15059f12-694b-4ece-9321-0d18e7c95c04","Type":"ContainerDied","Data":"a55c0b2fc5de4998bde7c4263cbe72222f8be86ebbfd70794e32c56efd858c47"} Mar 12 18:48:53.475332 master-0 kubenswrapper[29097]: I0312 18:48:53.475317 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a55c0b2fc5de4998bde7c4263cbe72222f8be86ebbfd70794e32c56efd858c47" Mar 12 18:48:53.476032 master-0 kubenswrapper[29097]: I0312 18:48:53.475991 29097 scope.go:117] "RemoveContainer" containerID="b8aaa63468c9fd987f471541a3a63ebc1775a5b66d6068f313f8da7cd0af22e7" Mar 12 18:48:53.476352 master-0 kubenswrapper[29097]: E0312 18:48:53.476316 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-7cb69d965b-d79tc_openstack(87a19dc7-5415-4d3d-a22e-9e2524a67e38)\"" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" podUID="87a19dc7-5415-4d3d-a22e-9e2524a67e38" Mar 12 18:48:53.616387 master-0 kubenswrapper[29097]: I0312 18:48:53.616321 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-pq2f7" Mar 12 18:48:53.686797 master-0 kubenswrapper[29097]: I0312 18:48:53.686648 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15059f12-694b-4ece-9321-0d18e7c95c04-combined-ca-bundle\") pod \"15059f12-694b-4ece-9321-0d18e7c95c04\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " Mar 12 18:48:53.686797 master-0 kubenswrapper[29097]: I0312 18:48:53.686763 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjrpb\" (UniqueName: \"kubernetes.io/projected/15059f12-694b-4ece-9321-0d18e7c95c04-kube-api-access-cjrpb\") pod \"15059f12-694b-4ece-9321-0d18e7c95c04\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " Mar 12 18:48:53.687043 master-0 kubenswrapper[29097]: I0312 18:48:53.686838 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/15059f12-694b-4ece-9321-0d18e7c95c04-var-lib-ironic\") pod \"15059f12-694b-4ece-9321-0d18e7c95c04\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " Mar 12 18:48:53.687043 master-0 kubenswrapper[29097]: I0312 18:48:53.686943 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/15059f12-694b-4ece-9321-0d18e7c95c04-etc-podinfo\") pod \"15059f12-694b-4ece-9321-0d18e7c95c04\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " Mar 12 18:48:53.687159 master-0 kubenswrapper[29097]: I0312 18:48:53.687127 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15059f12-694b-4ece-9321-0d18e7c95c04-scripts\") pod \"15059f12-694b-4ece-9321-0d18e7c95c04\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") " Mar 12 18:48:53.687216 master-0 kubenswrapper[29097]: I0312 18:48:53.687187 29097 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/15059f12-694b-4ece-9321-0d18e7c95c04-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"15059f12-694b-4ece-9321-0d18e7c95c04\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") "
Mar 12 18:48:53.687302 master-0 kubenswrapper[29097]: I0312 18:48:53.687231 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/15059f12-694b-4ece-9321-0d18e7c95c04-config\") pod \"15059f12-694b-4ece-9321-0d18e7c95c04\" (UID: \"15059f12-694b-4ece-9321-0d18e7c95c04\") "
Mar 12 18:48:53.691870 master-0 kubenswrapper[29097]: I0312 18:48:53.691755 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15059f12-694b-4ece-9321-0d18e7c95c04-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "15059f12-694b-4ece-9321-0d18e7c95c04" (UID: "15059f12-694b-4ece-9321-0d18e7c95c04"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 18:48:53.692047 master-0 kubenswrapper[29097]: I0312 18:48:53.691961 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15059f12-694b-4ece-9321-0d18e7c95c04-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "15059f12-694b-4ece-9321-0d18e7c95c04" (UID: "15059f12-694b-4ece-9321-0d18e7c95c04"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 18:48:53.722744 master-0 kubenswrapper[29097]: I0312 18:48:53.721966 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/15059f12-694b-4ece-9321-0d18e7c95c04-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "15059f12-694b-4ece-9321-0d18e7c95c04" (UID: "15059f12-694b-4ece-9321-0d18e7c95c04"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Mar 12 18:48:53.723314 master-0 kubenswrapper[29097]: I0312 18:48:53.722779 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15059f12-694b-4ece-9321-0d18e7c95c04-kube-api-access-cjrpb" (OuterVolumeSpecName: "kube-api-access-cjrpb") pod "15059f12-694b-4ece-9321-0d18e7c95c04" (UID: "15059f12-694b-4ece-9321-0d18e7c95c04"). InnerVolumeSpecName "kube-api-access-cjrpb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:48:53.725416 master-0 kubenswrapper[29097]: I0312 18:48:53.725314 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15059f12-694b-4ece-9321-0d18e7c95c04-scripts" (OuterVolumeSpecName: "scripts") pod "15059f12-694b-4ece-9321-0d18e7c95c04" (UID: "15059f12-694b-4ece-9321-0d18e7c95c04"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:53.766766 master-0 kubenswrapper[29097]: I0312 18:48:53.766686 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15059f12-694b-4ece-9321-0d18e7c95c04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15059f12-694b-4ece-9321-0d18e7c95c04" (UID: "15059f12-694b-4ece-9321-0d18e7c95c04"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:53.767085 master-0 kubenswrapper[29097]: I0312 18:48:53.767012 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15059f12-694b-4ece-9321-0d18e7c95c04-config" (OuterVolumeSpecName: "config") pod "15059f12-694b-4ece-9321-0d18e7c95c04" (UID: "15059f12-694b-4ece-9321-0d18e7c95c04"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:53.791851 master-0 kubenswrapper[29097]: I0312 18:48:53.791567 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15059f12-694b-4ece-9321-0d18e7c95c04-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:53.791851 master-0 kubenswrapper[29097]: I0312 18:48:53.791643 29097 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/15059f12-694b-4ece-9321-0d18e7c95c04-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:53.791851 master-0 kubenswrapper[29097]: I0312 18:48:53.791662 29097 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/15059f12-694b-4ece-9321-0d18e7c95c04-config\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:53.791851 master-0 kubenswrapper[29097]: I0312 18:48:53.791676 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15059f12-694b-4ece-9321-0d18e7c95c04-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:53.791851 master-0 kubenswrapper[29097]: I0312 18:48:53.791690 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjrpb\" (UniqueName: \"kubernetes.io/projected/15059f12-694b-4ece-9321-0d18e7c95c04-kube-api-access-cjrpb\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:53.791851 master-0 kubenswrapper[29097]: I0312 18:48:53.791701 29097 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/15059f12-694b-4ece-9321-0d18e7c95c04-var-lib-ironic\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:53.791851 master-0 kubenswrapper[29097]: I0312 18:48:53.791713 29097 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/15059f12-694b-4ece-9321-0d18e7c95c04-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:54.494749 master-0 kubenswrapper[29097]: I0312 18:48:54.494697 29097 generic.go:334] "Generic (PLEG): container finished" podID="c5395ae3-4fe1-4ada-85ee-30841c1ad513" containerID="f762a2f6987698bc02d141b61a07e1fd7ad2686796e1e73a12727367dc957574" exitCode=0
Mar 12 18:48:54.494970 master-0 kubenswrapper[29097]: I0312 18:48:54.494804 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-external-api-0" event={"ID":"c5395ae3-4fe1-4ada-85ee-30841c1ad513","Type":"ContainerDied","Data":"f762a2f6987698bc02d141b61a07e1fd7ad2686796e1e73a12727367dc957574"}
Mar 12 18:48:54.506019 master-0 kubenswrapper[29097]: I0312 18:48:54.505954 29097 generic.go:334] "Generic (PLEG): container finished" podID="a97f2967-17f2-42cc-91b6-37f26b1a6964" containerID="8c43ebb930a5dde6a879f0f926d002f1286c3d2bef69ff7801797e8735bb677b" exitCode=0
Mar 12 18:48:54.506019 master-0 kubenswrapper[29097]: I0312 18:48:54.505995 29097 generic.go:334] "Generic (PLEG): container finished" podID="a97f2967-17f2-42cc-91b6-37f26b1a6964" containerID="ae5de7d9bd60ff489b22e6c977fb1766db39f08cfc50c3c13d856d5ea7934c54" exitCode=143
Mar 12 18:48:54.506275 master-0 kubenswrapper[29097]: I0312 18:48:54.506261 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-pq2f7"
Mar 12 18:48:54.506407 master-0 kubenswrapper[29097]: I0312 18:48:54.506380 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-internal-api-0" event={"ID":"a97f2967-17f2-42cc-91b6-37f26b1a6964","Type":"ContainerDied","Data":"8c43ebb930a5dde6a879f0f926d002f1286c3d2bef69ff7801797e8735bb677b"}
Mar 12 18:48:54.506541 master-0 kubenswrapper[29097]: I0312 18:48:54.506507 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-internal-api-0" event={"ID":"a97f2967-17f2-42cc-91b6-37f26b1a6964","Type":"ContainerDied","Data":"ae5de7d9bd60ff489b22e6c977fb1766db39f08cfc50c3c13d856d5ea7934c54"}
Mar 12 18:48:54.614495 master-0 kubenswrapper[29097]: I0312 18:48:54.614387 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pz2nb"]
Mar 12 18:48:54.629894 master-0 kubenswrapper[29097]: E0312 18:48:54.629720 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40cf0854-144e-474c-a1ad-e588b7df2c68" containerName="mariadb-database-create"
Mar 12 18:48:54.629894 master-0 kubenswrapper[29097]: I0312 18:48:54.629888 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="40cf0854-144e-474c-a1ad-e588b7df2c68" containerName="mariadb-database-create"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: E0312 18:48:54.629927 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b2854ab-857a-4029-b03e-470c4452693e" containerName="init"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: I0312 18:48:54.629936 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b2854ab-857a-4029-b03e-470c4452693e" containerName="init"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: E0312 18:48:54.629944 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62db1d8c-39c2-47ea-bac4-e9a0d7febb99" containerName="mariadb-account-create-update"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: I0312 18:48:54.629952 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="62db1d8c-39c2-47ea-bac4-e9a0d7febb99" containerName="mariadb-account-create-update"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: E0312 18:48:54.629975 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15059f12-694b-4ece-9321-0d18e7c95c04" containerName="ironic-inspector-db-sync"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: I0312 18:48:54.629983 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="15059f12-694b-4ece-9321-0d18e7c95c04" containerName="ironic-inspector-db-sync"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: E0312 18:48:54.630007 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a88a474-9ce7-41df-9e1b-b1d78fe4c476" containerName="placement-log"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: I0312 18:48:54.630017 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a88a474-9ce7-41df-9e1b-b1d78fe4c476" containerName="placement-log"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: E0312 18:48:54.630038 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a88a474-9ce7-41df-9e1b-b1d78fe4c476" containerName="placement-api"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: I0312 18:48:54.630045 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a88a474-9ce7-41df-9e1b-b1d78fe4c476" containerName="placement-api"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: E0312 18:48:54.630063 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b23c624-7392-41d0-a909-67f71b5e16ce" containerName="mariadb-database-create"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: I0312 18:48:54.630070 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b23c624-7392-41d0-a909-67f71b5e16ce" containerName="mariadb-database-create"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: E0312 18:48:54.630083 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9855cba-3217-439f-8c5e-32b3064a3330" containerName="mariadb-account-create-update"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: I0312 18:48:54.630091 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9855cba-3217-439f-8c5e-32b3064a3330" containerName="mariadb-account-create-update"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: E0312 18:48:54.630109 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8136b672-acef-4cb2-8316-358d20c26489" containerName="mariadb-database-create"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: I0312 18:48:54.630116 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="8136b672-acef-4cb2-8316-358d20c26489" containerName="mariadb-database-create"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: E0312 18:48:54.630127 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b2854ab-857a-4029-b03e-470c4452693e" containerName="ironic-api-log"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: I0312 18:48:54.630135 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b2854ab-857a-4029-b03e-470c4452693e" containerName="ironic-api-log"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: E0312 18:48:54.630147 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b2854ab-857a-4029-b03e-470c4452693e" containerName="ironic-api"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: I0312 18:48:54.630154 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b2854ab-857a-4029-b03e-470c4452693e" containerName="ironic-api"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: E0312 18:48:54.630172 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b2854ab-857a-4029-b03e-470c4452693e" containerName="ironic-api"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: I0312 18:48:54.630179 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b2854ab-857a-4029-b03e-470c4452693e" containerName="ironic-api"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: E0312 18:48:54.630192 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88b711ca-1327-4ce4-83c1-c5bf5b42cc5a" containerName="mariadb-account-create-update"
Mar 12 18:48:54.630247 master-0 kubenswrapper[29097]: I0312 18:48:54.630207 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="88b711ca-1327-4ce4-83c1-c5bf5b42cc5a" containerName="mariadb-account-create-update"
Mar 12 18:48:54.633537 master-0 kubenswrapper[29097]: I0312 18:48:54.633483 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b2854ab-857a-4029-b03e-470c4452693e" containerName="ironic-api"
Mar 12 18:48:54.633612 master-0 kubenswrapper[29097]: I0312 18:48:54.633549 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="40cf0854-144e-474c-a1ad-e588b7df2c68" containerName="mariadb-database-create"
Mar 12 18:48:54.633612 master-0 kubenswrapper[29097]: I0312 18:48:54.633583 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9855cba-3217-439f-8c5e-32b3064a3330" containerName="mariadb-account-create-update"
Mar 12 18:48:54.633678 master-0 kubenswrapper[29097]: I0312 18:48:54.633612 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b2854ab-857a-4029-b03e-470c4452693e" containerName="ironic-api-log"
Mar 12 18:48:54.633678 master-0 kubenswrapper[29097]: I0312 18:48:54.633637 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b2854ab-857a-4029-b03e-470c4452693e" containerName="ironic-api"
Mar 12 18:48:54.633678 master-0 kubenswrapper[29097]: I0312 18:48:54.633652 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a88a474-9ce7-41df-9e1b-b1d78fe4c476" containerName="placement-api"
Mar 12 18:48:54.633678 master-0 kubenswrapper[29097]: I0312 18:48:54.633666 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="8136b672-acef-4cb2-8316-358d20c26489" containerName="mariadb-database-create"
Mar 12 18:48:54.633797 master-0 kubenswrapper[29097]: I0312 18:48:54.633681 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="15059f12-694b-4ece-9321-0d18e7c95c04" containerName="ironic-inspector-db-sync"
Mar 12 18:48:54.634305 master-0 kubenswrapper[29097]: I0312 18:48:54.634267 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b23c624-7392-41d0-a909-67f71b5e16ce" containerName="mariadb-database-create"
Mar 12 18:48:54.634355 master-0 kubenswrapper[29097]: I0312 18:48:54.634320 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="62db1d8c-39c2-47ea-bac4-e9a0d7febb99" containerName="mariadb-account-create-update"
Mar 12 18:48:54.634355 master-0 kubenswrapper[29097]: I0312 18:48:54.634335 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a88a474-9ce7-41df-9e1b-b1d78fe4c476" containerName="placement-log"
Mar 12 18:48:54.634355 master-0 kubenswrapper[29097]: I0312 18:48:54.634349 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="88b711ca-1327-4ce4-83c1-c5bf5b42cc5a" containerName="mariadb-account-create-update"
Mar 12 18:48:54.635347 master-0 kubenswrapper[29097]: I0312 18:48:54.635316 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pz2nb"]
Mar 12 18:48:54.635433 master-0 kubenswrapper[29097]: I0312 18:48:54.635415 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-pz2nb"
Mar 12 18:48:54.640696 master-0 kubenswrapper[29097]: I0312 18:48:54.639993 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Mar 12 18:48:54.640942 master-0 kubenswrapper[29097]: I0312 18:48:54.640296 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Mar 12 18:48:54.713094 master-0 kubenswrapper[29097]: I0312 18:48:54.713036 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-scripts\") pod \"nova-cell0-conductor-db-sync-pz2nb\" (UID: \"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48\") " pod="openstack/nova-cell0-conductor-db-sync-pz2nb"
Mar 12 18:48:54.713502 master-0 kubenswrapper[29097]: I0312 18:48:54.713467 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-pz2nb\" (UID: \"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48\") " pod="openstack/nova-cell0-conductor-db-sync-pz2nb"
Mar 12 18:48:54.713790 master-0 kubenswrapper[29097]: I0312 18:48:54.713769 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-config-data\") pod \"nova-cell0-conductor-db-sync-pz2nb\" (UID: \"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48\") " pod="openstack/nova-cell0-conductor-db-sync-pz2nb"
Mar 12 18:48:54.714784 master-0 kubenswrapper[29097]: I0312 18:48:54.714757 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxctn\" (UniqueName: \"kubernetes.io/projected/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-kube-api-access-nxctn\") pod \"nova-cell0-conductor-db-sync-pz2nb\" (UID: \"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48\") " pod="openstack/nova-cell0-conductor-db-sync-pz2nb"
Mar 12 18:48:54.822315 master-0 kubenswrapper[29097]: I0312 18:48:54.821221 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-scripts\") pod \"nova-cell0-conductor-db-sync-pz2nb\" (UID: \"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48\") " pod="openstack/nova-cell0-conductor-db-sync-pz2nb"
Mar 12 18:48:54.824870 master-0 kubenswrapper[29097]: I0312 18:48:54.823887 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-pz2nb\" (UID: \"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48\") " pod="openstack/nova-cell0-conductor-db-sync-pz2nb"
Mar 12 18:48:54.824870 master-0 kubenswrapper[29097]: I0312 18:48:54.823998 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-config-data\") pod \"nova-cell0-conductor-db-sync-pz2nb\" (UID: \"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48\") " pod="openstack/nova-cell0-conductor-db-sync-pz2nb"
Mar 12 18:48:54.824870 master-0 kubenswrapper[29097]: I0312 18:48:54.824023 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxctn\" (UniqueName: \"kubernetes.io/projected/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-kube-api-access-nxctn\") pod \"nova-cell0-conductor-db-sync-pz2nb\" (UID: \"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48\") " pod="openstack/nova-cell0-conductor-db-sync-pz2nb"
Mar 12 18:48:54.832828 master-0 kubenswrapper[29097]: I0312 18:48:54.832577 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-scripts\") pod \"nova-cell0-conductor-db-sync-pz2nb\" (UID: \"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48\") " pod="openstack/nova-cell0-conductor-db-sync-pz2nb"
Mar 12 18:48:54.832828 master-0 kubenswrapper[29097]: I0312 18:48:54.832757 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-config-data\") pod \"nova-cell0-conductor-db-sync-pz2nb\" (UID: \"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48\") " pod="openstack/nova-cell0-conductor-db-sync-pz2nb"
Mar 12 18:48:54.832931 master-0 kubenswrapper[29097]: I0312 18:48:54.832846 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-pz2nb\" (UID: \"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48\") " pod="openstack/nova-cell0-conductor-db-sync-pz2nb"
Mar 12 18:48:54.846952 master-0 kubenswrapper[29097]: I0312 18:48:54.846918 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxctn\" (UniqueName: \"kubernetes.io/projected/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-kube-api-access-nxctn\") pod \"nova-cell0-conductor-db-sync-pz2nb\" (UID: \"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48\") " pod="openstack/nova-cell0-conductor-db-sync-pz2nb"
Mar 12 18:48:54.956653 master-0 kubenswrapper[29097]: I0312 18:48:54.956600 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-pz2nb"
Mar 12 18:48:55.128896 master-0 kubenswrapper[29097]: I0312 18:48:55.128857 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:48:55.238544 master-0 kubenswrapper[29097]: I0312 18:48:55.237266 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5395ae3-4fe1-4ada-85ee-30841c1ad513-logs\") pod \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") "
Mar 12 18:48:55.238544 master-0 kubenswrapper[29097]: I0312 18:48:55.237346 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-config-data\") pod \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") "
Mar 12 18:48:55.238544 master-0 kubenswrapper[29097]: I0312 18:48:55.237535 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^310fb28b-3797-432d-a3e8-4c4039b4350a\") pod \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") "
Mar 12 18:48:55.238544 master-0 kubenswrapper[29097]: I0312 18:48:55.237561 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-public-tls-certs\") pod \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") "
Mar 12 18:48:55.238544 master-0 kubenswrapper[29097]: I0312 18:48:55.237582 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fbpx\" (UniqueName: \"kubernetes.io/projected/c5395ae3-4fe1-4ada-85ee-30841c1ad513-kube-api-access-4fbpx\") pod \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") "
Mar 12 18:48:55.238544 master-0 kubenswrapper[29097]: I0312 18:48:55.237694 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-combined-ca-bundle\") pod \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") "
Mar 12 18:48:55.238544 master-0 kubenswrapper[29097]: I0312 18:48:55.237718 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-scripts\") pod \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") "
Mar 12 18:48:55.238544 master-0 kubenswrapper[29097]: I0312 18:48:55.237762 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c5395ae3-4fe1-4ada-85ee-30841c1ad513-httpd-run\") pod \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\" (UID: \"c5395ae3-4fe1-4ada-85ee-30841c1ad513\") "
Mar 12 18:48:55.239052 master-0 kubenswrapper[29097]: I0312 18:48:55.238578 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5395ae3-4fe1-4ada-85ee-30841c1ad513-logs" (OuterVolumeSpecName: "logs") pod "c5395ae3-4fe1-4ada-85ee-30841c1ad513" (UID: "c5395ae3-4fe1-4ada-85ee-30841c1ad513"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 18:48:55.266551 master-0 kubenswrapper[29097]: I0312 18:48:55.263822 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5395ae3-4fe1-4ada-85ee-30841c1ad513-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c5395ae3-4fe1-4ada-85ee-30841c1ad513" (UID: "c5395ae3-4fe1-4ada-85ee-30841c1ad513"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 18:48:55.266551 master-0 kubenswrapper[29097]: I0312 18:48:55.263853 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-scripts" (OuterVolumeSpecName: "scripts") pod "c5395ae3-4fe1-4ada-85ee-30841c1ad513" (UID: "c5395ae3-4fe1-4ada-85ee-30841c1ad513"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:55.266551 master-0 kubenswrapper[29097]: I0312 18:48:55.264014 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5395ae3-4fe1-4ada-85ee-30841c1ad513-kube-api-access-4fbpx" (OuterVolumeSpecName: "kube-api-access-4fbpx") pod "c5395ae3-4fe1-4ada-85ee-30841c1ad513" (UID: "c5395ae3-4fe1-4ada-85ee-30841c1ad513"). InnerVolumeSpecName "kube-api-access-4fbpx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:48:55.269736 master-0 kubenswrapper[29097]: I0312 18:48:55.268172 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^310fb28b-3797-432d-a3e8-4c4039b4350a" (OuterVolumeSpecName: "glance") pod "c5395ae3-4fe1-4ada-85ee-30841c1ad513" (UID: "c5395ae3-4fe1-4ada-85ee-30841c1ad513"). InnerVolumeSpecName "pvc-ec8aab56-f74f-48cf-9ada-d06acc37cf58". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 12 18:48:55.342553 master-0 kubenswrapper[29097]: I0312 18:48:55.342470 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:55.342553 master-0 kubenswrapper[29097]: I0312 18:48:55.342536 29097 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c5395ae3-4fe1-4ada-85ee-30841c1ad513-httpd-run\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:55.342553 master-0 kubenswrapper[29097]: I0312 18:48:55.342550 29097 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5395ae3-4fe1-4ada-85ee-30841c1ad513-logs\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:55.342838 master-0 kubenswrapper[29097]: I0312 18:48:55.342581 29097 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ec8aab56-f74f-48cf-9ada-d06acc37cf58\" (UniqueName: \"kubernetes.io/csi/topolvm.io^310fb28b-3797-432d-a3e8-4c4039b4350a\") on node \"master-0\" "
Mar 12 18:48:55.342838 master-0 kubenswrapper[29097]: I0312 18:48:55.342599 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fbpx\" (UniqueName: \"kubernetes.io/projected/c5395ae3-4fe1-4ada-85ee-30841c1ad513-kube-api-access-4fbpx\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:55.354535 master-0 kubenswrapper[29097]: I0312 18:48:55.351867 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5395ae3-4fe1-4ada-85ee-30841c1ad513" (UID: "c5395ae3-4fe1-4ada-85ee-30841c1ad513"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:55.382562 master-0 kubenswrapper[29097]: I0312 18:48:55.379636 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-config-data" (OuterVolumeSpecName: "config-data") pod "c5395ae3-4fe1-4ada-85ee-30841c1ad513" (UID: "c5395ae3-4fe1-4ada-85ee-30841c1ad513"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:55.450602 master-0 kubenswrapper[29097]: I0312 18:48:55.445317 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:55.450602 master-0 kubenswrapper[29097]: I0312 18:48:55.445356 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:55.470733 master-0 kubenswrapper[29097]: I0312 18:48:55.470676 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c5395ae3-4fe1-4ada-85ee-30841c1ad513" (UID: "c5395ae3-4fe1-4ada-85ee-30841c1ad513"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:48:55.478365 master-0 kubenswrapper[29097]: I0312 18:48:55.478178 29097 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Mar 12 18:48:55.479853 master-0 kubenswrapper[29097]: I0312 18:48:55.479046 29097 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ec8aab56-f74f-48cf-9ada-d06acc37cf58" (UniqueName: "kubernetes.io/csi/topolvm.io^310fb28b-3797-432d-a3e8-4c4039b4350a") on node "master-0"
Mar 12 18:48:55.507004 master-0 kubenswrapper[29097]: I0312 18:48:55.501608 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pz2nb"]
Mar 12 18:48:55.529327 master-0 kubenswrapper[29097]: I0312 18:48:55.529260 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"7fb8fbc7-949d-4526-8456-fbf8277cee2f","Type":"ContainerStarted","Data":"e457718832c4d7068ea141ddf5b82409ac2608b68231e092a2367adc52322b3f"}
Mar 12 18:48:55.535279 master-0 kubenswrapper[29097]: I0312 18:48:55.535218 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-external-api-0" event={"ID":"c5395ae3-4fe1-4ada-85ee-30841c1ad513","Type":"ContainerDied","Data":"38f910c3a707c752a06e1f75b31c52b08092efae711d5ccb0f4fee701040ce09"}
Mar 12 18:48:55.535570 master-0 kubenswrapper[29097]: I0312 18:48:55.535285 29097 scope.go:117] "RemoveContainer" containerID="f762a2f6987698bc02d141b61a07e1fd7ad2686796e1e73a12727367dc957574"
Mar 12 18:48:55.536232 master-0 kubenswrapper[29097]: I0312 18:48:55.535618 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:48:55.540632 master-0 kubenswrapper[29097]: I0312 18:48:55.539446 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-pz2nb" event={"ID":"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48","Type":"ContainerStarted","Data":"36ba084c9555bfa5ee036f5b5e5de0d4d6e988cc0af3a30337282c10d9d8a95a"}
Mar 12 18:48:55.549553 master-0 kubenswrapper[29097]: I0312 18:48:55.547148 29097 reconciler_common.go:293] "Volume detached for volume \"pvc-ec8aab56-f74f-48cf-9ada-d06acc37cf58\" (UniqueName: \"kubernetes.io/csi/topolvm.io^310fb28b-3797-432d-a3e8-4c4039b4350a\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:55.549553 master-0 kubenswrapper[29097]: I0312 18:48:55.547189 29097 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5395ae3-4fe1-4ada-85ee-30841c1ad513-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 12 18:48:55.590209 master-0 kubenswrapper[29097]: I0312 18:48:55.590171 29097 scope.go:117] "RemoveContainer" containerID="cc2f30d353ab6bd817917a3b359d2e1b91746e6c161b42424dab9cef450ca08e"
Mar 12 18:48:55.605149 master-0 kubenswrapper[29097]: I0312 18:48:55.604582 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-16afb-default-external-api-0"]
Mar 12 18:48:55.623241 master-0 kubenswrapper[29097]: I0312 18:48:55.622738 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-16afb-default-external-api-0"]
Mar 12 18:48:55.667717 master-0 kubenswrapper[29097]: I0312 18:48:55.667648 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-16afb-default-external-api-0"]
Mar 12 18:48:55.668473 master-0 kubenswrapper[29097]: E0312 18:48:55.668442 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5395ae3-4fe1-4ada-85ee-30841c1ad513" containerName="glance-httpd"
Mar 12 18:48:55.668473 master-0 kubenswrapper[29097]: I0312 18:48:55.668464 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5395ae3-4fe1-4ada-85ee-30841c1ad513" containerName="glance-httpd"
Mar 12 18:48:55.668581 master-0 kubenswrapper[29097]: E0312 18:48:55.668523 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5395ae3-4fe1-4ada-85ee-30841c1ad513" containerName="glance-log"
Mar 12 18:48:55.668581 master-0 kubenswrapper[29097]: I0312 18:48:55.668532 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5395ae3-4fe1-4ada-85ee-30841c1ad513" containerName="glance-log"
Mar 12 18:48:55.668959 master-0 kubenswrapper[29097]: I0312 18:48:55.668929 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5395ae3-4fe1-4ada-85ee-30841c1ad513" containerName="glance-httpd"
Mar 12 18:48:55.669010 master-0 kubenswrapper[29097]: I0312 18:48:55.668992 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5395ae3-4fe1-4ada-85ee-30841c1ad513" containerName="glance-log"
Mar 12 18:48:55.675139 master-0 kubenswrapper[29097]: I0312 18:48:55.674315 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:48:55.679608 master-0 kubenswrapper[29097]: I0312 18:48:55.678452 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Mar 12 18:48:55.679608 master-0 kubenswrapper[29097]: I0312 18:48:55.678858 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-16afb-default-external-config-data"
Mar 12 18:48:55.686976 master-0 kubenswrapper[29097]: I0312 18:48:55.686926 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-16afb-default-external-api-0"]
Mar 12 18:48:55.857487 master-0 kubenswrapper[29097]: I0312 18:48:55.855180 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/884251a4-69be-405c-90a2-b75d9970b52e-httpd-run\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:48:55.857487 master-0 kubenswrapper[29097]: I0312 18:48:55.855238 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/884251a4-69be-405c-90a2-b75d9970b52e-logs\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:48:55.857487 master-0 kubenswrapper[29097]: I0312 18:48:55.855345 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/884251a4-69be-405c-90a2-b75d9970b52e-public-tls-certs\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:48:55.857487 master-0 kubenswrapper[29097]: I0312 18:48:55.855533 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/884251a4-69be-405c-90a2-b75d9970b52e-config-data\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:48:55.857487 master-0 kubenswrapper[29097]: I0312 18:48:55.856577 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtbb6\" (UniqueName: \"kubernetes.io/projected/884251a4-69be-405c-90a2-b75d9970b52e-kube-api-access-mtbb6\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:48:55.857487 master-0 kubenswrapper[29097]: I0312 18:48:55.856676 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ec8aab56-f74f-48cf-9ada-d06acc37cf58\" (UniqueName: \"kubernetes.io/csi/topolvm.io^310fb28b-3797-432d-a3e8-4c4039b4350a\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:48:55.857487 master-0 kubenswrapper[29097]: I0312 18:48:55.856704 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/884251a4-69be-405c-90a2-b75d9970b52e-scripts\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0"
Mar 12 18:48:55.857487 master-0 kubenswrapper[29097]: I0312 18:48:55.857029 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/884251a4-69be-405c-90a2-b75d9970b52e-combined-ca-bundle\") pod 
\"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:48:55.959011 master-0 kubenswrapper[29097]: I0312 18:48:55.958941 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/884251a4-69be-405c-90a2-b75d9970b52e-public-tls-certs\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:48:55.959281 master-0 kubenswrapper[29097]: I0312 18:48:55.959038 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/884251a4-69be-405c-90a2-b75d9970b52e-config-data\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:48:55.959281 master-0 kubenswrapper[29097]: I0312 18:48:55.959191 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtbb6\" (UniqueName: \"kubernetes.io/projected/884251a4-69be-405c-90a2-b75d9970b52e-kube-api-access-mtbb6\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:48:55.959281 master-0 kubenswrapper[29097]: I0312 18:48:55.959242 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ec8aab56-f74f-48cf-9ada-d06acc37cf58\" (UniqueName: \"kubernetes.io/csi/topolvm.io^310fb28b-3797-432d-a3e8-4c4039b4350a\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:48:55.959281 master-0 kubenswrapper[29097]: I0312 18:48:55.959270 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/884251a4-69be-405c-90a2-b75d9970b52e-scripts\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:48:55.959439 master-0 kubenswrapper[29097]: I0312 18:48:55.959309 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/884251a4-69be-405c-90a2-b75d9970b52e-combined-ca-bundle\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:48:55.959439 master-0 kubenswrapper[29097]: I0312 18:48:55.959357 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/884251a4-69be-405c-90a2-b75d9970b52e-httpd-run\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:48:55.959439 master-0 kubenswrapper[29097]: I0312 18:48:55.959373 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/884251a4-69be-405c-90a2-b75d9970b52e-logs\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:48:55.959882 master-0 kubenswrapper[29097]: I0312 18:48:55.959847 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/884251a4-69be-405c-90a2-b75d9970b52e-logs\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:48:55.961023 master-0 kubenswrapper[29097]: I0312 18:48:55.960982 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/884251a4-69be-405c-90a2-b75d9970b52e-httpd-run\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:48:55.963632 master-0 kubenswrapper[29097]: I0312 18:48:55.963597 29097 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 18:48:55.963632 master-0 kubenswrapper[29097]: I0312 18:48:55.963626 29097 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ec8aab56-f74f-48cf-9ada-d06acc37cf58\" (UniqueName: \"kubernetes.io/csi/topolvm.io^310fb28b-3797-432d-a3e8-4c4039b4350a\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/a675104d86a4ce743943f7962ef4d34dd002b87ad3cb26bbb0067dde16060ad0/globalmount\"" pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:48:55.964170 master-0 kubenswrapper[29097]: I0312 18:48:55.964132 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/884251a4-69be-405c-90a2-b75d9970b52e-public-tls-certs\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:48:55.964213 master-0 kubenswrapper[29097]: I0312 18:48:55.964188 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/884251a4-69be-405c-90a2-b75d9970b52e-scripts\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:48:55.970757 master-0 kubenswrapper[29097]: I0312 18:48:55.970714 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/884251a4-69be-405c-90a2-b75d9970b52e-combined-ca-bundle\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:48:55.974559 master-0 kubenswrapper[29097]: I0312 18:48:55.973645 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/884251a4-69be-405c-90a2-b75d9970b52e-config-data\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:48:55.976700 master-0 kubenswrapper[29097]: I0312 18:48:55.976122 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtbb6\" (UniqueName: \"kubernetes.io/projected/884251a4-69be-405c-90a2-b75d9970b52e-kube-api-access-mtbb6\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:48:56.068681 master-0 kubenswrapper[29097]: I0312 18:48:56.068645 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:56.266603 master-0 kubenswrapper[29097]: I0312 18:48:56.266166 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a97f2967-17f2-42cc-91b6-37f26b1a6964-httpd-run\") pod \"a97f2967-17f2-42cc-91b6-37f26b1a6964\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " Mar 12 18:48:56.266603 master-0 kubenswrapper[29097]: I0312 18:48:56.266250 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-internal-tls-certs\") pod \"a97f2967-17f2-42cc-91b6-37f26b1a6964\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " Mar 12 18:48:56.266845 master-0 kubenswrapper[29097]: I0312 18:48:56.266717 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a97f2967-17f2-42cc-91b6-37f26b1a6964-logs\") pod \"a97f2967-17f2-42cc-91b6-37f26b1a6964\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " Mar 12 18:48:56.266845 master-0 kubenswrapper[29097]: I0312 18:48:56.266745 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-scripts\") pod \"a97f2967-17f2-42cc-91b6-37f26b1a6964\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " Mar 12 18:48:56.266915 master-0 kubenswrapper[29097]: I0312 18:48:56.266877 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-combined-ca-bundle\") pod \"a97f2967-17f2-42cc-91b6-37f26b1a6964\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " Mar 12 18:48:56.269110 master-0 kubenswrapper[29097]: I0312 18:48:56.267176 29097 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^50ae7dd3-ed24-4f03-8717-dcd38b66edcc\") pod \"a97f2967-17f2-42cc-91b6-37f26b1a6964\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " Mar 12 18:48:56.269110 master-0 kubenswrapper[29097]: I0312 18:48:56.267674 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-config-data\") pod \"a97f2967-17f2-42cc-91b6-37f26b1a6964\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " Mar 12 18:48:56.269110 master-0 kubenswrapper[29097]: I0312 18:48:56.267708 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prwlg\" (UniqueName: \"kubernetes.io/projected/a97f2967-17f2-42cc-91b6-37f26b1a6964-kube-api-access-prwlg\") pod \"a97f2967-17f2-42cc-91b6-37f26b1a6964\" (UID: \"a97f2967-17f2-42cc-91b6-37f26b1a6964\") " Mar 12 18:48:56.272082 master-0 kubenswrapper[29097]: I0312 18:48:56.272038 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a97f2967-17f2-42cc-91b6-37f26b1a6964-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a97f2967-17f2-42cc-91b6-37f26b1a6964" (UID: "a97f2967-17f2-42cc-91b6-37f26b1a6964"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:48:56.273983 master-0 kubenswrapper[29097]: I0312 18:48:56.272752 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a97f2967-17f2-42cc-91b6-37f26b1a6964-kube-api-access-prwlg" (OuterVolumeSpecName: "kube-api-access-prwlg") pod "a97f2967-17f2-42cc-91b6-37f26b1a6964" (UID: "a97f2967-17f2-42cc-91b6-37f26b1a6964"). InnerVolumeSpecName "kube-api-access-prwlg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:48:56.273983 master-0 kubenswrapper[29097]: I0312 18:48:56.273457 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a97f2967-17f2-42cc-91b6-37f26b1a6964-logs" (OuterVolumeSpecName: "logs") pod "a97f2967-17f2-42cc-91b6-37f26b1a6964" (UID: "a97f2967-17f2-42cc-91b6-37f26b1a6964"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:48:56.273983 master-0 kubenswrapper[29097]: I0312 18:48:56.273701 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-scripts" (OuterVolumeSpecName: "scripts") pod "a97f2967-17f2-42cc-91b6-37f26b1a6964" (UID: "a97f2967-17f2-42cc-91b6-37f26b1a6964"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:48:56.311536 master-0 kubenswrapper[29097]: I0312 18:48:56.310214 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a97f2967-17f2-42cc-91b6-37f26b1a6964" (UID: "a97f2967-17f2-42cc-91b6-37f26b1a6964"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:48:56.325535 master-0 kubenswrapper[29097]: I0312 18:48:56.323499 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a97f2967-17f2-42cc-91b6-37f26b1a6964" (UID: "a97f2967-17f2-42cc-91b6-37f26b1a6964"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:48:56.351538 master-0 kubenswrapper[29097]: I0312 18:48:56.350866 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-config-data" (OuterVolumeSpecName: "config-data") pod "a97f2967-17f2-42cc-91b6-37f26b1a6964" (UID: "a97f2967-17f2-42cc-91b6-37f26b1a6964"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:48:56.377534 master-0 kubenswrapper[29097]: I0312 18:48:56.374079 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:56.377534 master-0 kubenswrapper[29097]: I0312 18:48:56.374116 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:56.377534 master-0 kubenswrapper[29097]: I0312 18:48:56.374128 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:56.377534 master-0 kubenswrapper[29097]: I0312 18:48:56.374138 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prwlg\" (UniqueName: \"kubernetes.io/projected/a97f2967-17f2-42cc-91b6-37f26b1a6964-kube-api-access-prwlg\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:56.377534 master-0 kubenswrapper[29097]: I0312 18:48:56.374146 29097 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a97f2967-17f2-42cc-91b6-37f26b1a6964-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:56.377534 master-0 kubenswrapper[29097]: I0312 18:48:56.374153 29097 
reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a97f2967-17f2-42cc-91b6-37f26b1a6964-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:56.377534 master-0 kubenswrapper[29097]: I0312 18:48:56.374161 29097 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a97f2967-17f2-42cc-91b6-37f26b1a6964-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:56.556630 master-0 kubenswrapper[29097]: I0312 18:48:56.555994 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-internal-api-0" event={"ID":"a97f2967-17f2-42cc-91b6-37f26b1a6964","Type":"ContainerDied","Data":"123bb1dcae921cb17eb59b0cf5a71f66535bfa8753b6bb5c6a22eef7dc466288"} Mar 12 18:48:56.556630 master-0 kubenswrapper[29097]: I0312 18:48:56.556066 29097 scope.go:117] "RemoveContainer" containerID="8c43ebb930a5dde6a879f0f926d002f1286c3d2bef69ff7801797e8735bb677b" Mar 12 18:48:56.556630 master-0 kubenswrapper[29097]: I0312 18:48:56.556080 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:56.590450 master-0 kubenswrapper[29097]: I0312 18:48:56.590404 29097 scope.go:117] "RemoveContainer" containerID="ae5de7d9bd60ff489b22e6c977fb1766db39f08cfc50c3c13d856d5ea7934c54" Mar 12 18:48:56.747433 master-0 kubenswrapper[29097]: I0312 18:48:56.747373 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5395ae3-4fe1-4ada-85ee-30841c1ad513" path="/var/lib/kubelet/pods/c5395ae3-4fe1-4ada-85ee-30841c1ad513/volumes" Mar 12 18:48:56.843241 master-0 kubenswrapper[29097]: I0312 18:48:56.843175 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^50ae7dd3-ed24-4f03-8717-dcd38b66edcc" (OuterVolumeSpecName: "glance") pod "a97f2967-17f2-42cc-91b6-37f26b1a6964" (UID: "a97f2967-17f2-42cc-91b6-37f26b1a6964"). InnerVolumeSpecName "pvc-15bfb998-3b5a-4ce6-84e0-d75d1f841c87". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 18:48:56.859181 master-0 kubenswrapper[29097]: I0312 18:48:56.859022 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ec8aab56-f74f-48cf-9ada-d06acc37cf58\" (UniqueName: \"kubernetes.io/csi/topolvm.io^310fb28b-3797-432d-a3e8-4c4039b4350a\") pod \"glance-16afb-default-external-api-0\" (UID: \"884251a4-69be-405c-90a2-b75d9970b52e\") " pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:48:56.891137 master-0 kubenswrapper[29097]: I0312 18:48:56.888313 29097 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-15bfb998-3b5a-4ce6-84e0-d75d1f841c87\" (UniqueName: \"kubernetes.io/csi/topolvm.io^50ae7dd3-ed24-4f03-8717-dcd38b66edcc\") on node \"master-0\" " Mar 12 18:48:56.929250 master-0 kubenswrapper[29097]: I0312 18:48:56.929198 29097 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 12 18:48:56.929442 master-0 kubenswrapper[29097]: I0312 18:48:56.929403 29097 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-15bfb998-3b5a-4ce6-84e0-d75d1f841c87" (UniqueName: "kubernetes.io/csi/topolvm.io^50ae7dd3-ed24-4f03-8717-dcd38b66edcc") on node "master-0" Mar 12 18:48:56.949759 master-0 kubenswrapper[29097]: I0312 18:48:56.947045 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:48:56.968640 master-0 kubenswrapper[29097]: I0312 18:48:56.968590 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-16afb-default-internal-api-0"] Mar 12 18:48:56.992600 master-0 kubenswrapper[29097]: I0312 18:48:56.992559 29097 reconciler_common.go:293] "Volume detached for volume \"pvc-15bfb998-3b5a-4ce6-84e0-d75d1f841c87\" (UniqueName: \"kubernetes.io/csi/topolvm.io^50ae7dd3-ed24-4f03-8717-dcd38b66edcc\") on node \"master-0\" DevicePath \"\"" Mar 12 18:48:56.994482 master-0 kubenswrapper[29097]: I0312 18:48:56.992980 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-16afb-default-internal-api-0"] Mar 12 18:48:57.040589 master-0 kubenswrapper[29097]: I0312 18:48:57.040545 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-16afb-default-internal-api-0"] Mar 12 18:48:57.041309 master-0 kubenswrapper[29097]: E0312 18:48:57.041291 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97f2967-17f2-42cc-91b6-37f26b1a6964" containerName="glance-httpd" Mar 12 18:48:57.041384 master-0 kubenswrapper[29097]: I0312 18:48:57.041374 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97f2967-17f2-42cc-91b6-37f26b1a6964" containerName="glance-httpd" Mar 12 18:48:57.041509 master-0 kubenswrapper[29097]: E0312 18:48:57.041497 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97f2967-17f2-42cc-91b6-37f26b1a6964" containerName="glance-log" Mar 12 
18:48:57.041589 master-0 kubenswrapper[29097]: I0312 18:48:57.041579 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97f2967-17f2-42cc-91b6-37f26b1a6964" containerName="glance-log" Mar 12 18:48:57.041884 master-0 kubenswrapper[29097]: I0312 18:48:57.041871 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="a97f2967-17f2-42cc-91b6-37f26b1a6964" containerName="glance-log" Mar 12 18:48:57.041962 master-0 kubenswrapper[29097]: I0312 18:48:57.041950 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="a97f2967-17f2-42cc-91b6-37f26b1a6964" containerName="glance-httpd" Mar 12 18:48:57.043444 master-0 kubenswrapper[29097]: I0312 18:48:57.043424 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.053808 master-0 kubenswrapper[29097]: I0312 18:48:57.047014 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-16afb-default-internal-config-data" Mar 12 18:48:57.053808 master-0 kubenswrapper[29097]: I0312 18:48:57.047462 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 12 18:48:57.063087 master-0 kubenswrapper[29097]: I0312 18:48:57.061434 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-16afb-default-internal-api-0"] Mar 12 18:48:57.203706 master-0 kubenswrapper[29097]: I0312 18:48:57.203652 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cb659fff7-7fq4q"] Mar 12 18:48:57.212673 master-0 kubenswrapper[29097]: I0312 18:48:57.205177 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0c9885f-3ca4-4031-b894-899b09eb5b91-httpd-run\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " 
pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.212673 master-0 kubenswrapper[29097]: I0312 18:48:57.205241 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c9885f-3ca4-4031-b894-899b09eb5b91-combined-ca-bundle\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.212673 master-0 kubenswrapper[29097]: I0312 18:48:57.205272 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c9885f-3ca4-4031-b894-899b09eb5b91-config-data\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.212673 master-0 kubenswrapper[29097]: I0312 18:48:57.205294 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0c9885f-3ca4-4031-b894-899b09eb5b91-internal-tls-certs\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.212673 master-0 kubenswrapper[29097]: I0312 18:48:57.205325 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s7h6\" (UniqueName: \"kubernetes.io/projected/b0c9885f-3ca4-4031-b894-899b09eb5b91-kube-api-access-2s7h6\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.212673 master-0 kubenswrapper[29097]: I0312 18:48:57.205357 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b0c9885f-3ca4-4031-b894-899b09eb5b91-scripts\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.212673 master-0 kubenswrapper[29097]: I0312 18:48:57.205399 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0c9885f-3ca4-4031-b894-899b09eb5b91-logs\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.212673 master-0 kubenswrapper[29097]: I0312 18:48:57.205467 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-15bfb998-3b5a-4ce6-84e0-d75d1f841c87\" (UniqueName: \"kubernetes.io/csi/topolvm.io^50ae7dd3-ed24-4f03-8717-dcd38b66edcc\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.231577 master-0 kubenswrapper[29097]: I0312 18:48:57.219417 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:48:57.260813 master-0 kubenswrapper[29097]: I0312 18:48:57.260759 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cb659fff7-7fq4q"] Mar 12 18:48:57.279175 master-0 kubenswrapper[29097]: I0312 18:48:57.279135 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Mar 12 18:48:57.288565 master-0 kubenswrapper[29097]: I0312 18:48:57.288508 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Mar 12 18:48:57.295794 master-0 kubenswrapper[29097]: I0312 18:48:57.295752 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Mar 12 18:48:57.295997 master-0 kubenswrapper[29097]: I0312 18:48:57.295859 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Mar 12 18:48:57.296226 master-0 kubenswrapper[29097]: I0312 18:48:57.296181 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Mar 12 18:48:57.304576 master-0 kubenswrapper[29097]: I0312 18:48:57.304537 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Mar 12 18:48:57.318801 master-0 kubenswrapper[29097]: I0312 18:48:57.316938 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-15bfb998-3b5a-4ce6-84e0-d75d1f841c87\" (UniqueName: \"kubernetes.io/csi/topolvm.io^50ae7dd3-ed24-4f03-8717-dcd38b66edcc\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.318801 master-0 kubenswrapper[29097]: I0312 18:48:57.316989 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-ovsdbserver-sb\") pod \"dnsmasq-dns-5cb659fff7-7fq4q\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") " pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:48:57.318801 master-0 kubenswrapper[29097]: I0312 18:48:57.317058 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0c9885f-3ca4-4031-b894-899b09eb5b91-httpd-run\") pod \"glance-16afb-default-internal-api-0\" (UID: 
\"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.318801 master-0 kubenswrapper[29097]: I0312 18:48:57.317085 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c9885f-3ca4-4031-b894-899b09eb5b91-combined-ca-bundle\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.318801 master-0 kubenswrapper[29097]: I0312 18:48:57.317107 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c9885f-3ca4-4031-b894-899b09eb5b91-config-data\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.318801 master-0 kubenswrapper[29097]: I0312 18:48:57.317127 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0c9885f-3ca4-4031-b894-899b09eb5b91-internal-tls-certs\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.318801 master-0 kubenswrapper[29097]: I0312 18:48:57.317154 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s7h6\" (UniqueName: \"kubernetes.io/projected/b0c9885f-3ca4-4031-b894-899b09eb5b91-kube-api-access-2s7h6\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.318801 master-0 kubenswrapper[29097]: I0312 18:48:57.317205 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b0c9885f-3ca4-4031-b894-899b09eb5b91-scripts\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.318801 master-0 kubenswrapper[29097]: I0312 18:48:57.317226 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-ovsdbserver-nb\") pod \"dnsmasq-dns-5cb659fff7-7fq4q\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") " pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:48:57.318801 master-0 kubenswrapper[29097]: I0312 18:48:57.317264 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-config\") pod \"dnsmasq-dns-5cb659fff7-7fq4q\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") " pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:48:57.318801 master-0 kubenswrapper[29097]: I0312 18:48:57.317280 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-dns-svc\") pod \"dnsmasq-dns-5cb659fff7-7fq4q\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") " pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:48:57.318801 master-0 kubenswrapper[29097]: I0312 18:48:57.317299 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0c9885f-3ca4-4031-b894-899b09eb5b91-logs\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.318801 master-0 kubenswrapper[29097]: I0312 18:48:57.317317 29097 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-dns-swift-storage-0\") pod \"dnsmasq-dns-5cb659fff7-7fq4q\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") " pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:48:57.318801 master-0 kubenswrapper[29097]: I0312 18:48:57.317337 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhnkg\" (UniqueName: \"kubernetes.io/projected/165a6ae9-40c5-4e7c-891c-699ba500e189-kube-api-access-xhnkg\") pod \"dnsmasq-dns-5cb659fff7-7fq4q\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") " pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:48:57.320027 master-0 kubenswrapper[29097]: I0312 18:48:57.320002 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0c9885f-3ca4-4031-b894-899b09eb5b91-httpd-run\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.331571 master-0 kubenswrapper[29097]: I0312 18:48:57.327740 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0c9885f-3ca4-4031-b894-899b09eb5b91-logs\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.346718 master-0 kubenswrapper[29097]: I0312 18:48:57.343013 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0c9885f-3ca4-4031-b894-899b09eb5b91-scripts\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.364028 master-0 kubenswrapper[29097]: I0312 
18:48:57.359312 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s7h6\" (UniqueName: \"kubernetes.io/projected/b0c9885f-3ca4-4031-b894-899b09eb5b91-kube-api-access-2s7h6\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.364028 master-0 kubenswrapper[29097]: I0312 18:48:57.360616 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c9885f-3ca4-4031-b894-899b09eb5b91-config-data\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.388001 master-0 kubenswrapper[29097]: I0312 18:48:57.384384 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0c9885f-3ca4-4031-b894-899b09eb5b91-internal-tls-certs\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.389080 master-0 kubenswrapper[29097]: I0312 18:48:57.388584 29097 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 18:48:57.389080 master-0 kubenswrapper[29097]: I0312 18:48:57.388630 29097 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-15bfb998-3b5a-4ce6-84e0-d75d1f841c87\" (UniqueName: \"kubernetes.io/csi/topolvm.io^50ae7dd3-ed24-4f03-8717-dcd38b66edcc\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/bdf06cc41b16558e5d4e2346226e79fd70cee97d9259625a849a3aa2d0277459/globalmount\"" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.389879 master-0 kubenswrapper[29097]: I0312 18:48:57.389861 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c9885f-3ca4-4031-b894-899b09eb5b91-combined-ca-bundle\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:57.432891 master-0 kubenswrapper[29097]: I0312 18:48:57.426984 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " pod="openstack/ironic-inspector-0" Mar 12 18:48:57.432891 master-0 kubenswrapper[29097]: I0312 18:48:57.427030 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " pod="openstack/ironic-inspector-0" Mar 12 18:48:57.432891 master-0 kubenswrapper[29097]: I0312 18:48:57.427064 29097 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " pod="openstack/ironic-inspector-0" Mar 12 18:48:57.432891 master-0 kubenswrapper[29097]: I0312 18:48:57.427090 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-ovsdbserver-sb\") pod \"dnsmasq-dns-5cb659fff7-7fq4q\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") " pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:48:57.432891 master-0 kubenswrapper[29097]: I0312 18:48:57.427173 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-scripts\") pod \"ironic-inspector-0\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " pod="openstack/ironic-inspector-0" Mar 12 18:48:57.432891 master-0 kubenswrapper[29097]: I0312 18:48:57.427215 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " pod="openstack/ironic-inspector-0" Mar 12 18:48:57.432891 master-0 kubenswrapper[29097]: I0312 18:48:57.427247 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-config\") pod \"ironic-inspector-0\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " pod="openstack/ironic-inspector-0" Mar 12 18:48:57.432891 master-0 kubenswrapper[29097]: I0312 18:48:57.427271 29097 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-ovsdbserver-nb\") pod \"dnsmasq-dns-5cb659fff7-7fq4q\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") " pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:48:57.432891 master-0 kubenswrapper[29097]: I0312 18:48:57.427309 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfnpw\" (UniqueName: \"kubernetes.io/projected/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-kube-api-access-gfnpw\") pod \"ironic-inspector-0\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " pod="openstack/ironic-inspector-0" Mar 12 18:48:57.432891 master-0 kubenswrapper[29097]: I0312 18:48:57.427339 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-dns-svc\") pod \"dnsmasq-dns-5cb659fff7-7fq4q\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") " pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:48:57.432891 master-0 kubenswrapper[29097]: I0312 18:48:57.427356 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-config\") pod \"dnsmasq-dns-5cb659fff7-7fq4q\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") " pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:48:57.432891 master-0 kubenswrapper[29097]: I0312 18:48:57.427392 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-dns-swift-storage-0\") pod \"dnsmasq-dns-5cb659fff7-7fq4q\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") " pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:48:57.432891 master-0 kubenswrapper[29097]: I0312 18:48:57.427414 29097 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhnkg\" (UniqueName: \"kubernetes.io/projected/165a6ae9-40c5-4e7c-891c-699ba500e189-kube-api-access-xhnkg\") pod \"dnsmasq-dns-5cb659fff7-7fq4q\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") " pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:48:57.432891 master-0 kubenswrapper[29097]: I0312 18:48:57.428325 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-ovsdbserver-sb\") pod \"dnsmasq-dns-5cb659fff7-7fq4q\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") " pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:48:57.450553 master-0 kubenswrapper[29097]: I0312 18:48:57.435802 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-ovsdbserver-nb\") pod \"dnsmasq-dns-5cb659fff7-7fq4q\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") " pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:48:57.463549 master-0 kubenswrapper[29097]: I0312 18:48:57.462318 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-dns-swift-storage-0\") pod \"dnsmasq-dns-5cb659fff7-7fq4q\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") " pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:48:57.464616 master-0 kubenswrapper[29097]: I0312 18:48:57.464561 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhnkg\" (UniqueName: \"kubernetes.io/projected/165a6ae9-40c5-4e7c-891c-699ba500e189-kube-api-access-xhnkg\") pod \"dnsmasq-dns-5cb659fff7-7fq4q\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") " pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:48:57.469932 master-0 kubenswrapper[29097]: I0312 
18:48:57.465750 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-config\") pod \"dnsmasq-dns-5cb659fff7-7fq4q\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") " pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:48:57.469932 master-0 kubenswrapper[29097]: I0312 18:48:57.468170 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-dns-svc\") pod \"dnsmasq-dns-5cb659fff7-7fq4q\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") " pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:48:57.534951 master-0 kubenswrapper[29097]: I0312 18:48:57.534896 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " pod="openstack/ironic-inspector-0" Mar 12 18:48:57.535151 master-0 kubenswrapper[29097]: I0312 18:48:57.535062 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-scripts\") pod \"ironic-inspector-0\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " pod="openstack/ironic-inspector-0" Mar 12 18:48:57.535151 master-0 kubenswrapper[29097]: I0312 18:48:57.535133 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " pod="openstack/ironic-inspector-0" Mar 12 18:48:57.535219 master-0 kubenswrapper[29097]: I0312 18:48:57.535183 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-config\") pod \"ironic-inspector-0\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " pod="openstack/ironic-inspector-0" Mar 12 18:48:57.535251 master-0 kubenswrapper[29097]: I0312 18:48:57.535221 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfnpw\" (UniqueName: \"kubernetes.io/projected/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-kube-api-access-gfnpw\") pod \"ironic-inspector-0\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " pod="openstack/ironic-inspector-0" Mar 12 18:48:57.537664 master-0 kubenswrapper[29097]: I0312 18:48:57.536095 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " pod="openstack/ironic-inspector-0" Mar 12 18:48:57.537723 master-0 kubenswrapper[29097]: I0312 18:48:57.537692 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " pod="openstack/ironic-inspector-0" Mar 12 18:48:57.541754 master-0 kubenswrapper[29097]: I0312 18:48:57.541623 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " pod="openstack/ironic-inspector-0" Mar 12 18:48:57.542232 master-0 kubenswrapper[29097]: I0312 18:48:57.542174 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " pod="openstack/ironic-inspector-0" Mar 12 18:48:57.542232 master-0 kubenswrapper[29097]: I0312 18:48:57.542213 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-scripts\") pod \"ironic-inspector-0\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " pod="openstack/ironic-inspector-0" Mar 12 18:48:57.542495 master-0 kubenswrapper[29097]: I0312 18:48:57.542455 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " pod="openstack/ironic-inspector-0" Mar 12 18:48:57.544309 master-0 kubenswrapper[29097]: I0312 18:48:57.544273 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-config\") pod \"ironic-inspector-0\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " pod="openstack/ironic-inspector-0" Mar 12 18:48:57.545554 master-0 kubenswrapper[29097]: I0312 18:48:57.545011 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " pod="openstack/ironic-inspector-0" Mar 12 18:48:57.556771 master-0 kubenswrapper[29097]: I0312 18:48:57.556551 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfnpw\" (UniqueName: \"kubernetes.io/projected/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-kube-api-access-gfnpw\") pod 
\"ironic-inspector-0\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " pod="openstack/ironic-inspector-0" Mar 12 18:48:57.578535 master-0 kubenswrapper[29097]: I0312 18:48:57.575934 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:48:57.746092 master-0 kubenswrapper[29097]: I0312 18:48:57.739060 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Mar 12 18:48:57.878332 master-0 kubenswrapper[29097]: I0312 18:48:57.876439 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-16afb-default-external-api-0"] Mar 12 18:48:58.106567 master-0 kubenswrapper[29097]: W0312 18:48:58.104759 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod165a6ae9_40c5_4e7c_891c_699ba500e189.slice/crio-66286a08b8f17c57649d1476ac336dfda966ef041fe9d4d74d00536deba6a74b WatchSource:0}: Error finding container 66286a08b8f17c57649d1476ac336dfda966ef041fe9d4d74d00536deba6a74b: Status 404 returned error can't find the container with id 66286a08b8f17c57649d1476ac336dfda966ef041fe9d4d74d00536deba6a74b Mar 12 18:48:58.142406 master-0 kubenswrapper[29097]: I0312 18:48:58.142251 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cb659fff7-7fq4q"] Mar 12 18:48:58.340771 master-0 kubenswrapper[29097]: I0312 18:48:58.340718 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-15bfb998-3b5a-4ce6-84e0-d75d1f841c87\" (UniqueName: \"kubernetes.io/csi/topolvm.io^50ae7dd3-ed24-4f03-8717-dcd38b66edcc\") pod \"glance-16afb-default-internal-api-0\" (UID: \"b0c9885f-3ca4-4031-b894-899b09eb5b91\") " pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:58.366530 master-0 kubenswrapper[29097]: I0312 18:48:58.366441 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:48:58.562340 master-0 kubenswrapper[29097]: I0312 18:48:58.562227 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Mar 12 18:48:58.613552 master-0 kubenswrapper[29097]: I0312 18:48:58.613500 29097 generic.go:334] "Generic (PLEG): container finished" podID="165a6ae9-40c5-4e7c-891c-699ba500e189" containerID="2907ff8598bc263162a6b38451d63c3d5fb481dfd569bd90213336af6de27e5c" exitCode=0 Mar 12 18:48:58.614456 master-0 kubenswrapper[29097]: I0312 18:48:58.613793 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" event={"ID":"165a6ae9-40c5-4e7c-891c-699ba500e189","Type":"ContainerDied","Data":"2907ff8598bc263162a6b38451d63c3d5fb481dfd569bd90213336af6de27e5c"} Mar 12 18:48:58.614456 master-0 kubenswrapper[29097]: I0312 18:48:58.613823 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" event={"ID":"165a6ae9-40c5-4e7c-891c-699ba500e189","Type":"ContainerStarted","Data":"66286a08b8f17c57649d1476ac336dfda966ef041fe9d4d74d00536deba6a74b"} Mar 12 18:48:58.617851 master-0 kubenswrapper[29097]: I0312 18:48:58.617832 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9","Type":"ContainerStarted","Data":"5b1b9055ac9810c8c05965fc6a2d493ed37815321300cabae7b7d0e5db89fca1"} Mar 12 18:48:58.621479 master-0 kubenswrapper[29097]: I0312 18:48:58.621407 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-external-api-0" event={"ID":"884251a4-69be-405c-90a2-b75d9970b52e","Type":"ContainerStarted","Data":"b63c53b111e5db3de501ee60e5b6cb0e7cb47ea3239c24bd1027d2d0952e09bb"} Mar 12 18:48:58.768966 master-0 kubenswrapper[29097]: I0312 18:48:58.768855 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a97f2967-17f2-42cc-91b6-37f26b1a6964" path="/var/lib/kubelet/pods/a97f2967-17f2-42cc-91b6-37f26b1a6964/volumes" Mar 12 18:48:59.007791 master-0 kubenswrapper[29097]: I0312 18:48:59.007742 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-16afb-default-internal-api-0"] Mar 12 18:48:59.640902 master-0 kubenswrapper[29097]: I0312 18:48:59.640238 29097 generic.go:334] "Generic (PLEG): container finished" podID="cc73eb83-6e67-4ebb-ad05-95baa34f0ab9" containerID="0910a67133ca55cd87a9afdee9090d01aa6b9e620025dd024a6c541b516e36e7" exitCode=0 Mar 12 18:48:59.640902 master-0 kubenswrapper[29097]: I0312 18:48:59.640318 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9","Type":"ContainerDied","Data":"0910a67133ca55cd87a9afdee9090d01aa6b9e620025dd024a6c541b516e36e7"} Mar 12 18:48:59.647944 master-0 kubenswrapper[29097]: I0312 18:48:59.647752 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-internal-api-0" event={"ID":"b0c9885f-3ca4-4031-b894-899b09eb5b91","Type":"ContainerStarted","Data":"59b632ab4f2d2a8213922ae4e778047d7afe55fa005e44e53b760455ea3af03e"} Mar 12 18:48:59.649939 master-0 kubenswrapper[29097]: I0312 18:48:59.649887 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" event={"ID":"165a6ae9-40c5-4e7c-891c-699ba500e189","Type":"ContainerStarted","Data":"c2d56974df3ea9b38100073bd35aa998fa097a98623a69cdc83688fcee12cf05"} Mar 12 18:48:59.650833 master-0 kubenswrapper[29097]: I0312 18:48:59.650793 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:48:59.711505 master-0 kubenswrapper[29097]: I0312 18:48:59.711406 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" podStartSLOduration=2.711380912 
podStartE2EDuration="2.711380912s" podCreationTimestamp="2026-03-12 18:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:48:59.694562416 +0000 UTC m=+1179.248542513" watchObservedRunningTime="2026-03-12 18:48:59.711380912 +0000 UTC m=+1179.265361009" Mar 12 18:49:00.128903 master-0 kubenswrapper[29097]: I0312 18:49:00.128841 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Mar 12 18:49:00.681749 master-0 kubenswrapper[29097]: I0312 18:49:00.681691 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-internal-api-0" event={"ID":"b0c9885f-3ca4-4031-b894-899b09eb5b91","Type":"ContainerStarted","Data":"a7e15443d08a58942a1b9aba623fe3c4f8611118f569c7f823432cc5714afb8d"} Mar 12 18:49:00.691764 master-0 kubenswrapper[29097]: I0312 18:49:00.689631 29097 generic.go:334] "Generic (PLEG): container finished" podID="7fb8fbc7-949d-4526-8456-fbf8277cee2f" containerID="e457718832c4d7068ea141ddf5b82409ac2608b68231e092a2367adc52322b3f" exitCode=0 Mar 12 18:49:00.691764 master-0 kubenswrapper[29097]: I0312 18:49:00.689683 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"7fb8fbc7-949d-4526-8456-fbf8277cee2f","Type":"ContainerDied","Data":"e457718832c4d7068ea141ddf5b82409ac2608b68231e092a2367adc52322b3f"} Mar 12 18:49:00.706158 master-0 kubenswrapper[29097]: I0312 18:49:00.706119 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-external-api-0" event={"ID":"884251a4-69be-405c-90a2-b75d9970b52e","Type":"ContainerStarted","Data":"81cabc11c4f966be2874329352be3f60fb6727bc3afe2ddcdae47be5cc55b00a"} Mar 12 18:49:01.717803 master-0 kubenswrapper[29097]: I0312 18:49:01.717741 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-external-api-0" 
event={"ID":"884251a4-69be-405c-90a2-b75d9970b52e","Type":"ContainerStarted","Data":"9f26559103cbc86b83d17a6fe673190251cfc62cb3bfe9cafadfef9a0559a614"} Mar 12 18:49:01.723461 master-0 kubenswrapper[29097]: I0312 18:49:01.723403 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-16afb-default-internal-api-0" event={"ID":"b0c9885f-3ca4-4031-b894-899b09eb5b91","Type":"ContainerStarted","Data":"bd23e78bfdd9bc0f756d92cc7b3e0dafab18c46680a0a2a9566944a213d4bb2c"} Mar 12 18:49:01.774938 master-0 kubenswrapper[29097]: I0312 18:49:01.774853 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-16afb-default-external-api-0" podStartSLOduration=6.77483437 podStartE2EDuration="6.77483437s" podCreationTimestamp="2026-03-12 18:48:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:49:01.764869523 +0000 UTC m=+1181.318849620" watchObservedRunningTime="2026-03-12 18:49:01.77483437 +0000 UTC m=+1181.328814467" Mar 12 18:49:01.799002 master-0 kubenswrapper[29097]: I0312 18:49:01.794148 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-16afb-default-internal-api-0" podStartSLOduration=5.794130117 podStartE2EDuration="5.794130117s" podCreationTimestamp="2026-03-12 18:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:49:01.793631545 +0000 UTC m=+1181.347611662" watchObservedRunningTime="2026-03-12 18:49:01.794130117 +0000 UTC m=+1181.348110224" Mar 12 18:49:06.948218 master-0 kubenswrapper[29097]: I0312 18:49:06.947747 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:49:06.948218 master-0 kubenswrapper[29097]: I0312 18:49:06.947813 29097 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:49:06.988876 master-0 kubenswrapper[29097]: I0312 18:49:06.988300 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:49:07.012081 master-0 kubenswrapper[29097]: I0312 18:49:07.012024 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:49:07.578241 master-0 kubenswrapper[29097]: I0312 18:49:07.577741 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:49:07.720745 master-0 kubenswrapper[29097]: I0312 18:49:07.720683 29097 scope.go:117] "RemoveContainer" containerID="b8aaa63468c9fd987f471541a3a63ebc1775a5b66d6068f313f8da7cd0af22e7" Mar 12 18:49:07.721044 master-0 kubenswrapper[29097]: E0312 18:49:07.721016 29097 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-7cb69d965b-d79tc_openstack(87a19dc7-5415-4d3d-a22e-9e2524a67e38)\"" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" podUID="87a19dc7-5415-4d3d-a22e-9e2524a67e38" Mar 12 18:49:07.868802 master-0 kubenswrapper[29097]: I0312 18:49:07.868662 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:49:07.868802 master-0 kubenswrapper[29097]: I0312 18:49:07.868785 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:49:07.902813 master-0 kubenswrapper[29097]: I0312 18:49:07.902725 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9c57cd77c-r4fqr"] Mar 12 18:49:07.903034 master-0 kubenswrapper[29097]: I0312 
18:49:07.903003 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" podUID="1c556c93-e908-407c-9f50-ad291a0bb179" containerName="dnsmasq-dns" containerID="cri-o://84420a33b4aea2f3c66a83786c8826a0bdc5ecffdcccccef53e00875780f9062" gracePeriod=10 Mar 12 18:49:08.368013 master-0 kubenswrapper[29097]: I0312 18:49:08.367955 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:49:08.368013 master-0 kubenswrapper[29097]: I0312 18:49:08.368007 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:49:08.402691 master-0 kubenswrapper[29097]: I0312 18:49:08.399599 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:49:08.408603 master-0 kubenswrapper[29097]: I0312 18:49:08.408563 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:49:08.873913 master-0 kubenswrapper[29097]: I0312 18:49:08.873861 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:49:08.873913 master-0 kubenswrapper[29097]: I0312 18:49:08.873916 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:49:09.759577 master-0 kubenswrapper[29097]: I0312 18:49:09.759477 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" Mar 12 18:49:09.857332 master-0 kubenswrapper[29097]: I0312 18:49:09.857268 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-dns-swift-storage-0\") pod \"1c556c93-e908-407c-9f50-ad291a0bb179\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " Mar 12 18:49:09.857539 master-0 kubenswrapper[29097]: I0312 18:49:09.857362 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-ovsdbserver-sb\") pod \"1c556c93-e908-407c-9f50-ad291a0bb179\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " Mar 12 18:49:09.857539 master-0 kubenswrapper[29097]: I0312 18:49:09.857393 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-config\") pod \"1c556c93-e908-407c-9f50-ad291a0bb179\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " Mar 12 18:49:09.857539 master-0 kubenswrapper[29097]: I0312 18:49:09.857433 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-ovsdbserver-nb\") pod \"1c556c93-e908-407c-9f50-ad291a0bb179\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " Mar 12 18:49:09.857640 master-0 kubenswrapper[29097]: I0312 18:49:09.857605 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bqz8\" (UniqueName: \"kubernetes.io/projected/1c556c93-e908-407c-9f50-ad291a0bb179-kube-api-access-5bqz8\") pod \"1c556c93-e908-407c-9f50-ad291a0bb179\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " Mar 12 18:49:09.857703 master-0 kubenswrapper[29097]: I0312 18:49:09.857679 29097 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-dns-svc\") pod \"1c556c93-e908-407c-9f50-ad291a0bb179\" (UID: \"1c556c93-e908-407c-9f50-ad291a0bb179\") " Mar 12 18:49:09.888103 master-0 kubenswrapper[29097]: I0312 18:49:09.886750 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c556c93-e908-407c-9f50-ad291a0bb179-kube-api-access-5bqz8" (OuterVolumeSpecName: "kube-api-access-5bqz8") pod "1c556c93-e908-407c-9f50-ad291a0bb179" (UID: "1c556c93-e908-407c-9f50-ad291a0bb179"). InnerVolumeSpecName "kube-api-access-5bqz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:49:09.897218 master-0 kubenswrapper[29097]: I0312 18:49:09.894355 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-pz2nb" event={"ID":"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48","Type":"ContainerStarted","Data":"24c3e1c0366b72e56408c863295c1307ad9cea10248382b96ecf21cb5bb0b88a"} Mar 12 18:49:09.901638 master-0 kubenswrapper[29097]: I0312 18:49:09.901366 29097 generic.go:334] "Generic (PLEG): container finished" podID="1c556c93-e908-407c-9f50-ad291a0bb179" containerID="84420a33b4aea2f3c66a83786c8826a0bdc5ecffdcccccef53e00875780f9062" exitCode=0 Mar 12 18:49:09.903529 master-0 kubenswrapper[29097]: I0312 18:49:09.903049 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" Mar 12 18:49:09.903529 master-0 kubenswrapper[29097]: I0312 18:49:09.903293 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" event={"ID":"1c556c93-e908-407c-9f50-ad291a0bb179","Type":"ContainerDied","Data":"84420a33b4aea2f3c66a83786c8826a0bdc5ecffdcccccef53e00875780f9062"} Mar 12 18:49:09.903529 master-0 kubenswrapper[29097]: I0312 18:49:09.903333 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9c57cd77c-r4fqr" event={"ID":"1c556c93-e908-407c-9f50-ad291a0bb179","Type":"ContainerDied","Data":"923f43950678978efcdccf938e10af43e7388f73ffce85a4c92fc076b93b5629"} Mar 12 18:49:09.903529 master-0 kubenswrapper[29097]: I0312 18:49:09.903350 29097 scope.go:117] "RemoveContainer" containerID="84420a33b4aea2f3c66a83786c8826a0bdc5ecffdcccccef53e00875780f9062" Mar 12 18:49:09.925688 master-0 kubenswrapper[29097]: I0312 18:49:09.918489 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-pz2nb" podStartSLOduration=2.071156087 podStartE2EDuration="15.918471423s" podCreationTimestamp="2026-03-12 18:48:54 +0000 UTC" firstStartedPulling="2026-03-12 18:48:55.517810798 +0000 UTC m=+1175.071790895" lastFinishedPulling="2026-03-12 18:49:09.365126134 +0000 UTC m=+1188.919106231" observedRunningTime="2026-03-12 18:49:09.911922621 +0000 UTC m=+1189.465902718" watchObservedRunningTime="2026-03-12 18:49:09.918471423 +0000 UTC m=+1189.472451520" Mar 12 18:49:09.945103 master-0 kubenswrapper[29097]: I0312 18:49:09.939635 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1c556c93-e908-407c-9f50-ad291a0bb179" (UID: "1c556c93-e908-407c-9f50-ad291a0bb179"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:49:09.951144 master-0 kubenswrapper[29097]: I0312 18:49:09.950712 29097 scope.go:117] "RemoveContainer" containerID="9bcea6245874411139f2ad2f808a35d8864b7cda959005f74cb0124b883933e3" Mar 12 18:49:09.962587 master-0 kubenswrapper[29097]: I0312 18:49:09.960078 29097 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:09.962587 master-0 kubenswrapper[29097]: I0312 18:49:09.960113 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bqz8\" (UniqueName: \"kubernetes.io/projected/1c556c93-e908-407c-9f50-ad291a0bb179-kube-api-access-5bqz8\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:09.967532 master-0 kubenswrapper[29097]: I0312 18:49:09.964233 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1c556c93-e908-407c-9f50-ad291a0bb179" (UID: "1c556c93-e908-407c-9f50-ad291a0bb179"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:49:09.990677 master-0 kubenswrapper[29097]: I0312 18:49:09.985307 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-config" (OuterVolumeSpecName: "config") pod "1c556c93-e908-407c-9f50-ad291a0bb179" (UID: "1c556c93-e908-407c-9f50-ad291a0bb179"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:49:09.990677 master-0 kubenswrapper[29097]: I0312 18:49:09.986573 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1c556c93-e908-407c-9f50-ad291a0bb179" (UID: "1c556c93-e908-407c-9f50-ad291a0bb179"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:49:09.990677 master-0 kubenswrapper[29097]: I0312 18:49:09.986617 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1c556c93-e908-407c-9f50-ad291a0bb179" (UID: "1c556c93-e908-407c-9f50-ad291a0bb179"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:49:09.990677 master-0 kubenswrapper[29097]: I0312 18:49:09.987118 29097 scope.go:117] "RemoveContainer" containerID="84420a33b4aea2f3c66a83786c8826a0bdc5ecffdcccccef53e00875780f9062" Mar 12 18:49:09.990677 master-0 kubenswrapper[29097]: E0312 18:49:09.987579 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84420a33b4aea2f3c66a83786c8826a0bdc5ecffdcccccef53e00875780f9062\": container with ID starting with 84420a33b4aea2f3c66a83786c8826a0bdc5ecffdcccccef53e00875780f9062 not found: ID does not exist" containerID="84420a33b4aea2f3c66a83786c8826a0bdc5ecffdcccccef53e00875780f9062" Mar 12 18:49:09.990677 master-0 kubenswrapper[29097]: I0312 18:49:09.987608 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84420a33b4aea2f3c66a83786c8826a0bdc5ecffdcccccef53e00875780f9062"} err="failed to get container status 
\"84420a33b4aea2f3c66a83786c8826a0bdc5ecffdcccccef53e00875780f9062\": rpc error: code = NotFound desc = could not find container \"84420a33b4aea2f3c66a83786c8826a0bdc5ecffdcccccef53e00875780f9062\": container with ID starting with 84420a33b4aea2f3c66a83786c8826a0bdc5ecffdcccccef53e00875780f9062 not found: ID does not exist" Mar 12 18:49:09.990677 master-0 kubenswrapper[29097]: I0312 18:49:09.987630 29097 scope.go:117] "RemoveContainer" containerID="9bcea6245874411139f2ad2f808a35d8864b7cda959005f74cb0124b883933e3" Mar 12 18:49:09.990677 master-0 kubenswrapper[29097]: E0312 18:49:09.988058 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bcea6245874411139f2ad2f808a35d8864b7cda959005f74cb0124b883933e3\": container with ID starting with 9bcea6245874411139f2ad2f808a35d8864b7cda959005f74cb0124b883933e3 not found: ID does not exist" containerID="9bcea6245874411139f2ad2f808a35d8864b7cda959005f74cb0124b883933e3" Mar 12 18:49:09.990677 master-0 kubenswrapper[29097]: I0312 18:49:09.988088 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bcea6245874411139f2ad2f808a35d8864b7cda959005f74cb0124b883933e3"} err="failed to get container status \"9bcea6245874411139f2ad2f808a35d8864b7cda959005f74cb0124b883933e3\": rpc error: code = NotFound desc = could not find container \"9bcea6245874411139f2ad2f808a35d8864b7cda959005f74cb0124b883933e3\": container with ID starting with 9bcea6245874411139f2ad2f808a35d8864b7cda959005f74cb0124b883933e3 not found: ID does not exist" Mar 12 18:49:10.061784 master-0 kubenswrapper[29097]: I0312 18:49:10.061743 29097 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:10.061784 master-0 kubenswrapper[29097]: I0312 18:49:10.061778 29097 reconciler_common.go:293] "Volume 
detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:10.061911 master-0 kubenswrapper[29097]: I0312 18:49:10.061788 29097 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:10.061911 master-0 kubenswrapper[29097]: I0312 18:49:10.061798 29097 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c556c93-e908-407c-9f50-ad291a0bb179-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:10.259211 master-0 kubenswrapper[29097]: I0312 18:49:10.259164 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9c57cd77c-r4fqr"] Mar 12 18:49:10.270890 master-0 kubenswrapper[29097]: I0312 18:49:10.270755 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9c57cd77c-r4fqr"] Mar 12 18:49:10.733824 master-0 kubenswrapper[29097]: I0312 18:49:10.733769 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c556c93-e908-407c-9f50-ad291a0bb179" path="/var/lib/kubelet/pods/1c556c93-e908-407c-9f50-ad291a0bb179/volumes" Mar 12 18:49:10.917270 master-0 kubenswrapper[29097]: I0312 18:49:10.917217 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"7fb8fbc7-949d-4526-8456-fbf8277cee2f","Type":"ContainerStarted","Data":"832563a1e62d5d4b6180a20d5980336063fb49495b67923c8a4e7ffc2b268996"} Mar 12 18:49:10.937587 master-0 kubenswrapper[29097]: I0312 18:49:10.937437 29097 generic.go:334] "Generic (PLEG): container finished" podID="cc73eb83-6e67-4ebb-ad05-95baa34f0ab9" containerID="5d0151ff13f0bc936908cbb14ca167ac27cf814d224b79fc4d6f607ece16a9ed" exitCode=0 Mar 12 18:49:10.938285 master-0 kubenswrapper[29097]: 
I0312 18:49:10.938185 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9","Type":"ContainerDied","Data":"5d0151ff13f0bc936908cbb14ca167ac27cf814d224b79fc4d6f607ece16a9ed"} Mar 12 18:49:11.652982 master-0 kubenswrapper[29097]: I0312 18:49:11.652640 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Mar 12 18:49:11.700653 master-0 kubenswrapper[29097]: I0312 18:49:11.700611 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:49:11.700985 master-0 kubenswrapper[29097]: I0312 18:49:11.700972 29097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 18:49:11.701268 master-0 kubenswrapper[29097]: I0312 18:49:11.701247 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:49:11.701337 master-0 kubenswrapper[29097]: I0312 18:49:11.701323 29097 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 18:49:11.708013 master-0 kubenswrapper[29097]: I0312 18:49:11.707958 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-16afb-default-external-api-0" Mar 12 18:49:11.724033 master-0 kubenswrapper[29097]: I0312 18:49:11.723985 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-16afb-default-internal-api-0" Mar 12 18:49:11.920701 master-0 kubenswrapper[29097]: I0312 18:49:11.920585 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-combined-ca-bundle\") pod \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " Mar 12 18:49:11.921382 master-0 
kubenswrapper[29097]: I0312 18:49:11.920694 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfnpw\" (UniqueName: \"kubernetes.io/projected/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-kube-api-access-gfnpw\") pod \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " Mar 12 18:49:11.921382 master-0 kubenswrapper[29097]: I0312 18:49:11.920821 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-etc-podinfo\") pod \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " Mar 12 18:49:11.921382 master-0 kubenswrapper[29097]: I0312 18:49:11.920968 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " Mar 12 18:49:11.921382 master-0 kubenswrapper[29097]: I0312 18:49:11.921017 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-config\") pod \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " Mar 12 18:49:11.921382 master-0 kubenswrapper[29097]: I0312 18:49:11.921063 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-scripts\") pod \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " Mar 12 18:49:11.921382 master-0 kubenswrapper[29097]: I0312 18:49:11.921128 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-var-lib-ironic\") pod \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\" (UID: \"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9\") " Mar 12 18:49:11.924008 master-0 kubenswrapper[29097]: I0312 18:49:11.923939 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "cc73eb83-6e67-4ebb-ad05-95baa34f0ab9" (UID: "cc73eb83-6e67-4ebb-ad05-95baa34f0ab9"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:49:11.926943 master-0 kubenswrapper[29097]: I0312 18:49:11.926899 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "cc73eb83-6e67-4ebb-ad05-95baa34f0ab9" (UID: "cc73eb83-6e67-4ebb-ad05-95baa34f0ab9"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 12 18:49:11.927191 master-0 kubenswrapper[29097]: I0312 18:49:11.927135 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-config" (OuterVolumeSpecName: "config") pod "cc73eb83-6e67-4ebb-ad05-95baa34f0ab9" (UID: "cc73eb83-6e67-4ebb-ad05-95baa34f0ab9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:49:11.927652 master-0 kubenswrapper[29097]: I0312 18:49:11.927625 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-kube-api-access-gfnpw" (OuterVolumeSpecName: "kube-api-access-gfnpw") pod "cc73eb83-6e67-4ebb-ad05-95baa34f0ab9" (UID: "cc73eb83-6e67-4ebb-ad05-95baa34f0ab9"). 
InnerVolumeSpecName "kube-api-access-gfnpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:49:11.928316 master-0 kubenswrapper[29097]: I0312 18:49:11.928279 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-scripts" (OuterVolumeSpecName: "scripts") pod "cc73eb83-6e67-4ebb-ad05-95baa34f0ab9" (UID: "cc73eb83-6e67-4ebb-ad05-95baa34f0ab9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:49:11.930767 master-0 kubenswrapper[29097]: I0312 18:49:11.930731 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "cc73eb83-6e67-4ebb-ad05-95baa34f0ab9" (UID: "cc73eb83-6e67-4ebb-ad05-95baa34f0ab9"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:49:11.954355 master-0 kubenswrapper[29097]: I0312 18:49:11.954291 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"cc73eb83-6e67-4ebb-ad05-95baa34f0ab9","Type":"ContainerDied","Data":"5b1b9055ac9810c8c05965fc6a2d493ed37815321300cabae7b7d0e5db89fca1"} Mar 12 18:49:11.954355 master-0 kubenswrapper[29097]: I0312 18:49:11.954357 29097 scope.go:117] "RemoveContainer" containerID="5d0151ff13f0bc936908cbb14ca167ac27cf814d224b79fc4d6f607ece16a9ed" Mar 12 18:49:11.955001 master-0 kubenswrapper[29097]: I0312 18:49:11.954640 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Mar 12 18:49:12.018535 master-0 kubenswrapper[29097]: I0312 18:49:12.014186 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc73eb83-6e67-4ebb-ad05-95baa34f0ab9" (UID: "cc73eb83-6e67-4ebb-ad05-95baa34f0ab9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:49:12.024373 master-0 kubenswrapper[29097]: I0312 18:49:12.023821 29097 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:12.024373 master-0 kubenswrapper[29097]: I0312 18:49:12.023886 29097 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:12.024373 master-0 kubenswrapper[29097]: I0312 18:49:12.023905 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:12.024373 master-0 kubenswrapper[29097]: I0312 18:49:12.023923 29097 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:12.024373 master-0 kubenswrapper[29097]: I0312 18:49:12.023943 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:12.024373 
master-0 kubenswrapper[29097]: I0312 18:49:12.023961 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfnpw\" (UniqueName: \"kubernetes.io/projected/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-kube-api-access-gfnpw\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:12.024373 master-0 kubenswrapper[29097]: I0312 18:49:12.023977 29097 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:12.062370 master-0 kubenswrapper[29097]: I0312 18:49:12.062269 29097 scope.go:117] "RemoveContainer" containerID="0910a67133ca55cd87a9afdee9090d01aa6b9e620025dd024a6c541b516e36e7" Mar 12 18:49:12.396243 master-0 kubenswrapper[29097]: I0312 18:49:12.395588 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Mar 12 18:49:12.428616 master-0 kubenswrapper[29097]: I0312 18:49:12.428566 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-0"] Mar 12 18:49:12.441982 master-0 kubenswrapper[29097]: I0312 18:49:12.441677 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Mar 12 18:49:12.443659 master-0 kubenswrapper[29097]: E0312 18:49:12.442403 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc73eb83-6e67-4ebb-ad05-95baa34f0ab9" containerName="inspector-pxe-init" Mar 12 18:49:12.443659 master-0 kubenswrapper[29097]: I0312 18:49:12.442427 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc73eb83-6e67-4ebb-ad05-95baa34f0ab9" containerName="inspector-pxe-init" Mar 12 18:49:12.443659 master-0 kubenswrapper[29097]: E0312 18:49:12.442451 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c556c93-e908-407c-9f50-ad291a0bb179" containerName="dnsmasq-dns" Mar 12 18:49:12.443659 master-0 kubenswrapper[29097]: I0312 18:49:12.442457 29097 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1c556c93-e908-407c-9f50-ad291a0bb179" containerName="dnsmasq-dns" Mar 12 18:49:12.443659 master-0 kubenswrapper[29097]: E0312 18:49:12.442473 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c556c93-e908-407c-9f50-ad291a0bb179" containerName="init" Mar 12 18:49:12.443659 master-0 kubenswrapper[29097]: I0312 18:49:12.442480 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c556c93-e908-407c-9f50-ad291a0bb179" containerName="init" Mar 12 18:49:12.443659 master-0 kubenswrapper[29097]: E0312 18:49:12.442531 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc73eb83-6e67-4ebb-ad05-95baa34f0ab9" containerName="ironic-python-agent-init" Mar 12 18:49:12.443659 master-0 kubenswrapper[29097]: I0312 18:49:12.442538 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc73eb83-6e67-4ebb-ad05-95baa34f0ab9" containerName="ironic-python-agent-init" Mar 12 18:49:12.443659 master-0 kubenswrapper[29097]: I0312 18:49:12.443055 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc73eb83-6e67-4ebb-ad05-95baa34f0ab9" containerName="inspector-pxe-init" Mar 12 18:49:12.443659 master-0 kubenswrapper[29097]: I0312 18:49:12.443092 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c556c93-e908-407c-9f50-ad291a0bb179" containerName="dnsmasq-dns" Mar 12 18:49:12.447574 master-0 kubenswrapper[29097]: I0312 18:49:12.446867 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Mar 12 18:49:12.449336 master-0 kubenswrapper[29097]: I0312 18:49:12.449292 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Mar 12 18:49:12.450015 master-0 kubenswrapper[29097]: I0312 18:49:12.449853 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-internal-svc" Mar 12 18:49:12.458156 master-0 kubenswrapper[29097]: I0312 18:49:12.455814 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Mar 12 18:49:12.458156 master-0 kubenswrapper[29097]: I0312 18:49:12.456368 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Mar 12 18:49:12.458156 master-0 kubenswrapper[29097]: I0312 18:49:12.456628 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-public-svc" Mar 12 18:49:12.471860 master-0 kubenswrapper[29097]: I0312 18:49:12.471167 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Mar 12 18:49:12.562948 master-0 kubenswrapper[29097]: I0312 18:49:12.562883 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/78880e0c-4006-4417-be0e-5dc39d5bf43f-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.562948 master-0 kubenswrapper[29097]: I0312 18:49:12.562952 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/78880e0c-4006-4417-be0e-5dc39d5bf43f-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: 
\"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.563274 master-0 kubenswrapper[29097]: I0312 18:49:12.563039 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78880e0c-4006-4417-be0e-5dc39d5bf43f-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.563274 master-0 kubenswrapper[29097]: I0312 18:49:12.563130 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/78880e0c-4006-4417-be0e-5dc39d5bf43f-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.563274 master-0 kubenswrapper[29097]: I0312 18:49:12.563149 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78880e0c-4006-4417-be0e-5dc39d5bf43f-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.563274 master-0 kubenswrapper[29097]: I0312 18:49:12.563177 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/78880e0c-4006-4417-be0e-5dc39d5bf43f-config\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.563274 master-0 kubenswrapper[29097]: I0312 18:49:12.563225 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78880e0c-4006-4417-be0e-5dc39d5bf43f-scripts\") pod \"ironic-inspector-0\" (UID: 
\"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.563274 master-0 kubenswrapper[29097]: I0312 18:49:12.563262 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78880e0c-4006-4417-be0e-5dc39d5bf43f-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.563457 master-0 kubenswrapper[29097]: I0312 18:49:12.563314 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79992\" (UniqueName: \"kubernetes.io/projected/78880e0c-4006-4417-be0e-5dc39d5bf43f-kube-api-access-79992\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.664711 master-0 kubenswrapper[29097]: I0312 18:49:12.664652 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78880e0c-4006-4417-be0e-5dc39d5bf43f-scripts\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.664711 master-0 kubenswrapper[29097]: I0312 18:49:12.664710 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78880e0c-4006-4417-be0e-5dc39d5bf43f-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.665023 master-0 kubenswrapper[29097]: I0312 18:49:12.664760 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79992\" (UniqueName: \"kubernetes.io/projected/78880e0c-4006-4417-be0e-5dc39d5bf43f-kube-api-access-79992\") pod \"ironic-inspector-0\" (UID: 
\"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.665023 master-0 kubenswrapper[29097]: I0312 18:49:12.664782 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/78880e0c-4006-4417-be0e-5dc39d5bf43f-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.665023 master-0 kubenswrapper[29097]: I0312 18:49:12.664808 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/78880e0c-4006-4417-be0e-5dc39d5bf43f-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.665023 master-0 kubenswrapper[29097]: I0312 18:49:12.664858 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78880e0c-4006-4417-be0e-5dc39d5bf43f-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.665023 master-0 kubenswrapper[29097]: I0312 18:49:12.664933 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/78880e0c-4006-4417-be0e-5dc39d5bf43f-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.665023 master-0 kubenswrapper[29097]: I0312 18:49:12.664955 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78880e0c-4006-4417-be0e-5dc39d5bf43f-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: 
\"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.665023 master-0 kubenswrapper[29097]: I0312 18:49:12.664980 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/78880e0c-4006-4417-be0e-5dc39d5bf43f-config\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.665497 master-0 kubenswrapper[29097]: I0312 18:49:12.665458 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/78880e0c-4006-4417-be0e-5dc39d5bf43f-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.665911 master-0 kubenswrapper[29097]: I0312 18:49:12.665878 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/78880e0c-4006-4417-be0e-5dc39d5bf43f-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.668503 master-0 kubenswrapper[29097]: I0312 18:49:12.668459 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78880e0c-4006-4417-be0e-5dc39d5bf43f-scripts\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.669287 master-0 kubenswrapper[29097]: I0312 18:49:12.669203 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/78880e0c-4006-4417-be0e-5dc39d5bf43f-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.669846 
master-0 kubenswrapper[29097]: I0312 18:49:12.669818 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78880e0c-4006-4417-be0e-5dc39d5bf43f-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.669901 master-0 kubenswrapper[29097]: I0312 18:49:12.669867 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/78880e0c-4006-4417-be0e-5dc39d5bf43f-config\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.673204 master-0 kubenswrapper[29097]: I0312 18:49:12.673171 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78880e0c-4006-4417-be0e-5dc39d5bf43f-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.677037 master-0 kubenswrapper[29097]: I0312 18:49:12.676981 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78880e0c-4006-4417-be0e-5dc39d5bf43f-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.684966 master-0 kubenswrapper[29097]: I0312 18:49:12.684861 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79992\" (UniqueName: \"kubernetes.io/projected/78880e0c-4006-4417-be0e-5dc39d5bf43f-kube-api-access-79992\") pod \"ironic-inspector-0\" (UID: \"78880e0c-4006-4417-be0e-5dc39d5bf43f\") " pod="openstack/ironic-inspector-0" Mar 12 18:49:12.737111 master-0 kubenswrapper[29097]: I0312 18:49:12.736991 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="cc73eb83-6e67-4ebb-ad05-95baa34f0ab9" path="/var/lib/kubelet/pods/cc73eb83-6e67-4ebb-ad05-95baa34f0ab9/volumes" Mar 12 18:49:12.786537 master-0 kubenswrapper[29097]: I0312 18:49:12.785972 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Mar 12 18:49:13.383006 master-0 kubenswrapper[29097]: I0312 18:49:13.382948 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Mar 12 18:49:13.987381 master-0 kubenswrapper[29097]: I0312 18:49:13.987308 29097 generic.go:334] "Generic (PLEG): container finished" podID="78880e0c-4006-4417-be0e-5dc39d5bf43f" containerID="248a6d67f4b6b7a458e35b68756e73b4df93412b195ebf399a0ddcbf07dd3da8" exitCode=0 Mar 12 18:49:13.987819 master-0 kubenswrapper[29097]: I0312 18:49:13.987389 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"78880e0c-4006-4417-be0e-5dc39d5bf43f","Type":"ContainerDied","Data":"248a6d67f4b6b7a458e35b68756e73b4df93412b195ebf399a0ddcbf07dd3da8"} Mar 12 18:49:13.987819 master-0 kubenswrapper[29097]: I0312 18:49:13.987502 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"78880e0c-4006-4417-be0e-5dc39d5bf43f","Type":"ContainerStarted","Data":"9948074a4b908f682be1bf4a156b5ced65532e7cf68978c85980bed67959f44c"} Mar 12 18:49:15.004027 master-0 kubenswrapper[29097]: I0312 18:49:15.003728 29097 generic.go:334] "Generic (PLEG): container finished" podID="78880e0c-4006-4417-be0e-5dc39d5bf43f" containerID="6a720fa51d6845fcf092a982a54e92a4868d622e5e0b20c2afc11fb907e835a6" exitCode=0 Mar 12 18:49:15.004027 master-0 kubenswrapper[29097]: I0312 18:49:15.003818 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"78880e0c-4006-4417-be0e-5dc39d5bf43f","Type":"ContainerDied","Data":"6a720fa51d6845fcf092a982a54e92a4868d622e5e0b20c2afc11fb907e835a6"} Mar 12 18:49:16.018711 
master-0 kubenswrapper[29097]: I0312 18:49:16.018663 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"78880e0c-4006-4417-be0e-5dc39d5bf43f","Type":"ContainerStarted","Data":"43d6a903ff7cf57981fe01b921e90bb75800b95b533e880bf548d84e3f0fbc07"} Mar 12 18:49:17.058344 master-0 kubenswrapper[29097]: I0312 18:49:17.058294 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"78880e0c-4006-4417-be0e-5dc39d5bf43f","Type":"ContainerStarted","Data":"0ef5a4d0f42471f08a5170557df0549f82be95fff39dc9fca868cd244233e713"} Mar 12 18:49:17.058344 master-0 kubenswrapper[29097]: I0312 18:49:17.058344 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"78880e0c-4006-4417-be0e-5dc39d5bf43f","Type":"ContainerStarted","Data":"bbe9bb6fda4539b533a5e14b81a6749435cbd642bea1d4e171ced4613e65e7c1"} Mar 12 18:49:18.074434 master-0 kubenswrapper[29097]: I0312 18:49:18.074377 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"78880e0c-4006-4417-be0e-5dc39d5bf43f","Type":"ContainerStarted","Data":"29d435682e18e07f3e7272d46c1b94f65b7330779f5c101384b882a0edd9ed7d"} Mar 12 18:49:18.074434 master-0 kubenswrapper[29097]: I0312 18:49:18.074426 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"78880e0c-4006-4417-be0e-5dc39d5bf43f","Type":"ContainerStarted","Data":"ea55bcc47dc11e69ffc9d002e070d01ced4fb064b509b9aabef00a38a2181666"} Mar 12 18:49:18.075064 master-0 kubenswrapper[29097]: I0312 18:49:18.074671 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 12 18:49:18.140740 master-0 kubenswrapper[29097]: I0312 18:49:18.140646 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=6.140625819 
podStartE2EDuration="6.140625819s" podCreationTimestamp="2026-03-12 18:49:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:49:18.12774057 +0000 UTC m=+1197.681720687" watchObservedRunningTime="2026-03-12 18:49:18.140625819 +0000 UTC m=+1197.694605916" Mar 12 18:49:19.088015 master-0 kubenswrapper[29097]: I0312 18:49:19.087949 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 12 18:49:20.173465 master-0 kubenswrapper[29097]: I0312 18:49:20.173405 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Mar 12 18:49:20.732112 master-0 kubenswrapper[29097]: I0312 18:49:20.731933 29097 scope.go:117] "RemoveContainer" containerID="b8aaa63468c9fd987f471541a3a63ebc1775a5b66d6068f313f8da7cd0af22e7" Mar 12 18:49:21.115421 master-0 kubenswrapper[29097]: I0312 18:49:21.115366 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" event={"ID":"87a19dc7-5415-4d3d-a22e-9e2524a67e38","Type":"ContainerStarted","Data":"4bdca24346f859430a5c9498fef11bd0e202128c4a210423fd411f943b8e7d42"} Mar 12 18:49:21.115732 master-0 kubenswrapper[29097]: I0312 18:49:21.115701 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" Mar 12 18:49:21.120018 master-0 kubenswrapper[29097]: I0312 18:49:21.119840 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Mar 12 18:49:21.246806 master-0 kubenswrapper[29097]: E0312 18:49:21.246667 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 
26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547 Mar 12 18:49:22.793260 master-0 kubenswrapper[29097]: I0312 18:49:22.793190 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 12 18:49:22.793260 master-0 kubenswrapper[29097]: I0312 18:49:22.793269 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 12 18:49:22.793932 master-0 kubenswrapper[29097]: I0312 18:49:22.793286 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Mar 12 18:49:22.793932 master-0 kubenswrapper[29097]: I0312 18:49:22.793302 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Mar 12 18:49:22.826014 master-0 kubenswrapper[29097]: I0312 18:49:22.825957 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Mar 12 18:49:22.829049 master-0 kubenswrapper[29097]: I0312 18:49:22.828993 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Mar 12 18:49:23.157587 master-0 kubenswrapper[29097]: I0312 18:49:23.157001 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Mar 12 18:49:23.159593 master-0 kubenswrapper[29097]: I0312 18:49:23.158630 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Mar 12 18:49:27.742877 master-0 kubenswrapper[29097]: I0312 18:49:27.742770 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-7cb69d965b-d79tc" Mar 12 18:49:28.241693 master-0 kubenswrapper[29097]: I0312 18:49:28.241631 29097 generic.go:334] "Generic (PLEG): 
container finished" podID="04fa4fda-ab43-4c4e-be26-48fc6ef9fc48" containerID="24c3e1c0366b72e56408c863295c1307ad9cea10248382b96ecf21cb5bb0b88a" exitCode=0 Mar 12 18:49:28.241693 master-0 kubenswrapper[29097]: I0312 18:49:28.241694 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-pz2nb" event={"ID":"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48","Type":"ContainerDied","Data":"24c3e1c0366b72e56408c863295c1307ad9cea10248382b96ecf21cb5bb0b88a"} Mar 12 18:49:29.758675 master-0 kubenswrapper[29097]: I0312 18:49:29.758621 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-pz2nb" Mar 12 18:49:29.831084 master-0 kubenswrapper[29097]: I0312 18:49:29.831019 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-scripts\") pod \"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48\" (UID: \"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48\") " Mar 12 18:49:29.831292 master-0 kubenswrapper[29097]: I0312 18:49:29.831098 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxctn\" (UniqueName: \"kubernetes.io/projected/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-kube-api-access-nxctn\") pod \"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48\" (UID: \"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48\") " Mar 12 18:49:29.831369 master-0 kubenswrapper[29097]: I0312 18:49:29.831330 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-config-data\") pod \"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48\" (UID: \"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48\") " Mar 12 18:49:29.831419 master-0 kubenswrapper[29097]: I0312 18:49:29.831384 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-combined-ca-bundle\") pod \"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48\" (UID: \"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48\") " Mar 12 18:49:29.835475 master-0 kubenswrapper[29097]: I0312 18:49:29.835437 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-scripts" (OuterVolumeSpecName: "scripts") pod "04fa4fda-ab43-4c4e-be26-48fc6ef9fc48" (UID: "04fa4fda-ab43-4c4e-be26-48fc6ef9fc48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:49:29.838251 master-0 kubenswrapper[29097]: I0312 18:49:29.838219 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-kube-api-access-nxctn" (OuterVolumeSpecName: "kube-api-access-nxctn") pod "04fa4fda-ab43-4c4e-be26-48fc6ef9fc48" (UID: "04fa4fda-ab43-4c4e-be26-48fc6ef9fc48"). InnerVolumeSpecName "kube-api-access-nxctn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:49:29.864950 master-0 kubenswrapper[29097]: I0312 18:49:29.864886 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04fa4fda-ab43-4c4e-be26-48fc6ef9fc48" (UID: "04fa4fda-ab43-4c4e-be26-48fc6ef9fc48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:49:29.870793 master-0 kubenswrapper[29097]: I0312 18:49:29.870732 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-config-data" (OuterVolumeSpecName: "config-data") pod "04fa4fda-ab43-4c4e-be26-48fc6ef9fc48" (UID: "04fa4fda-ab43-4c4e-be26-48fc6ef9fc48"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:49:29.935008 master-0 kubenswrapper[29097]: I0312 18:49:29.934923 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:29.935143 master-0 kubenswrapper[29097]: I0312 18:49:29.935008 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:29.935143 master-0 kubenswrapper[29097]: I0312 18:49:29.935023 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:29.935143 master-0 kubenswrapper[29097]: I0312 18:49:29.935037 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxctn\" (UniqueName: \"kubernetes.io/projected/04fa4fda-ab43-4c4e-be26-48fc6ef9fc48-kube-api-access-nxctn\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:30.274136 master-0 kubenswrapper[29097]: I0312 18:49:30.273956 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-pz2nb" Mar 12 18:49:30.274571 master-0 kubenswrapper[29097]: I0312 18:49:30.273835 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-pz2nb" event={"ID":"04fa4fda-ab43-4c4e-be26-48fc6ef9fc48","Type":"ContainerDied","Data":"36ba084c9555bfa5ee036f5b5e5de0d4d6e988cc0af3a30337282c10d9d8a95a"} Mar 12 18:49:30.274571 master-0 kubenswrapper[29097]: I0312 18:49:30.274553 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36ba084c9555bfa5ee036f5b5e5de0d4d6e988cc0af3a30337282c10d9d8a95a" Mar 12 18:49:30.455125 master-0 kubenswrapper[29097]: I0312 18:49:30.452753 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 12 18:49:30.471540 master-0 kubenswrapper[29097]: E0312 18:49:30.469818 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04fa4fda-ab43-4c4e-be26-48fc6ef9fc48" containerName="nova-cell0-conductor-db-sync" Mar 12 18:49:30.471540 master-0 kubenswrapper[29097]: I0312 18:49:30.469868 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="04fa4fda-ab43-4c4e-be26-48fc6ef9fc48" containerName="nova-cell0-conductor-db-sync" Mar 12 18:49:30.471540 master-0 kubenswrapper[29097]: I0312 18:49:30.470234 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="04fa4fda-ab43-4c4e-be26-48fc6ef9fc48" containerName="nova-cell0-conductor-db-sync" Mar 12 18:49:30.471540 master-0 kubenswrapper[29097]: I0312 18:49:30.470874 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 12 18:49:30.471540 master-0 kubenswrapper[29097]: I0312 18:49:30.470951 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 12 18:49:30.476421 master-0 kubenswrapper[29097]: I0312 18:49:30.473992 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 12 18:49:30.553587 master-0 kubenswrapper[29097]: I0312 18:49:30.553498 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2f9a34-cf61-4193-a964-a4a2cbf0adb6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9d2f9a34-cf61-4193-a964-a4a2cbf0adb6\") " pod="openstack/nova-cell0-conductor-0" Mar 12 18:49:30.553798 master-0 kubenswrapper[29097]: I0312 18:49:30.553705 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d2f9a34-cf61-4193-a964-a4a2cbf0adb6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9d2f9a34-cf61-4193-a964-a4a2cbf0adb6\") " pod="openstack/nova-cell0-conductor-0" Mar 12 18:49:30.553798 master-0 kubenswrapper[29097]: I0312 18:49:30.553764 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw4pr\" (UniqueName: \"kubernetes.io/projected/9d2f9a34-cf61-4193-a964-a4a2cbf0adb6-kube-api-access-tw4pr\") pod \"nova-cell0-conductor-0\" (UID: \"9d2f9a34-cf61-4193-a964-a4a2cbf0adb6\") " pod="openstack/nova-cell0-conductor-0" Mar 12 18:49:30.656552 master-0 kubenswrapper[29097]: I0312 18:49:30.656487 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2f9a34-cf61-4193-a964-a4a2cbf0adb6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9d2f9a34-cf61-4193-a964-a4a2cbf0adb6\") " pod="openstack/nova-cell0-conductor-0" Mar 12 18:49:30.656645 master-0 kubenswrapper[29097]: I0312 18:49:30.656600 29097 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d2f9a34-cf61-4193-a964-a4a2cbf0adb6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9d2f9a34-cf61-4193-a964-a4a2cbf0adb6\") " pod="openstack/nova-cell0-conductor-0" Mar 12 18:49:30.656645 master-0 kubenswrapper[29097]: I0312 18:49:30.656627 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw4pr\" (UniqueName: \"kubernetes.io/projected/9d2f9a34-cf61-4193-a964-a4a2cbf0adb6-kube-api-access-tw4pr\") pod \"nova-cell0-conductor-0\" (UID: \"9d2f9a34-cf61-4193-a964-a4a2cbf0adb6\") " pod="openstack/nova-cell0-conductor-0" Mar 12 18:49:30.660954 master-0 kubenswrapper[29097]: I0312 18:49:30.660860 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2f9a34-cf61-4193-a964-a4a2cbf0adb6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9d2f9a34-cf61-4193-a964-a4a2cbf0adb6\") " pod="openstack/nova-cell0-conductor-0" Mar 12 18:49:30.666601 master-0 kubenswrapper[29097]: I0312 18:49:30.665366 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d2f9a34-cf61-4193-a964-a4a2cbf0adb6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9d2f9a34-cf61-4193-a964-a4a2cbf0adb6\") " pod="openstack/nova-cell0-conductor-0" Mar 12 18:49:30.673567 master-0 kubenswrapper[29097]: I0312 18:49:30.673504 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw4pr\" (UniqueName: \"kubernetes.io/projected/9d2f9a34-cf61-4193-a964-a4a2cbf0adb6-kube-api-access-tw4pr\") pod \"nova-cell0-conductor-0\" (UID: \"9d2f9a34-cf61-4193-a964-a4a2cbf0adb6\") " pod="openstack/nova-cell0-conductor-0" Mar 12 18:49:30.788797 master-0 kubenswrapper[29097]: I0312 18:49:30.788730 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 12 18:49:31.283239 master-0 kubenswrapper[29097]: W0312 18:49:31.283156 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d2f9a34_cf61_4193_a964_a4a2cbf0adb6.slice/crio-a6b5f7b3e6593f9771b5840a9db0b6a4261c8da2c1828628759dd96d10780ec1 WatchSource:0}: Error finding container a6b5f7b3e6593f9771b5840a9db0b6a4261c8da2c1828628759dd96d10780ec1: Status 404 returned error can't find the container with id a6b5f7b3e6593f9771b5840a9db0b6a4261c8da2c1828628759dd96d10780ec1 Mar 12 18:49:31.289109 master-0 kubenswrapper[29097]: I0312 18:49:31.289030 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 12 18:49:32.315055 master-0 kubenswrapper[29097]: I0312 18:49:32.314018 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9d2f9a34-cf61-4193-a964-a4a2cbf0adb6","Type":"ContainerStarted","Data":"5e5243ddbf0990e94b31c3b482b42e7b991ac298d6678a8d476c6b6f10ae3f26"} Mar 12 18:49:32.315715 master-0 kubenswrapper[29097]: I0312 18:49:32.315071 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 12 18:49:32.315715 master-0 kubenswrapper[29097]: I0312 18:49:32.315089 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9d2f9a34-cf61-4193-a964-a4a2cbf0adb6","Type":"ContainerStarted","Data":"a6b5f7b3e6593f9771b5840a9db0b6a4261c8da2c1828628759dd96d10780ec1"} Mar 12 18:49:32.336694 master-0 kubenswrapper[29097]: I0312 18:49:32.336505 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.336484308 podStartE2EDuration="2.336484308s" podCreationTimestamp="2026-03-12 18:49:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:49:32.335207146 +0000 UTC m=+1211.889187243" watchObservedRunningTime="2026-03-12 18:49:32.336484308 +0000 UTC m=+1211.890464405" Mar 12 18:49:40.826635 master-0 kubenswrapper[29097]: I0312 18:49:40.826544 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Mar 12 18:49:41.791867 master-0 kubenswrapper[29097]: I0312 18:49:41.791799 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-p89p7"] Mar 12 18:49:41.794263 master-0 kubenswrapper[29097]: I0312 18:49:41.794229 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-p89p7" Mar 12 18:49:41.797855 master-0 kubenswrapper[29097]: I0312 18:49:41.797817 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Mar 12 18:49:41.797970 master-0 kubenswrapper[29097]: I0312 18:49:41.797914 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Mar 12 18:49:41.829605 master-0 kubenswrapper[29097]: I0312 18:49:41.829534 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-p89p7"] Mar 12 18:49:41.855589 master-0 kubenswrapper[29097]: I0312 18:49:41.855506 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz4sp\" (UniqueName: \"kubernetes.io/projected/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-kube-api-access-mz4sp\") pod \"nova-cell0-cell-mapping-p89p7\" (UID: \"ebb59236-a534-4ed8-9f62-1be13d1bdaf9\") " pod="openstack/nova-cell0-cell-mapping-p89p7" Mar 12 18:49:41.855589 master-0 kubenswrapper[29097]: I0312 18:49:41.855590 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-config-data\") pod \"nova-cell0-cell-mapping-p89p7\" (UID: \"ebb59236-a534-4ed8-9f62-1be13d1bdaf9\") " pod="openstack/nova-cell0-cell-mapping-p89p7" Mar 12 18:49:41.855871 master-0 kubenswrapper[29097]: I0312 18:49:41.855609 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-p89p7\" (UID: \"ebb59236-a534-4ed8-9f62-1be13d1bdaf9\") " pod="openstack/nova-cell0-cell-mapping-p89p7" Mar 12 18:49:41.856123 master-0 kubenswrapper[29097]: I0312 18:49:41.855966 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-scripts\") pod \"nova-cell0-cell-mapping-p89p7\" (UID: \"ebb59236-a534-4ed8-9f62-1be13d1bdaf9\") " pod="openstack/nova-cell0-cell-mapping-p89p7" Mar 12 18:49:41.957779 master-0 kubenswrapper[29097]: I0312 18:49:41.957712 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-scripts\") pod \"nova-cell0-cell-mapping-p89p7\" (UID: \"ebb59236-a534-4ed8-9f62-1be13d1bdaf9\") " pod="openstack/nova-cell0-cell-mapping-p89p7" Mar 12 18:49:41.958001 master-0 kubenswrapper[29097]: I0312 18:49:41.957861 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz4sp\" (UniqueName: \"kubernetes.io/projected/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-kube-api-access-mz4sp\") pod \"nova-cell0-cell-mapping-p89p7\" (UID: \"ebb59236-a534-4ed8-9f62-1be13d1bdaf9\") " pod="openstack/nova-cell0-cell-mapping-p89p7" Mar 12 18:49:41.958001 master-0 kubenswrapper[29097]: I0312 18:49:41.957885 29097 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-config-data\") pod \"nova-cell0-cell-mapping-p89p7\" (UID: \"ebb59236-a534-4ed8-9f62-1be13d1bdaf9\") " pod="openstack/nova-cell0-cell-mapping-p89p7" Mar 12 18:49:41.958001 master-0 kubenswrapper[29097]: I0312 18:49:41.957902 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-p89p7\" (UID: \"ebb59236-a534-4ed8-9f62-1be13d1bdaf9\") " pod="openstack/nova-cell0-cell-mapping-p89p7" Mar 12 18:49:41.961473 master-0 kubenswrapper[29097]: I0312 18:49:41.961424 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-scripts\") pod \"nova-cell0-cell-mapping-p89p7\" (UID: \"ebb59236-a534-4ed8-9f62-1be13d1bdaf9\") " pod="openstack/nova-cell0-cell-mapping-p89p7" Mar 12 18:49:41.961537 master-0 kubenswrapper[29097]: I0312 18:49:41.961475 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-p89p7\" (UID: \"ebb59236-a534-4ed8-9f62-1be13d1bdaf9\") " pod="openstack/nova-cell0-cell-mapping-p89p7" Mar 12 18:49:41.962054 master-0 kubenswrapper[29097]: I0312 18:49:41.962005 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-config-data\") pod \"nova-cell0-cell-mapping-p89p7\" (UID: \"ebb59236-a534-4ed8-9f62-1be13d1bdaf9\") " pod="openstack/nova-cell0-cell-mapping-p89p7" Mar 12 18:49:42.160587 master-0 kubenswrapper[29097]: I0312 18:49:42.160222 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mz4sp\" (UniqueName: \"kubernetes.io/projected/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-kube-api-access-mz4sp\") pod \"nova-cell0-cell-mapping-p89p7\" (UID: \"ebb59236-a534-4ed8-9f62-1be13d1bdaf9\") " pod="openstack/nova-cell0-cell-mapping-p89p7" Mar 12 18:49:42.424769 master-0 kubenswrapper[29097]: I0312 18:49:42.424205 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Mar 12 18:49:42.425792 master-0 kubenswrapper[29097]: I0312 18:49:42.425765 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 18:49:42.436268 master-0 kubenswrapper[29097]: I0312 18:49:42.435215 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-ironic-compute-config-data" Mar 12 18:49:42.457966 master-0 kubenswrapper[29097]: I0312 18:49:42.457915 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-p89p7" Mar 12 18:49:42.470428 master-0 kubenswrapper[29097]: I0312 18:49:42.470314 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ded41e3-0282-4ae3-871f-46803445326f-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"3ded41e3-0282-4ae3-871f-46803445326f\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 18:49:42.470428 master-0 kubenswrapper[29097]: I0312 18:49:42.470368 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t8ff\" (UniqueName: \"kubernetes.io/projected/3ded41e3-0282-4ae3-871f-46803445326f-kube-api-access-9t8ff\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"3ded41e3-0282-4ae3-871f-46803445326f\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 18:49:42.470428 master-0 kubenswrapper[29097]: 
I0312 18:49:42.470423 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ded41e3-0282-4ae3-871f-46803445326f-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"3ded41e3-0282-4ae3-871f-46803445326f\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 18:49:42.574570 master-0 kubenswrapper[29097]: I0312 18:49:42.574454 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Mar 12 18:49:42.602008 master-0 kubenswrapper[29097]: I0312 18:49:42.601254 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ded41e3-0282-4ae3-871f-46803445326f-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"3ded41e3-0282-4ae3-871f-46803445326f\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 18:49:42.602008 master-0 kubenswrapper[29097]: I0312 18:49:42.601355 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t8ff\" (UniqueName: \"kubernetes.io/projected/3ded41e3-0282-4ae3-871f-46803445326f-kube-api-access-9t8ff\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"3ded41e3-0282-4ae3-871f-46803445326f\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 18:49:42.602008 master-0 kubenswrapper[29097]: I0312 18:49:42.601456 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ded41e3-0282-4ae3-871f-46803445326f-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"3ded41e3-0282-4ae3-871f-46803445326f\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 18:49:42.609503 master-0 kubenswrapper[29097]: I0312 18:49:42.609460 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3ded41e3-0282-4ae3-871f-46803445326f-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"3ded41e3-0282-4ae3-871f-46803445326f\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 18:49:42.623462 master-0 kubenswrapper[29097]: I0312 18:49:42.623419 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ded41e3-0282-4ae3-871f-46803445326f-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"3ded41e3-0282-4ae3-871f-46803445326f\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 18:49:43.608597 master-0 kubenswrapper[29097]: I0312 18:49:43.608534 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t8ff\" (UniqueName: \"kubernetes.io/projected/3ded41e3-0282-4ae3-871f-46803445326f-kube-api-access-9t8ff\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"3ded41e3-0282-4ae3-871f-46803445326f\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 18:49:43.609901 master-0 kubenswrapper[29097]: I0312 18:49:43.609147 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lfv8d"] Mar 12 18:49:43.611529 master-0 kubenswrapper[29097]: I0312 18:49:43.611474 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lfv8d" Mar 12 18:49:43.616957 master-0 kubenswrapper[29097]: I0312 18:49:43.615767 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 12 18:49:43.620963 master-0 kubenswrapper[29097]: I0312 18:49:43.620910 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 12 18:49:43.653650 master-0 kubenswrapper[29097]: I0312 18:49:43.653592 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lfv8d"] Mar 12 18:49:43.654248 master-0 kubenswrapper[29097]: I0312 18:49:43.654170 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 18:49:43.744654 master-0 kubenswrapper[29097]: I0312 18:49:43.744398 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrlmt\" (UniqueName: \"kubernetes.io/projected/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-kube-api-access-zrlmt\") pod \"nova-cell1-conductor-db-sync-lfv8d\" (UID: \"b4177d20-cc30-4b7f-872e-2c8692ee6b8e\") " pod="openstack/nova-cell1-conductor-db-sync-lfv8d" Mar 12 18:49:43.744654 master-0 kubenswrapper[29097]: I0312 18:49:43.744496 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lfv8d\" (UID: \"b4177d20-cc30-4b7f-872e-2c8692ee6b8e\") " pod="openstack/nova-cell1-conductor-db-sync-lfv8d" Mar 12 18:49:43.744654 master-0 kubenswrapper[29097]: I0312 18:49:43.744542 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-scripts\") pod \"nova-cell1-conductor-db-sync-lfv8d\" (UID: \"b4177d20-cc30-4b7f-872e-2c8692ee6b8e\") " pod="openstack/nova-cell1-conductor-db-sync-lfv8d" Mar 12 18:49:43.744654 master-0 kubenswrapper[29097]: I0312 18:49:43.744611 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-config-data\") pod \"nova-cell1-conductor-db-sync-lfv8d\" (UID: \"b4177d20-cc30-4b7f-872e-2c8692ee6b8e\") " pod="openstack/nova-cell1-conductor-db-sync-lfv8d" Mar 12 18:49:43.791926 master-0 kubenswrapper[29097]: I0312 18:49:43.784587 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-p89p7"] Mar 12 18:49:43.816725 master-0 kubenswrapper[29097]: I0312 18:49:43.816663 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 12 18:49:43.823420 master-0 kubenswrapper[29097]: I0312 18:49:43.818854 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 18:49:43.823420 master-0 kubenswrapper[29097]: I0312 18:49:43.822234 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 12 18:49:43.846666 master-0 kubenswrapper[29097]: I0312 18:49:43.846603 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lfv8d\" (UID: \"b4177d20-cc30-4b7f-872e-2c8692ee6b8e\") " pod="openstack/nova-cell1-conductor-db-sync-lfv8d" Mar 12 18:49:43.846666 master-0 kubenswrapper[29097]: I0312 18:49:43.846661 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-scripts\") pod \"nova-cell1-conductor-db-sync-lfv8d\" (UID: \"b4177d20-cc30-4b7f-872e-2c8692ee6b8e\") " pod="openstack/nova-cell1-conductor-db-sync-lfv8d" Mar 12 18:49:43.847552 master-0 kubenswrapper[29097]: I0312 18:49:43.847216 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-config-data\") pod \"nova-cell1-conductor-db-sync-lfv8d\" (UID: \"b4177d20-cc30-4b7f-872e-2c8692ee6b8e\") " pod="openstack/nova-cell1-conductor-db-sync-lfv8d" Mar 12 18:49:43.847552 master-0 kubenswrapper[29097]: I0312 18:49:43.847397 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrlmt\" (UniqueName: \"kubernetes.io/projected/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-kube-api-access-zrlmt\") pod \"nova-cell1-conductor-db-sync-lfv8d\" (UID: \"b4177d20-cc30-4b7f-872e-2c8692ee6b8e\") " pod="openstack/nova-cell1-conductor-db-sync-lfv8d" Mar 12 18:49:43.865127 master-0 kubenswrapper[29097]: I0312 18:49:43.865052 29097 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-config-data\") pod \"nova-cell1-conductor-db-sync-lfv8d\" (UID: \"b4177d20-cc30-4b7f-872e-2c8692ee6b8e\") " pod="openstack/nova-cell1-conductor-db-sync-lfv8d" Mar 12 18:49:43.884986 master-0 kubenswrapper[29097]: I0312 18:49:43.882005 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-scripts\") pod \"nova-cell1-conductor-db-sync-lfv8d\" (UID: \"b4177d20-cc30-4b7f-872e-2c8692ee6b8e\") " pod="openstack/nova-cell1-conductor-db-sync-lfv8d" Mar 12 18:49:43.884986 master-0 kubenswrapper[29097]: I0312 18:49:43.883964 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 18:49:43.906208 master-0 kubenswrapper[29097]: I0312 18:49:43.906153 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lfv8d\" (UID: \"b4177d20-cc30-4b7f-872e-2c8692ee6b8e\") " pod="openstack/nova-cell1-conductor-db-sync-lfv8d" Mar 12 18:49:43.953969 master-0 kubenswrapper[29097]: I0312 18:49:43.953575 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7\") " pod="openstack/nova-api-0" Mar 12 18:49:43.953969 master-0 kubenswrapper[29097]: I0312 18:49:43.953881 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-config-data\") pod \"nova-api-0\" (UID: \"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7\") " 
pod="openstack/nova-api-0" Mar 12 18:49:43.953969 master-0 kubenswrapper[29097]: I0312 18:49:43.953942 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-logs\") pod \"nova-api-0\" (UID: \"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7\") " pod="openstack/nova-api-0" Mar 12 18:49:43.953969 master-0 kubenswrapper[29097]: I0312 18:49:43.953972 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpx2l\" (UniqueName: \"kubernetes.io/projected/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-kube-api-access-tpx2l\") pod \"nova-api-0\" (UID: \"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7\") " pod="openstack/nova-api-0" Mar 12 18:49:44.058739 master-0 kubenswrapper[29097]: I0312 18:49:44.055898 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7\") " pod="openstack/nova-api-0" Mar 12 18:49:44.058739 master-0 kubenswrapper[29097]: I0312 18:49:44.056006 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-config-data\") pod \"nova-api-0\" (UID: \"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7\") " pod="openstack/nova-api-0" Mar 12 18:49:44.058739 master-0 kubenswrapper[29097]: I0312 18:49:44.056071 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-logs\") pod \"nova-api-0\" (UID: \"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7\") " pod="openstack/nova-api-0" Mar 12 18:49:44.058739 master-0 kubenswrapper[29097]: I0312 18:49:44.056124 29097 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tpx2l\" (UniqueName: \"kubernetes.io/projected/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-kube-api-access-tpx2l\") pod \"nova-api-0\" (UID: \"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7\") " pod="openstack/nova-api-0" Mar 12 18:49:44.058739 master-0 kubenswrapper[29097]: I0312 18:49:44.056737 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-logs\") pod \"nova-api-0\" (UID: \"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7\") " pod="openstack/nova-api-0" Mar 12 18:49:44.065563 master-0 kubenswrapper[29097]: I0312 18:49:44.065505 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-config-data\") pod \"nova-api-0\" (UID: \"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7\") " pod="openstack/nova-api-0" Mar 12 18:49:44.073146 master-0 kubenswrapper[29097]: I0312 18:49:44.069230 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7\") " pod="openstack/nova-api-0" Mar 12 18:49:44.294545 master-0 kubenswrapper[29097]: I0312 18:49:44.293778 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrlmt\" (UniqueName: \"kubernetes.io/projected/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-kube-api-access-zrlmt\") pod \"nova-cell1-conductor-db-sync-lfv8d\" (UID: \"b4177d20-cc30-4b7f-872e-2c8692ee6b8e\") " pod="openstack/nova-cell1-conductor-db-sync-lfv8d" Mar 12 18:49:44.294545 master-0 kubenswrapper[29097]: I0312 18:49:44.293866 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpx2l\" (UniqueName: 
\"kubernetes.io/projected/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-kube-api-access-tpx2l\") pod \"nova-api-0\" (UID: \"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7\") " pod="openstack/nova-api-0" Mar 12 18:49:44.364988 master-0 kubenswrapper[29097]: I0312 18:49:44.364366 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 12 18:49:44.376614 master-0 kubenswrapper[29097]: I0312 18:49:44.366586 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 18:49:44.376614 master-0 kubenswrapper[29097]: I0312 18:49:44.369452 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 12 18:49:44.430634 master-0 kubenswrapper[29097]: W0312 18:49:44.426695 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ded41e3_0282_4ae3_871f_46803445326f.slice/crio-bc42d5fdc33d91178405c401744f3d6b1bf557cbc5f595427a08374f11f7b951 WatchSource:0}: Error finding container bc42d5fdc33d91178405c401744f3d6b1bf557cbc5f595427a08374f11f7b951: Status 404 returned error can't find the container with id bc42d5fdc33d91178405c401744f3d6b1bf557cbc5f595427a08374f11f7b951 Mar 12 18:49:44.452603 master-0 kubenswrapper[29097]: I0312 18:49:44.432126 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lfv8d" Mar 12 18:49:44.452603 master-0 kubenswrapper[29097]: I0312 18:49:44.444360 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 18:49:44.455242 master-0 kubenswrapper[29097]: I0312 18:49:44.455118 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 18:49:44.571445 master-0 kubenswrapper[29097]: I0312 18:49:44.566882 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0\") " pod="openstack/nova-metadata-0" Mar 12 18:49:44.581776 master-0 kubenswrapper[29097]: I0312 18:49:44.578103 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-config-data\") pod \"nova-metadata-0\" (UID: \"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0\") " pod="openstack/nova-metadata-0" Mar 12 18:49:44.581776 master-0 kubenswrapper[29097]: I0312 18:49:44.578273 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sjt9\" (UniqueName: \"kubernetes.io/projected/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-kube-api-access-5sjt9\") pod \"nova-metadata-0\" (UID: \"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0\") " pod="openstack/nova-metadata-0" Mar 12 18:49:44.581776 master-0 kubenswrapper[29097]: I0312 18:49:44.578869 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-logs\") pod \"nova-metadata-0\" (UID: \"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0\") " pod="openstack/nova-metadata-0" Mar 12 18:49:44.653930 master-0 kubenswrapper[29097]: I0312 18:49:44.653596 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 18:49:44.668769 master-0 
kubenswrapper[29097]: I0312 18:49:44.668117 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 18:49:44.687193 master-0 kubenswrapper[29097]: I0312 18:49:44.684901 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-config-data\") pod \"nova-metadata-0\" (UID: \"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0\") " pod="openstack/nova-metadata-0" Mar 12 18:49:44.687193 master-0 kubenswrapper[29097]: I0312 18:49:44.684970 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sjt9\" (UniqueName: \"kubernetes.io/projected/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-kube-api-access-5sjt9\") pod \"nova-metadata-0\" (UID: \"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0\") " pod="openstack/nova-metadata-0" Mar 12 18:49:44.687193 master-0 kubenswrapper[29097]: I0312 18:49:44.685087 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-logs\") pod \"nova-metadata-0\" (UID: \"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0\") " pod="openstack/nova-metadata-0" Mar 12 18:49:44.687193 master-0 kubenswrapper[29097]: I0312 18:49:44.685124 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0\") " pod="openstack/nova-metadata-0" Mar 12 18:49:44.687879 master-0 kubenswrapper[29097]: I0312 18:49:44.687773 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-logs\") pod \"nova-metadata-0\" (UID: \"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0\") " 
pod="openstack/nova-metadata-0" Mar 12 18:49:44.703616 master-0 kubenswrapper[29097]: I0312 18:49:44.702189 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-p89p7" event={"ID":"ebb59236-a534-4ed8-9f62-1be13d1bdaf9","Type":"ContainerStarted","Data":"a338cd3f4126eac83bf2810f37a67b2a523536a23e034deb2931cd8a92ed8215"} Mar 12 18:49:44.703616 master-0 kubenswrapper[29097]: I0312 18:49:44.702300 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-p89p7" event={"ID":"ebb59236-a534-4ed8-9f62-1be13d1bdaf9","Type":"ContainerStarted","Data":"65e177e295981aa50e51566e71ae15bdf2d9e9380109cfbd6efb25d75edbed44"} Mar 12 18:49:44.707583 master-0 kubenswrapper[29097]: I0312 18:49:44.705994 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0\") " pod="openstack/nova-metadata-0" Mar 12 18:49:44.707583 master-0 kubenswrapper[29097]: I0312 18:49:44.706243 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 12 18:49:44.712177 master-0 kubenswrapper[29097]: I0312 18:49:44.711766 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-config-data\") pod \"nova-metadata-0\" (UID: \"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0\") " pod="openstack/nova-metadata-0" Mar 12 18:49:44.716170 master-0 kubenswrapper[29097]: I0312 18:49:44.713591 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"3ded41e3-0282-4ae3-871f-46803445326f","Type":"ContainerStarted","Data":"bc42d5fdc33d91178405c401744f3d6b1bf557cbc5f595427a08374f11f7b951"} Mar 12 18:49:44.716170 master-0 
kubenswrapper[29097]: I0312 18:49:44.715027 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sjt9\" (UniqueName: \"kubernetes.io/projected/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-kube-api-access-5sjt9\") pod \"nova-metadata-0\" (UID: \"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0\") " pod="openstack/nova-metadata-0" Mar 12 18:49:44.775989 master-0 kubenswrapper[29097]: I0312 18:49:44.775106 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Mar 12 18:49:44.775989 master-0 kubenswrapper[29097]: I0312 18:49:44.775162 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 18:49:44.775989 master-0 kubenswrapper[29097]: I0312 18:49:44.775178 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 18:49:44.778617 master-0 kubenswrapper[29097]: I0312 18:49:44.777857 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:49:44.783071 master-0 kubenswrapper[29097]: I0312 18:49:44.781351 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 12 18:49:44.790227 master-0 kubenswrapper[29097]: I0312 18:49:44.787586 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/528b36cc-fcf4-4ab9-9723-8974a70458fa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"528b36cc-fcf4-4ab9-9723-8974a70458fa\") " pod="openstack/nova-scheduler-0" Mar 12 18:49:44.790227 master-0 kubenswrapper[29097]: I0312 18:49:44.787713 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbgxk\" (UniqueName: \"kubernetes.io/projected/528b36cc-fcf4-4ab9-9723-8974a70458fa-kube-api-access-wbgxk\") pod \"nova-scheduler-0\" (UID: 
\"528b36cc-fcf4-4ab9-9723-8974a70458fa\") " pod="openstack/nova-scheduler-0" Mar 12 18:49:44.790227 master-0 kubenswrapper[29097]: I0312 18:49:44.787856 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/528b36cc-fcf4-4ab9-9723-8974a70458fa-config-data\") pod \"nova-scheduler-0\" (UID: \"528b36cc-fcf4-4ab9-9723-8974a70458fa\") " pod="openstack/nova-scheduler-0" Mar 12 18:49:44.810744 master-0 kubenswrapper[29097]: I0312 18:49:44.810596 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 18:49:44.833227 master-0 kubenswrapper[29097]: I0312 18:49:44.833176 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 18:49:44.895532 master-0 kubenswrapper[29097]: I0312 18:49:44.892797 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/528b36cc-fcf4-4ab9-9723-8974a70458fa-config-data\") pod \"nova-scheduler-0\" (UID: \"528b36cc-fcf4-4ab9-9723-8974a70458fa\") " pod="openstack/nova-scheduler-0" Mar 12 18:49:44.895532 master-0 kubenswrapper[29097]: I0312 18:49:44.892903 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/528b36cc-fcf4-4ab9-9723-8974a70458fa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"528b36cc-fcf4-4ab9-9723-8974a70458fa\") " pod="openstack/nova-scheduler-0" Mar 12 18:49:44.895532 master-0 kubenswrapper[29097]: I0312 18:49:44.893007 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f7mx\" (UniqueName: \"kubernetes.io/projected/a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a-kube-api-access-4f7mx\") pod \"nova-cell1-novncproxy-0\" (UID: \"a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a\") " 
pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:49:44.895532 master-0 kubenswrapper[29097]: I0312 18:49:44.893053 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:49:44.895532 master-0 kubenswrapper[29097]: I0312 18:49:44.893156 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbgxk\" (UniqueName: \"kubernetes.io/projected/528b36cc-fcf4-4ab9-9723-8974a70458fa-kube-api-access-wbgxk\") pod \"nova-scheduler-0\" (UID: \"528b36cc-fcf4-4ab9-9723-8974a70458fa\") " pod="openstack/nova-scheduler-0" Mar 12 18:49:44.895532 master-0 kubenswrapper[29097]: I0312 18:49:44.893220 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:49:44.901736 master-0 kubenswrapper[29097]: I0312 18:49:44.899465 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/528b36cc-fcf4-4ab9-9723-8974a70458fa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"528b36cc-fcf4-4ab9-9723-8974a70458fa\") " pod="openstack/nova-scheduler-0" Mar 12 18:49:44.904228 master-0 kubenswrapper[29097]: I0312 18:49:44.904195 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/528b36cc-fcf4-4ab9-9723-8974a70458fa-config-data\") pod \"nova-scheduler-0\" (UID: \"528b36cc-fcf4-4ab9-9723-8974a70458fa\") " pod="openstack/nova-scheduler-0" Mar 12 
18:49:44.934910 master-0 kubenswrapper[29097]: I0312 18:49:44.934561 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58d8bd468f-79rnx"]
Mar 12 18:49:44.941893 master-0 kubenswrapper[29097]: I0312 18:49:44.936784 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:49:45.047263 master-0 kubenswrapper[29097]: I0312 18:49:44.994775 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f7mx\" (UniqueName: \"kubernetes.io/projected/a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a-kube-api-access-4f7mx\") pod \"nova-cell1-novncproxy-0\" (UID: \"a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 18:49:45.047263 master-0 kubenswrapper[29097]: I0312 18:49:44.994849 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 18:49:45.047263 master-0 kubenswrapper[29097]: I0312 18:49:44.994926 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 18:49:45.047263 master-0 kubenswrapper[29097]: I0312 18:49:45.006175 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 18:49:45.047263 master-0 kubenswrapper[29097]: I0312 18:49:45.013172 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 18:49:45.047263 master-0 kubenswrapper[29097]: I0312 18:49:45.018143 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbgxk\" (UniqueName: \"kubernetes.io/projected/528b36cc-fcf4-4ab9-9723-8974a70458fa-kube-api-access-wbgxk\") pod \"nova-scheduler-0\" (UID: \"528b36cc-fcf4-4ab9-9723-8974a70458fa\") " pod="openstack/nova-scheduler-0"
Mar 12 18:49:45.047263 master-0 kubenswrapper[29097]: I0312 18:49:45.026294 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f7mx\" (UniqueName: \"kubernetes.io/projected/a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a-kube-api-access-4f7mx\") pod \"nova-cell1-novncproxy-0\" (UID: \"a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 18:49:45.047263 master-0 kubenswrapper[29097]: I0312 18:49:45.029353 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 12 18:49:45.071721 master-0 kubenswrapper[29097]: I0312 18:49:45.071284 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58d8bd468f-79rnx"]
Mar 12 18:49:45.101547 master-0 kubenswrapper[29097]: I0312 18:49:45.099414 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-dns-swift-storage-0\") pod \"dnsmasq-dns-58d8bd468f-79rnx\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") " pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:49:45.101547 master-0 kubenswrapper[29097]: I0312 18:49:45.099528 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-config\") pod \"dnsmasq-dns-58d8bd468f-79rnx\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") " pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:49:45.101547 master-0 kubenswrapper[29097]: I0312 18:49:45.099603 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-dns-svc\") pod \"dnsmasq-dns-58d8bd468f-79rnx\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") " pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:49:45.101547 master-0 kubenswrapper[29097]: I0312 18:49:45.099630 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-ovsdbserver-nb\") pod \"dnsmasq-dns-58d8bd468f-79rnx\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") " pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:49:45.101547 master-0 kubenswrapper[29097]: I0312 18:49:45.099663 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xtb7\" (UniqueName: \"kubernetes.io/projected/8363492c-c44f-48da-b2a5-b6f83718f64e-kube-api-access-5xtb7\") pod \"dnsmasq-dns-58d8bd468f-79rnx\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") " pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:49:45.101547 master-0 kubenswrapper[29097]: I0312 18:49:45.099737 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-ovsdbserver-sb\") pod \"dnsmasq-dns-58d8bd468f-79rnx\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") " pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:49:45.192953 master-0 kubenswrapper[29097]: I0312 18:49:45.172305 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 18:49:45.207337 master-0 kubenswrapper[29097]: I0312 18:49:45.205332 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-ovsdbserver-sb\") pod \"dnsmasq-dns-58d8bd468f-79rnx\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") " pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:49:45.207337 master-0 kubenswrapper[29097]: I0312 18:49:45.205417 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-dns-swift-storage-0\") pod \"dnsmasq-dns-58d8bd468f-79rnx\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") " pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:49:45.207337 master-0 kubenswrapper[29097]: I0312 18:49:45.205483 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-config\") pod \"dnsmasq-dns-58d8bd468f-79rnx\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") " pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:49:45.207337 master-0 kubenswrapper[29097]: I0312 18:49:45.205564 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-dns-svc\") pod \"dnsmasq-dns-58d8bd468f-79rnx\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") " pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:49:45.207337 master-0 kubenswrapper[29097]: I0312 18:49:45.205591 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-ovsdbserver-nb\") pod \"dnsmasq-dns-58d8bd468f-79rnx\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") " pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:49:45.207337 master-0 kubenswrapper[29097]: I0312 18:49:45.205629 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xtb7\" (UniqueName: \"kubernetes.io/projected/8363492c-c44f-48da-b2a5-b6f83718f64e-kube-api-access-5xtb7\") pod \"dnsmasq-dns-58d8bd468f-79rnx\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") " pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:49:45.207337 master-0 kubenswrapper[29097]: I0312 18:49:45.206996 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-ovsdbserver-sb\") pod \"dnsmasq-dns-58d8bd468f-79rnx\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") " pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:49:45.215699 master-0 kubenswrapper[29097]: I0312 18:49:45.215568 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-dns-swift-storage-0\") pod \"dnsmasq-dns-58d8bd468f-79rnx\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") " pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:49:45.224768 master-0 kubenswrapper[29097]: I0312 18:49:45.221274 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-ovsdbserver-nb\") pod \"dnsmasq-dns-58d8bd468f-79rnx\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") " pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:49:45.226918 master-0 kubenswrapper[29097]: I0312 18:49:45.226874 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-config\") pod \"dnsmasq-dns-58d8bd468f-79rnx\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") " pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:49:45.229654 master-0 kubenswrapper[29097]: I0312 18:49:45.229621 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-dns-svc\") pod \"dnsmasq-dns-58d8bd468f-79rnx\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") " pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:49:45.261943 master-0 kubenswrapper[29097]: I0312 18:49:45.260575 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xtb7\" (UniqueName: \"kubernetes.io/projected/8363492c-c44f-48da-b2a5-b6f83718f64e-kube-api-access-5xtb7\") pod \"dnsmasq-dns-58d8bd468f-79rnx\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") " pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:49:45.344752 master-0 kubenswrapper[29097]: I0312 18:49:45.344529 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-p89p7" podStartSLOduration=4.34448899 podStartE2EDuration="4.34448899s" podCreationTimestamp="2026-03-12 18:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:49:44.747161863 +0000 UTC m=+1224.301141950" watchObservedRunningTime="2026-03-12 18:49:45.34448899 +0000 UTC m=+1224.898469087"
Mar 12 18:49:45.350251 master-0 kubenswrapper[29097]: I0312 18:49:45.350205 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:49:45.482349 master-0 kubenswrapper[29097]: I0312 18:49:45.473061 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 12 18:49:45.482349 master-0 kubenswrapper[29097]: W0312 18:49:45.480909 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbbcc60a_f9e6_4fc1_bb10_f4416aa4c3b7.slice/crio-66b18f8ba5465d5788303e56b3bf567ffcc955601a151080d806de7e2c528be0 WatchSource:0}: Error finding container 66b18f8ba5465d5788303e56b3bf567ffcc955601a151080d806de7e2c528be0: Status 404 returned error can't find the container with id 66b18f8ba5465d5788303e56b3bf567ffcc955601a151080d806de7e2c528be0
Mar 12 18:49:45.667635 master-0 kubenswrapper[29097]: I0312 18:49:45.665778 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lfv8d"]
Mar 12 18:49:45.775064 master-0 kubenswrapper[29097]: I0312 18:49:45.773356 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lfv8d" event={"ID":"b4177d20-cc30-4b7f-872e-2c8692ee6b8e","Type":"ContainerStarted","Data":"49b7959a4663e18ec213f99c9d616bbdbe5dfc41a1aa877fd7ded6eb1c9ccf88"}
Mar 12 18:49:45.781529 master-0 kubenswrapper[29097]: I0312 18:49:45.780460 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7","Type":"ContainerStarted","Data":"66b18f8ba5465d5788303e56b3bf567ffcc955601a151080d806de7e2c528be0"}
Mar 12 18:49:45.886015 master-0 kubenswrapper[29097]: I0312 18:49:45.884167 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 18:49:46.075891 master-0 kubenswrapper[29097]: I0312 18:49:46.075132 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 12 18:49:46.247031 master-0 kubenswrapper[29097]: I0312 18:49:46.245685 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 12 18:49:46.356636 master-0 kubenswrapper[29097]: I0312 18:49:46.356586 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58d8bd468f-79rnx"]
Mar 12 18:49:46.807211 master-0 kubenswrapper[29097]: I0312 18:49:46.807041 29097 generic.go:334] "Generic (PLEG): container finished" podID="8363492c-c44f-48da-b2a5-b6f83718f64e" containerID="98725e3abc3cd28e83a621ea318ea7db27140b699523eb769d81a2f46112eb93" exitCode=0
Mar 12 18:49:46.807211 master-0 kubenswrapper[29097]: I0312 18:49:46.807115 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58d8bd468f-79rnx" event={"ID":"8363492c-c44f-48da-b2a5-b6f83718f64e","Type":"ContainerDied","Data":"98725e3abc3cd28e83a621ea318ea7db27140b699523eb769d81a2f46112eb93"}
Mar 12 18:49:46.807211 master-0 kubenswrapper[29097]: I0312 18:49:46.807143 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58d8bd468f-79rnx" event={"ID":"8363492c-c44f-48da-b2a5-b6f83718f64e","Type":"ContainerStarted","Data":"0735e7e9986ae7f8e4906d26943e15746dbdb04e601999c71c39b1c12aa77e34"}
Mar 12 18:49:46.816632 master-0 kubenswrapper[29097]: I0312 18:49:46.816580 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a","Type":"ContainerStarted","Data":"ad4818b617904a783f91f138c8c01c8a7bd617ab25c78fe5542ac88674c0cd50"}
Mar 12 18:49:46.819265 master-0 kubenswrapper[29097]: I0312 18:49:46.819216 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0","Type":"ContainerStarted","Data":"f1862339335e231790a80e44d5ac8c0a9d6e2329f223ab86d9eefff3f0faf1f5"}
Mar 12 18:49:46.821251 master-0 kubenswrapper[29097]: I0312 18:49:46.821216 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lfv8d" event={"ID":"b4177d20-cc30-4b7f-872e-2c8692ee6b8e","Type":"ContainerStarted","Data":"9999bf2f9d35d54e538cf17cff061fb4d3f3a45842e487d1a951c10f3e222dd5"}
Mar 12 18:49:46.828220 master-0 kubenswrapper[29097]: I0312 18:49:46.827280 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"528b36cc-fcf4-4ab9-9723-8974a70458fa","Type":"ContainerStarted","Data":"ad05b63c96564f4f8c19ab9bdd5f1ba5e69f9776b719604e8d71b463ed597bda"}
Mar 12 18:49:46.923142 master-0 kubenswrapper[29097]: I0312 18:49:46.923061 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-lfv8d" podStartSLOduration=4.922926168 podStartE2EDuration="4.922926168s" podCreationTimestamp="2026-03-12 18:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:49:46.86801615 +0000 UTC m=+1226.421996257" watchObservedRunningTime="2026-03-12 18:49:46.922926168 +0000 UTC m=+1226.476906265"
Mar 12 18:49:47.878074 master-0 kubenswrapper[29097]: I0312 18:49:47.877987 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58d8bd468f-79rnx" event={"ID":"8363492c-c44f-48da-b2a5-b6f83718f64e","Type":"ContainerStarted","Data":"5a257d067e60f425b0627f1e96dd4950d73102a1f6be1387bfdc014b23faa944"}
Mar 12 18:49:47.878619 master-0 kubenswrapper[29097]: I0312 18:49:47.878167 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:49:47.898757 master-0 kubenswrapper[29097]: I0312 18:49:47.898676 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58d8bd468f-79rnx" podStartSLOduration=3.898661417 podStartE2EDuration="3.898661417s" podCreationTimestamp="2026-03-12 18:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:49:47.896855743 +0000 UTC m=+1227.450835850" watchObservedRunningTime="2026-03-12 18:49:47.898661417 +0000 UTC m=+1227.452641514"
Mar 12 18:49:48.454075 master-0 kubenswrapper[29097]: I0312 18:49:48.453996 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 18:49:48.506276 master-0 kubenswrapper[29097]: I0312 18:49:48.505644 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 12 18:49:51.926273 master-0 kubenswrapper[29097]: I0312 18:49:51.926189 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7","Type":"ContainerStarted","Data":"fc4b5285fa64750c63fb337cc8a2961735cc483e198eedab2ace1691a3f129bc"}
Mar 12 18:49:51.926273 master-0 kubenswrapper[29097]: I0312 18:49:51.926270 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7","Type":"ContainerStarted","Data":"1b71d5e934cf86261ca5ad3a501cf664777fd060a8a51a72aebd292020b287c1"}
Mar 12 18:49:51.929064 master-0 kubenswrapper[29097]: I0312 18:49:51.929019 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"528b36cc-fcf4-4ab9-9723-8974a70458fa","Type":"ContainerStarted","Data":"d88f0e93f9a8e877d7b8acc4a3470c0c61e85839d8013057558b5735ad92c013"}
Mar 12 18:49:51.931318 master-0 kubenswrapper[29097]: I0312 18:49:51.931289 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a","Type":"ContainerStarted","Data":"d931809c5cd7037170a35fb8841a94a5210a157e8a5dcec597397a98da5e0388"}
Mar 12 18:49:51.931456 master-0 kubenswrapper[29097]: I0312 18:49:51.931305 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://d931809c5cd7037170a35fb8841a94a5210a157e8a5dcec597397a98da5e0388" gracePeriod=30
Mar 12 18:49:51.939677 master-0 kubenswrapper[29097]: I0312 18:49:51.939307 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0","Type":"ContainerStarted","Data":"89df0282e6f46897a9c42a14b3fd9091d4b76b650ca5304c0e9b91f7af202e93"}
Mar 12 18:49:51.939677 master-0 kubenswrapper[29097]: I0312 18:49:51.939358 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0","Type":"ContainerStarted","Data":"8f421ab22bd63041b23ab5384f38678c4b524dc94027b48b2ed0c84cd9c7fff6"}
Mar 12 18:49:51.939677 master-0 kubenswrapper[29097]: I0312 18:49:51.939432 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9c72ae11-ab3b-4eb4-b46e-9c6505de9df0" containerName="nova-metadata-metadata" containerID="cri-o://89df0282e6f46897a9c42a14b3fd9091d4b76b650ca5304c0e9b91f7af202e93" gracePeriod=30
Mar 12 18:49:51.939677 master-0 kubenswrapper[29097]: I0312 18:49:51.939429 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9c72ae11-ab3b-4eb4-b46e-9c6505de9df0" containerName="nova-metadata-log" containerID="cri-o://8f421ab22bd63041b23ab5384f38678c4b524dc94027b48b2ed0c84cd9c7fff6" gracePeriod=30
Mar 12 18:49:51.960348 master-0 kubenswrapper[29097]: I0312 18:49:51.960073 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.8610571780000003 podStartE2EDuration="8.960052501s" podCreationTimestamp="2026-03-12 18:49:43 +0000 UTC" firstStartedPulling="2026-03-12 18:49:45.502254533 +0000 UTC m=+1225.056234630" lastFinishedPulling="2026-03-12 18:49:50.601249856 +0000 UTC m=+1230.155229953" observedRunningTime="2026-03-12 18:49:51.946048545 +0000 UTC m=+1231.500028642" watchObservedRunningTime="2026-03-12 18:49:51.960052501 +0000 UTC m=+1231.514032608"
Mar 12 18:49:52.002664 master-0 kubenswrapper[29097]: I0312 18:49:52.002055 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.611053392 podStartE2EDuration="8.00203697s" podCreationTimestamp="2026-03-12 18:49:44 +0000 UTC" firstStartedPulling="2026-03-12 18:49:46.220611674 +0000 UTC m=+1225.774591761" lastFinishedPulling="2026-03-12 18:49:50.611595242 +0000 UTC m=+1230.165575339" observedRunningTime="2026-03-12 18:49:51.989585562 +0000 UTC m=+1231.543565659" watchObservedRunningTime="2026-03-12 18:49:52.00203697 +0000 UTC m=+1231.556017067"
Mar 12 18:49:52.007960 master-0 kubenswrapper[29097]: I0312 18:49:52.007502 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.43152526 podStartE2EDuration="8.007478744s" podCreationTimestamp="2026-03-12 18:49:44 +0000 UTC" firstStartedPulling="2026-03-12 18:49:46.0446386 +0000 UTC m=+1225.598618697" lastFinishedPulling="2026-03-12 18:49:50.620592084 +0000 UTC m=+1230.174572181" observedRunningTime="2026-03-12 18:49:51.970010518 +0000 UTC m=+1231.523990615" watchObservedRunningTime="2026-03-12 18:49:52.007478744 +0000 UTC m=+1231.561458841"
Mar 12 18:49:52.020324 master-0 kubenswrapper[29097]: I0312 18:49:52.017593 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.359188731 podStartE2EDuration="9.017570764s" podCreationTimestamp="2026-03-12 18:49:43 +0000 UTC" firstStartedPulling="2026-03-12 18:49:45.952028419 +0000 UTC m=+1225.506008516" lastFinishedPulling="2026-03-12 18:49:50.610410452 +0000 UTC m=+1230.164390549" observedRunningTime="2026-03-12 18:49:52.014657512 +0000 UTC m=+1231.568637629" watchObservedRunningTime="2026-03-12 18:49:52.017570764 +0000 UTC m=+1231.571550861"
Mar 12 18:49:52.957847 master-0 kubenswrapper[29097]: I0312 18:49:52.957782 29097 generic.go:334] "Generic (PLEG): container finished" podID="9c72ae11-ab3b-4eb4-b46e-9c6505de9df0" containerID="89df0282e6f46897a9c42a14b3fd9091d4b76b650ca5304c0e9b91f7af202e93" exitCode=0
Mar 12 18:49:52.957847 master-0 kubenswrapper[29097]: I0312 18:49:52.957836 29097 generic.go:334] "Generic (PLEG): container finished" podID="9c72ae11-ab3b-4eb4-b46e-9c6505de9df0" containerID="8f421ab22bd63041b23ab5384f38678c4b524dc94027b48b2ed0c84cd9c7fff6" exitCode=143
Mar 12 18:49:52.959264 master-0 kubenswrapper[29097]: I0312 18:49:52.959232 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0","Type":"ContainerDied","Data":"89df0282e6f46897a9c42a14b3fd9091d4b76b650ca5304c0e9b91f7af202e93"}
Mar 12 18:49:52.959334 master-0 kubenswrapper[29097]: I0312 18:49:52.959269 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0","Type":"ContainerDied","Data":"8f421ab22bd63041b23ab5384f38678c4b524dc94027b48b2ed0c84cd9c7fff6"}
Mar 12 18:49:54.445937 master-0 kubenswrapper[29097]: I0312 18:49:54.445289 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 12 18:49:54.445937 master-0 kubenswrapper[29097]: I0312 18:49:54.445355 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 12 18:49:54.834387 master-0 kubenswrapper[29097]: I0312 18:49:54.834326 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 12 18:49:54.834387 master-0 kubenswrapper[29097]: I0312 18:49:54.834381 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 12 18:49:55.031208 master-0 kubenswrapper[29097]: I0312 18:49:55.031123 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Mar 12 18:49:55.031208 master-0 kubenswrapper[29097]: I0312 18:49:55.031179 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Mar 12 18:49:55.073412 master-0 kubenswrapper[29097]: I0312 18:49:55.073207 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Mar 12 18:49:55.173097 master-0 kubenswrapper[29097]: I0312 18:49:55.172917 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 18:49:55.352401 master-0 kubenswrapper[29097]: I0312 18:49:55.352323 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:49:55.497286 master-0 kubenswrapper[29097]: I0312 18:49:55.497113 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.4:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:49:55.538882 master-0 kubenswrapper[29097]: I0312 18:49:55.538807 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.4:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:49:56.053644 master-0 kubenswrapper[29097]: I0312 18:49:56.053575 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Mar 12 18:49:56.549058 master-0 kubenswrapper[29097]: I0312 18:49:56.543831 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cb659fff7-7fq4q"]
Mar 12 18:49:56.549058 master-0 kubenswrapper[29097]: I0312 18:49:56.544111 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" podUID="165a6ae9-40c5-4e7c-891c-699ba500e189" containerName="dnsmasq-dns" containerID="cri-o://c2d56974df3ea9b38100073bd35aa998fa097a98623a69cdc83688fcee12cf05" gracePeriod=10
Mar 12 18:49:57.576680 master-0 kubenswrapper[29097]: I0312 18:49:57.576591 29097 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" podUID="165a6ae9-40c5-4e7c-891c-699ba500e189" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.253:5353: connect: connection refused"
Mar 12 18:49:58.952656 master-0 kubenswrapper[29097]: I0312 18:49:58.952505 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 12 18:49:59.039069 master-0 kubenswrapper[29097]: I0312 18:49:59.038964 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-combined-ca-bundle\") pod \"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0\" (UID: \"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0\") "
Mar 12 18:49:59.039298 master-0 kubenswrapper[29097]: I0312 18:49:59.039236 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-config-data\") pod \"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0\" (UID: \"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0\") "
Mar 12 18:49:59.039367 master-0 kubenswrapper[29097]: I0312 18:49:59.039341 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-logs\") pod \"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0\" (UID: \"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0\") "
Mar 12 18:49:59.039438 master-0 kubenswrapper[29097]: I0312 18:49:59.039421 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sjt9\" (UniqueName: \"kubernetes.io/projected/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-kube-api-access-5sjt9\") pod \"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0\" (UID: \"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0\") "
Mar 12 18:49:59.039679 master-0 kubenswrapper[29097]: I0312 18:49:59.039590 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-logs" (OuterVolumeSpecName: "logs") pod "9c72ae11-ab3b-4eb4-b46e-9c6505de9df0" (UID: "9c72ae11-ab3b-4eb4-b46e-9c6505de9df0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 18:49:59.043927 master-0 kubenswrapper[29097]: I0312 18:49:59.043896 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-kube-api-access-5sjt9" (OuterVolumeSpecName: "kube-api-access-5sjt9") pod "9c72ae11-ab3b-4eb4-b46e-9c6505de9df0" (UID: "9c72ae11-ab3b-4eb4-b46e-9c6505de9df0"). InnerVolumeSpecName "kube-api-access-5sjt9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:49:59.045060 master-0 kubenswrapper[29097]: I0312 18:49:59.045025 29097 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-logs\") on node \"master-0\" DevicePath \"\""
Mar 12 18:49:59.045060 master-0 kubenswrapper[29097]: I0312 18:49:59.045056 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sjt9\" (UniqueName: \"kubernetes.io/projected/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-kube-api-access-5sjt9\") on node \"master-0\" DevicePath \"\""
Mar 12 18:49:59.061628 master-0 kubenswrapper[29097]: I0312 18:49:59.061565 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"3ded41e3-0282-4ae3-871f-46803445326f","Type":"ContainerStarted","Data":"dbefd0a76d7181b6fd718832623f4ca708f3445eadd52232ddcbd2c1a477c0da"}
Mar 12 18:49:59.062674 master-0 kubenswrapper[29097]: I0312 18:49:59.062501 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 12 18:49:59.068791 master-0 kubenswrapper[29097]: I0312 18:49:59.068744 29097 generic.go:334] "Generic (PLEG): container finished" podID="ebb59236-a534-4ed8-9f62-1be13d1bdaf9" containerID="a338cd3f4126eac83bf2810f37a67b2a523536a23e034deb2931cd8a92ed8215" exitCode=0
Mar 12 18:49:59.068874 master-0 kubenswrapper[29097]: I0312 18:49:59.068778 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-p89p7" event={"ID":"ebb59236-a534-4ed8-9f62-1be13d1bdaf9","Type":"ContainerDied","Data":"a338cd3f4126eac83bf2810f37a67b2a523536a23e034deb2931cd8a92ed8215"}
Mar 12 18:49:59.076545 master-0 kubenswrapper[29097]: I0312 18:49:59.076099 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9c72ae11-ab3b-4eb4-b46e-9c6505de9df0","Type":"ContainerDied","Data":"f1862339335e231790a80e44d5ac8c0a9d6e2329f223ab86d9eefff3f0faf1f5"}
Mar 12 18:49:59.076545 master-0 kubenswrapper[29097]: I0312 18:49:59.076153 29097 scope.go:117] "RemoveContainer" containerID="89df0282e6f46897a9c42a14b3fd9091d4b76b650ca5304c0e9b91f7af202e93"
Mar 12 18:49:59.076545 master-0 kubenswrapper[29097]: I0312 18:49:59.076260 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 12 18:49:59.081679 master-0 kubenswrapper[29097]: I0312 18:49:59.081636 29097 generic.go:334] "Generic (PLEG): container finished" podID="7fb8fbc7-949d-4526-8456-fbf8277cee2f" containerID="832563a1e62d5d4b6180a20d5980336063fb49495b67923c8a4e7ffc2b268996" exitCode=0
Mar 12 18:49:59.081679 master-0 kubenswrapper[29097]: I0312 18:49:59.081680 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"7fb8fbc7-949d-4526-8456-fbf8277cee2f","Type":"ContainerDied","Data":"832563a1e62d5d4b6180a20d5980336063fb49495b67923c8a4e7ffc2b268996"}
Mar 12 18:49:59.083274 master-0 kubenswrapper[29097]: I0312 18:49:59.083242 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c72ae11-ab3b-4eb4-b46e-9c6505de9df0" (UID: "9c72ae11-ab3b-4eb4-b46e-9c6505de9df0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:49:59.085173 master-0 kubenswrapper[29097]: I0312 18:49:59.085144 29097 generic.go:334] "Generic (PLEG): container finished" podID="b4177d20-cc30-4b7f-872e-2c8692ee6b8e" containerID="9999bf2f9d35d54e538cf17cff061fb4d3f3a45842e487d1a951c10f3e222dd5" exitCode=0
Mar 12 18:49:59.085269 master-0 kubenswrapper[29097]: I0312 18:49:59.085225 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lfv8d" event={"ID":"b4177d20-cc30-4b7f-872e-2c8692ee6b8e","Type":"ContainerDied","Data":"9999bf2f9d35d54e538cf17cff061fb4d3f3a45842e487d1a951c10f3e222dd5"}
Mar 12 18:49:59.088081 master-0 kubenswrapper[29097]: I0312 18:49:59.087870 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-config-data" (OuterVolumeSpecName: "config-data") pod "9c72ae11-ab3b-4eb4-b46e-9c6505de9df0" (UID: "9c72ae11-ab3b-4eb4-b46e-9c6505de9df0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:49:59.088314 master-0 kubenswrapper[29097]: I0312 18:49:59.088275 29097 generic.go:334] "Generic (PLEG): container finished" podID="165a6ae9-40c5-4e7c-891c-699ba500e189" containerID="c2d56974df3ea9b38100073bd35aa998fa097a98623a69cdc83688fcee12cf05" exitCode=0
Mar 12 18:49:59.088314 master-0 kubenswrapper[29097]: I0312 18:49:59.088309 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" event={"ID":"165a6ae9-40c5-4e7c-891c-699ba500e189","Type":"ContainerDied","Data":"c2d56974df3ea9b38100073bd35aa998fa097a98623a69cdc83688fcee12cf05"}
Mar 12 18:49:59.092398 master-0 kubenswrapper[29097]: I0312 18:49:59.092363 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q"
Mar 12 18:49:59.097632 master-0 kubenswrapper[29097]: I0312 18:49:59.097590 29097 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-compute-ironic-compute-0" podUID="3ded41e3-0282-4ae3-871f-46803445326f" containerName="nova-cell1-compute-ironic-compute-compute" probeResult="failure" output=""
Mar 12 18:49:59.108755 master-0 kubenswrapper[29097]: I0312 18:49:59.108622 29097 scope.go:117] "RemoveContainer" containerID="8f421ab22bd63041b23ab5384f38678c4b524dc94027b48b2ed0c84cd9c7fff6"
Mar 12 18:49:59.147484 master-0 kubenswrapper[29097]: I0312 18:49:59.147446 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 18:49:59.147702 master-0 kubenswrapper[29097]: I0312 18:49:59.147686 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 18:49:59.352385 master-0 kubenswrapper[29097]: I0312 18:49:59.351929 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-config\") pod \"165a6ae9-40c5-4e7c-891c-699ba500e189\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") "
Mar 12 18:49:59.352385 master-0 kubenswrapper[29097]: I0312 18:49:59.352112 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhnkg\" (UniqueName: \"kubernetes.io/projected/165a6ae9-40c5-4e7c-891c-699ba500e189-kube-api-access-xhnkg\") pod \"165a6ae9-40c5-4e7c-891c-699ba500e189\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") "
Mar 12 18:49:59.352494 master-0 kubenswrapper[29097]: I0312 18:49:59.352471 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-ovsdbserver-nb\") pod \"165a6ae9-40c5-4e7c-891c-699ba500e189\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") "
Mar 12 18:49:59.352540 master-0 kubenswrapper[29097]: I0312 18:49:59.352496 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-dns-svc\") pod \"165a6ae9-40c5-4e7c-891c-699ba500e189\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") "
Mar 12 18:49:59.352578 master-0 kubenswrapper[29097]: I0312 18:49:59.352569 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-ovsdbserver-sb\") pod \"165a6ae9-40c5-4e7c-891c-699ba500e189\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") "
Mar 12 18:49:59.352695 master-0 kubenswrapper[29097]: I0312 18:49:59.352619 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-dns-swift-storage-0\") pod \"165a6ae9-40c5-4e7c-891c-699ba500e189\" (UID: \"165a6ae9-40c5-4e7c-891c-699ba500e189\") "
Mar 12 18:49:59.390323 master-0 kubenswrapper[29097]: I0312 18:49:59.390198 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-compute-ironic-compute-0" podStartSLOduration=3.134164517 podStartE2EDuration="17.390116032s" podCreationTimestamp="2026-03-12 18:49:42 +0000 UTC" firstStartedPulling="2026-03-12 18:49:44.578545132 +0000 UTC m=+1224.132525229" lastFinishedPulling="2026-03-12 18:49:58.834496646 +0000 UTC m=+1238.388476744" observedRunningTime="2026-03-12 18:49:59.362013457 +0000 UTC m=+1238.915993554" watchObservedRunningTime="2026-03-12 18:49:59.390116032 +0000 UTC
m=+1238.944096139" Mar 12 18:49:59.404860 master-0 kubenswrapper[29097]: I0312 18:49:59.403047 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/165a6ae9-40c5-4e7c-891c-699ba500e189-kube-api-access-xhnkg" (OuterVolumeSpecName: "kube-api-access-xhnkg") pod "165a6ae9-40c5-4e7c-891c-699ba500e189" (UID: "165a6ae9-40c5-4e7c-891c-699ba500e189"). InnerVolumeSpecName "kube-api-access-xhnkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:49:59.453481 master-0 kubenswrapper[29097]: I0312 18:49:59.450719 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "165a6ae9-40c5-4e7c-891c-699ba500e189" (UID: "165a6ae9-40c5-4e7c-891c-699ba500e189"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:49:59.458342 master-0 kubenswrapper[29097]: I0312 18:49:59.458289 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhnkg\" (UniqueName: \"kubernetes.io/projected/165a6ae9-40c5-4e7c-891c-699ba500e189-kube-api-access-xhnkg\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:59.458342 master-0 kubenswrapper[29097]: I0312 18:49:59.458335 29097 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:59.469747 master-0 kubenswrapper[29097]: I0312 18:49:59.462847 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 18:49:59.479358 master-0 kubenswrapper[29097]: I0312 18:49:59.477162 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 18:49:59.480476 master-0 kubenswrapper[29097]: I0312 18:49:59.480382 29097 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "165a6ae9-40c5-4e7c-891c-699ba500e189" (UID: "165a6ae9-40c5-4e7c-891c-699ba500e189"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:49:59.485719 master-0 kubenswrapper[29097]: I0312 18:49:59.484125 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-config" (OuterVolumeSpecName: "config") pod "165a6ae9-40c5-4e7c-891c-699ba500e189" (UID: "165a6ae9-40c5-4e7c-891c-699ba500e189"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:49:59.495649 master-0 kubenswrapper[29097]: I0312 18:49:59.491210 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 12 18:49:59.495649 master-0 kubenswrapper[29097]: E0312 18:49:59.493941 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c72ae11-ab3b-4eb4-b46e-9c6505de9df0" containerName="nova-metadata-metadata" Mar 12 18:49:59.495649 master-0 kubenswrapper[29097]: I0312 18:49:59.493962 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c72ae11-ab3b-4eb4-b46e-9c6505de9df0" containerName="nova-metadata-metadata" Mar 12 18:49:59.495649 master-0 kubenswrapper[29097]: E0312 18:49:59.494011 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c72ae11-ab3b-4eb4-b46e-9c6505de9df0" containerName="nova-metadata-log" Mar 12 18:49:59.495649 master-0 kubenswrapper[29097]: I0312 18:49:59.494018 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c72ae11-ab3b-4eb4-b46e-9c6505de9df0" containerName="nova-metadata-log" Mar 12 18:49:59.495649 master-0 kubenswrapper[29097]: E0312 18:49:59.494027 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="165a6ae9-40c5-4e7c-891c-699ba500e189" 
containerName="dnsmasq-dns" Mar 12 18:49:59.495649 master-0 kubenswrapper[29097]: I0312 18:49:59.494032 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="165a6ae9-40c5-4e7c-891c-699ba500e189" containerName="dnsmasq-dns" Mar 12 18:49:59.495649 master-0 kubenswrapper[29097]: E0312 18:49:59.494049 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="165a6ae9-40c5-4e7c-891c-699ba500e189" containerName="init" Mar 12 18:49:59.495649 master-0 kubenswrapper[29097]: I0312 18:49:59.494056 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="165a6ae9-40c5-4e7c-891c-699ba500e189" containerName="init" Mar 12 18:49:59.495649 master-0 kubenswrapper[29097]: I0312 18:49:59.494254 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c72ae11-ab3b-4eb4-b46e-9c6505de9df0" containerName="nova-metadata-metadata" Mar 12 18:49:59.495649 master-0 kubenswrapper[29097]: I0312 18:49:59.494309 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c72ae11-ab3b-4eb4-b46e-9c6505de9df0" containerName="nova-metadata-log" Mar 12 18:49:59.495649 master-0 kubenswrapper[29097]: I0312 18:49:59.494325 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="165a6ae9-40c5-4e7c-891c-699ba500e189" containerName="dnsmasq-dns" Mar 12 18:49:59.498658 master-0 kubenswrapper[29097]: I0312 18:49:59.496071 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 18:49:59.500941 master-0 kubenswrapper[29097]: I0312 18:49:59.498717 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 12 18:49:59.500941 master-0 kubenswrapper[29097]: I0312 18:49:59.498946 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 12 18:49:59.530363 master-0 kubenswrapper[29097]: I0312 18:49:59.530311 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "165a6ae9-40c5-4e7c-891c-699ba500e189" (UID: "165a6ae9-40c5-4e7c-891c-699ba500e189"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:49:59.545268 master-0 kubenswrapper[29097]: I0312 18:49:59.544799 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 18:49:59.561636 master-0 kubenswrapper[29097]: I0312 18:49:59.561594 29097 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:59.561636 master-0 kubenswrapper[29097]: I0312 18:49:59.561629 29097 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:59.561636 master-0 kubenswrapper[29097]: I0312 18:49:59.561639 29097 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:59.586557 master-0 kubenswrapper[29097]: I0312 18:49:59.585475 29097 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "165a6ae9-40c5-4e7c-891c-699ba500e189" (UID: "165a6ae9-40c5-4e7c-891c-699ba500e189"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:49:59.663322 master-0 kubenswrapper[29097]: I0312 18:49:59.663182 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1991af-1c66-45fe-beeb-42c620bc4836-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6f1991af-1c66-45fe-beeb-42c620bc4836\") " pod="openstack/nova-metadata-0" Mar 12 18:49:59.663322 master-0 kubenswrapper[29097]: I0312 18:49:59.663247 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1991af-1c66-45fe-beeb-42c620bc4836-config-data\") pod \"nova-metadata-0\" (UID: \"6f1991af-1c66-45fe-beeb-42c620bc4836\") " pod="openstack/nova-metadata-0" Mar 12 18:49:59.663680 master-0 kubenswrapper[29097]: I0312 18:49:59.663333 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f1991af-1c66-45fe-beeb-42c620bc4836-logs\") pod \"nova-metadata-0\" (UID: \"6f1991af-1c66-45fe-beeb-42c620bc4836\") " pod="openstack/nova-metadata-0" Mar 12 18:49:59.663680 master-0 kubenswrapper[29097]: I0312 18:49:59.663361 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f1991af-1c66-45fe-beeb-42c620bc4836-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6f1991af-1c66-45fe-beeb-42c620bc4836\") " pod="openstack/nova-metadata-0" Mar 12 18:49:59.663680 master-0 kubenswrapper[29097]: I0312 
18:49:59.663479 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5548d\" (UniqueName: \"kubernetes.io/projected/6f1991af-1c66-45fe-beeb-42c620bc4836-kube-api-access-5548d\") pod \"nova-metadata-0\" (UID: \"6f1991af-1c66-45fe-beeb-42c620bc4836\") " pod="openstack/nova-metadata-0" Mar 12 18:49:59.663846 master-0 kubenswrapper[29097]: I0312 18:49:59.663777 29097 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/165a6ae9-40c5-4e7c-891c-699ba500e189-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 12 18:49:59.770717 master-0 kubenswrapper[29097]: I0312 18:49:59.770480 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1991af-1c66-45fe-beeb-42c620bc4836-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6f1991af-1c66-45fe-beeb-42c620bc4836\") " pod="openstack/nova-metadata-0" Mar 12 18:49:59.770717 master-0 kubenswrapper[29097]: I0312 18:49:59.770599 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1991af-1c66-45fe-beeb-42c620bc4836-config-data\") pod \"nova-metadata-0\" (UID: \"6f1991af-1c66-45fe-beeb-42c620bc4836\") " pod="openstack/nova-metadata-0" Mar 12 18:49:59.770938 master-0 kubenswrapper[29097]: I0312 18:49:59.770734 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f1991af-1c66-45fe-beeb-42c620bc4836-logs\") pod \"nova-metadata-0\" (UID: \"6f1991af-1c66-45fe-beeb-42c620bc4836\") " pod="openstack/nova-metadata-0" Mar 12 18:49:59.770938 master-0 kubenswrapper[29097]: I0312 18:49:59.770781 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6f1991af-1c66-45fe-beeb-42c620bc4836-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6f1991af-1c66-45fe-beeb-42c620bc4836\") " pod="openstack/nova-metadata-0" Mar 12 18:49:59.771387 master-0 kubenswrapper[29097]: I0312 18:49:59.771037 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5548d\" (UniqueName: \"kubernetes.io/projected/6f1991af-1c66-45fe-beeb-42c620bc4836-kube-api-access-5548d\") pod \"nova-metadata-0\" (UID: \"6f1991af-1c66-45fe-beeb-42c620bc4836\") " pod="openstack/nova-metadata-0" Mar 12 18:49:59.773796 master-0 kubenswrapper[29097]: I0312 18:49:59.773761 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f1991af-1c66-45fe-beeb-42c620bc4836-logs\") pod \"nova-metadata-0\" (UID: \"6f1991af-1c66-45fe-beeb-42c620bc4836\") " pod="openstack/nova-metadata-0" Mar 12 18:49:59.789947 master-0 kubenswrapper[29097]: I0312 18:49:59.789866 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1991af-1c66-45fe-beeb-42c620bc4836-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6f1991af-1c66-45fe-beeb-42c620bc4836\") " pod="openstack/nova-metadata-0" Mar 12 18:49:59.791325 master-0 kubenswrapper[29097]: I0312 18:49:59.791239 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1991af-1c66-45fe-beeb-42c620bc4836-config-data\") pod \"nova-metadata-0\" (UID: \"6f1991af-1c66-45fe-beeb-42c620bc4836\") " pod="openstack/nova-metadata-0" Mar 12 18:49:59.793484 master-0 kubenswrapper[29097]: I0312 18:49:59.793446 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5548d\" (UniqueName: \"kubernetes.io/projected/6f1991af-1c66-45fe-beeb-42c620bc4836-kube-api-access-5548d\") pod \"nova-metadata-0\" (UID: 
\"6f1991af-1c66-45fe-beeb-42c620bc4836\") " pod="openstack/nova-metadata-0" Mar 12 18:49:59.795376 master-0 kubenswrapper[29097]: I0312 18:49:59.795318 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f1991af-1c66-45fe-beeb-42c620bc4836-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6f1991af-1c66-45fe-beeb-42c620bc4836\") " pod="openstack/nova-metadata-0" Mar 12 18:49:59.851386 master-0 kubenswrapper[29097]: I0312 18:49:59.850790 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 18:50:00.139991 master-0 kubenswrapper[29097]: I0312 18:50:00.139960 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" Mar 12 18:50:00.140472 master-0 kubenswrapper[29097]: I0312 18:50:00.140127 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cb659fff7-7fq4q" event={"ID":"165a6ae9-40c5-4e7c-891c-699ba500e189","Type":"ContainerDied","Data":"66286a08b8f17c57649d1476ac336dfda966ef041fe9d4d74d00536deba6a74b"} Mar 12 18:50:00.140472 master-0 kubenswrapper[29097]: I0312 18:50:00.140175 29097 scope.go:117] "RemoveContainer" containerID="c2d56974df3ea9b38100073bd35aa998fa097a98623a69cdc83688fcee12cf05" Mar 12 18:50:00.150240 master-0 kubenswrapper[29097]: I0312 18:50:00.149999 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"7fb8fbc7-949d-4526-8456-fbf8277cee2f","Type":"ContainerStarted","Data":"8db9032e93bea0211c6dc97832b9b10e2a250defe72eff43b02d5988d608e917"} Mar 12 18:50:00.168645 master-0 kubenswrapper[29097]: I0312 18:50:00.167792 29097 scope.go:117] "RemoveContainer" containerID="2907ff8598bc263162a6b38451d63c3d5fb481dfd569bd90213336af6de27e5c" Mar 12 18:50:00.199811 master-0 kubenswrapper[29097]: I0312 18:50:00.198757 29097 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-5cb659fff7-7fq4q"] Mar 12 18:50:00.207145 master-0 kubenswrapper[29097]: I0312 18:50:00.206561 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 18:50:00.221641 master-0 kubenswrapper[29097]: I0312 18:50:00.220218 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cb659fff7-7fq4q"] Mar 12 18:50:00.332300 master-0 kubenswrapper[29097]: I0312 18:50:00.332166 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 18:50:00.337576 master-0 kubenswrapper[29097]: W0312 18:50:00.337419 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f1991af_1c66_45fe_beeb_42c620bc4836.slice/crio-ac1a096ed9c7ea9a548d46f769472c9ac812cc71abb2d3616f381cc1abd7fe3b WatchSource:0}: Error finding container ac1a096ed9c7ea9a548d46f769472c9ac812cc71abb2d3616f381cc1abd7fe3b: Status 404 returned error can't find the container with id ac1a096ed9c7ea9a548d46f769472c9ac812cc71abb2d3616f381cc1abd7fe3b Mar 12 18:50:00.652869 master-0 kubenswrapper[29097]: I0312 18:50:00.652835 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-p89p7" Mar 12 18:50:00.755392 master-0 kubenswrapper[29097]: I0312 18:50:00.754727 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="165a6ae9-40c5-4e7c-891c-699ba500e189" path="/var/lib/kubelet/pods/165a6ae9-40c5-4e7c-891c-699ba500e189/volumes" Mar 12 18:50:00.759218 master-0 kubenswrapper[29097]: I0312 18:50:00.758792 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c72ae11-ab3b-4eb4-b46e-9c6505de9df0" path="/var/lib/kubelet/pods/9c72ae11-ab3b-4eb4-b46e-9c6505de9df0/volumes" Mar 12 18:50:00.812847 master-0 kubenswrapper[29097]: I0312 18:50:00.798227 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-scripts\") pod \"ebb59236-a534-4ed8-9f62-1be13d1bdaf9\" (UID: \"ebb59236-a534-4ed8-9f62-1be13d1bdaf9\") " Mar 12 18:50:00.812847 master-0 kubenswrapper[29097]: I0312 18:50:00.798429 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz4sp\" (UniqueName: \"kubernetes.io/projected/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-kube-api-access-mz4sp\") pod \"ebb59236-a534-4ed8-9f62-1be13d1bdaf9\" (UID: \"ebb59236-a534-4ed8-9f62-1be13d1bdaf9\") " Mar 12 18:50:00.812847 master-0 kubenswrapper[29097]: I0312 18:50:00.798547 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-config-data\") pod \"ebb59236-a534-4ed8-9f62-1be13d1bdaf9\" (UID: \"ebb59236-a534-4ed8-9f62-1be13d1bdaf9\") " Mar 12 18:50:00.812847 master-0 kubenswrapper[29097]: I0312 18:50:00.798706 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-combined-ca-bundle\") pod 
\"ebb59236-a534-4ed8-9f62-1be13d1bdaf9\" (UID: \"ebb59236-a534-4ed8-9f62-1be13d1bdaf9\") " Mar 12 18:50:00.855119 master-0 kubenswrapper[29097]: I0312 18:50:00.854708 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-scripts" (OuterVolumeSpecName: "scripts") pod "ebb59236-a534-4ed8-9f62-1be13d1bdaf9" (UID: "ebb59236-a534-4ed8-9f62-1be13d1bdaf9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:50:00.855350 master-0 kubenswrapper[29097]: I0312 18:50:00.855293 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-kube-api-access-mz4sp" (OuterVolumeSpecName: "kube-api-access-mz4sp") pod "ebb59236-a534-4ed8-9f62-1be13d1bdaf9" (UID: "ebb59236-a534-4ed8-9f62-1be13d1bdaf9"). InnerVolumeSpecName "kube-api-access-mz4sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:50:00.874607 master-0 kubenswrapper[29097]: I0312 18:50:00.866675 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebb59236-a534-4ed8-9f62-1be13d1bdaf9" (UID: "ebb59236-a534-4ed8-9f62-1be13d1bdaf9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:50:00.906570 master-0 kubenswrapper[29097]: I0312 18:50:00.901635 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:00.906570 master-0 kubenswrapper[29097]: I0312 18:50:00.901676 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz4sp\" (UniqueName: \"kubernetes.io/projected/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-kube-api-access-mz4sp\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:00.906570 master-0 kubenswrapper[29097]: I0312 18:50:00.901688 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:00.944559 master-0 kubenswrapper[29097]: I0312 18:50:00.943721 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-config-data" (OuterVolumeSpecName: "config-data") pod "ebb59236-a534-4ed8-9f62-1be13d1bdaf9" (UID: "ebb59236-a534-4ed8-9f62-1be13d1bdaf9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:50:01.005811 master-0 kubenswrapper[29097]: I0312 18:50:01.005766 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lfv8d" Mar 12 18:50:01.008874 master-0 kubenswrapper[29097]: I0312 18:50:01.008305 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebb59236-a534-4ed8-9f62-1be13d1bdaf9-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:01.110265 master-0 kubenswrapper[29097]: I0312 18:50:01.110206 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-scripts\") pod \"b4177d20-cc30-4b7f-872e-2c8692ee6b8e\" (UID: \"b4177d20-cc30-4b7f-872e-2c8692ee6b8e\") " Mar 12 18:50:01.110564 master-0 kubenswrapper[29097]: I0312 18:50:01.110547 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrlmt\" (UniqueName: \"kubernetes.io/projected/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-kube-api-access-zrlmt\") pod \"b4177d20-cc30-4b7f-872e-2c8692ee6b8e\" (UID: \"b4177d20-cc30-4b7f-872e-2c8692ee6b8e\") " Mar 12 18:50:01.110743 master-0 kubenswrapper[29097]: I0312 18:50:01.110725 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-config-data\") pod \"b4177d20-cc30-4b7f-872e-2c8692ee6b8e\" (UID: \"b4177d20-cc30-4b7f-872e-2c8692ee6b8e\") " Mar 12 18:50:01.110871 master-0 kubenswrapper[29097]: I0312 18:50:01.110855 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-combined-ca-bundle\") pod \"b4177d20-cc30-4b7f-872e-2c8692ee6b8e\" (UID: \"b4177d20-cc30-4b7f-872e-2c8692ee6b8e\") " Mar 12 18:50:01.112866 master-0 kubenswrapper[29097]: I0312 18:50:01.112822 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-scripts" (OuterVolumeSpecName: "scripts") pod "b4177d20-cc30-4b7f-872e-2c8692ee6b8e" (UID: "b4177d20-cc30-4b7f-872e-2c8692ee6b8e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:50:01.115452 master-0 kubenswrapper[29097]: I0312 18:50:01.115427 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-kube-api-access-zrlmt" (OuterVolumeSpecName: "kube-api-access-zrlmt") pod "b4177d20-cc30-4b7f-872e-2c8692ee6b8e" (UID: "b4177d20-cc30-4b7f-872e-2c8692ee6b8e"). InnerVolumeSpecName "kube-api-access-zrlmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:50:01.136832 master-0 kubenswrapper[29097]: I0312 18:50:01.136769 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4177d20-cc30-4b7f-872e-2c8692ee6b8e" (UID: "b4177d20-cc30-4b7f-872e-2c8692ee6b8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:50:01.137047 master-0 kubenswrapper[29097]: I0312 18:50:01.136985 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-config-data" (OuterVolumeSpecName: "config-data") pod "b4177d20-cc30-4b7f-872e-2c8692ee6b8e" (UID: "b4177d20-cc30-4b7f-872e-2c8692ee6b8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:50:01.162199 master-0 kubenswrapper[29097]: I0312 18:50:01.161557 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lfv8d" Mar 12 18:50:01.162859 master-0 kubenswrapper[29097]: I0312 18:50:01.162759 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lfv8d" event={"ID":"b4177d20-cc30-4b7f-872e-2c8692ee6b8e","Type":"ContainerDied","Data":"49b7959a4663e18ec213f99c9d616bbdbe5dfc41a1aa877fd7ded6eb1c9ccf88"} Mar 12 18:50:01.162859 master-0 kubenswrapper[29097]: I0312 18:50:01.162808 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49b7959a4663e18ec213f99c9d616bbdbe5dfc41a1aa877fd7ded6eb1c9ccf88" Mar 12 18:50:01.168614 master-0 kubenswrapper[29097]: I0312 18:50:01.167261 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6f1991af-1c66-45fe-beeb-42c620bc4836","Type":"ContainerStarted","Data":"5e74e6b9ff7cbd6322c507be7d301307cd51c2c705e9d24741d41cad162c413d"} Mar 12 18:50:01.168614 master-0 kubenswrapper[29097]: I0312 18:50:01.167289 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6f1991af-1c66-45fe-beeb-42c620bc4836","Type":"ContainerStarted","Data":"a544dc91029c7b7a59cf2b054149334d0c31b5c0791f4cec1449101c0575a1c5"} Mar 12 18:50:01.168614 master-0 kubenswrapper[29097]: I0312 18:50:01.167302 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6f1991af-1c66-45fe-beeb-42c620bc4836","Type":"ContainerStarted","Data":"ac1a096ed9c7ea9a548d46f769472c9ac812cc71abb2d3616f381cc1abd7fe3b"} Mar 12 18:50:01.171396 master-0 kubenswrapper[29097]: I0312 18:50:01.171374 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-p89p7" event={"ID":"ebb59236-a534-4ed8-9f62-1be13d1bdaf9","Type":"ContainerDied","Data":"65e177e295981aa50e51566e71ae15bdf2d9e9380109cfbd6efb25d75edbed44"} Mar 12 18:50:01.171549 master-0 kubenswrapper[29097]: I0312 18:50:01.171532 29097 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65e177e295981aa50e51566e71ae15bdf2d9e9380109cfbd6efb25d75edbed44" Mar 12 18:50:01.171659 master-0 kubenswrapper[29097]: I0312 18:50:01.171594 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-p89p7" Mar 12 18:50:01.182405 master-0 kubenswrapper[29097]: I0312 18:50:01.182351 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"7fb8fbc7-949d-4526-8456-fbf8277cee2f","Type":"ContainerStarted","Data":"4467d2ce405c0434111528362180bb1c999998a3adb775b863ce20bce879e8d5"} Mar 12 18:50:01.182501 master-0 kubenswrapper[29097]: I0312 18:50:01.182416 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"7fb8fbc7-949d-4526-8456-fbf8277cee2f","Type":"ContainerStarted","Data":"2786862b58a6f383b6ceecfad0be5dbe548a59446570ce0d3ac9cf6cc52d4d05"} Mar 12 18:50:01.182839 master-0 kubenswrapper[29097]: I0312 18:50:01.182813 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Mar 12 18:50:01.182939 master-0 kubenswrapper[29097]: I0312 18:50:01.182919 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Mar 12 18:50:01.195892 master-0 kubenswrapper[29097]: I0312 18:50:01.195806 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.195787962 podStartE2EDuration="2.195787962s" podCreationTimestamp="2026-03-12 18:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:50:01.185426786 +0000 UTC m=+1240.739406893" watchObservedRunningTime="2026-03-12 18:50:01.195787962 +0000 UTC m=+1240.749768049" Mar 12 18:50:01.214023 master-0 kubenswrapper[29097]: I0312 
18:50:01.213953 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:01.214401 master-0 kubenswrapper[29097]: I0312 18:50:01.214383 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrlmt\" (UniqueName: \"kubernetes.io/projected/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-kube-api-access-zrlmt\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:01.217668 master-0 kubenswrapper[29097]: I0312 18:50:01.217630 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:01.217817 master-0 kubenswrapper[29097]: I0312 18:50:01.217671 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4177d20-cc30-4b7f-872e-2c8692ee6b8e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:01.236571 master-0 kubenswrapper[29097]: I0312 18:50:01.236444 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=63.386737924 podStartE2EDuration="1m44.236425187s" podCreationTimestamp="2026-03-12 18:48:17 +0000 UTC" firstStartedPulling="2026-03-12 18:48:28.519317557 +0000 UTC m=+1148.073297654" lastFinishedPulling="2026-03-12 18:49:09.36900482 +0000 UTC m=+1188.922984917" observedRunningTime="2026-03-12 18:50:01.223434106 +0000 UTC m=+1240.777414223" watchObservedRunningTime="2026-03-12 18:50:01.236425187 +0000 UTC m=+1240.790405294" Mar 12 18:50:01.319813 master-0 kubenswrapper[29097]: I0312 18:50:01.319752 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-conductor-0" Mar 12 18:50:01.539759 master-0 kubenswrapper[29097]: I0312 18:50:01.533729 29097 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 12 18:50:01.539759 master-0 kubenswrapper[29097]: E0312 18:50:01.534348 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb59236-a534-4ed8-9f62-1be13d1bdaf9" containerName="nova-manage" Mar 12 18:50:01.539759 master-0 kubenswrapper[29097]: I0312 18:50:01.534369 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb59236-a534-4ed8-9f62-1be13d1bdaf9" containerName="nova-manage" Mar 12 18:50:01.539759 master-0 kubenswrapper[29097]: E0312 18:50:01.534386 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4177d20-cc30-4b7f-872e-2c8692ee6b8e" containerName="nova-cell1-conductor-db-sync" Mar 12 18:50:01.539759 master-0 kubenswrapper[29097]: I0312 18:50:01.534393 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4177d20-cc30-4b7f-872e-2c8692ee6b8e" containerName="nova-cell1-conductor-db-sync" Mar 12 18:50:01.539759 master-0 kubenswrapper[29097]: I0312 18:50:01.534659 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb59236-a534-4ed8-9f62-1be13d1bdaf9" containerName="nova-manage" Mar 12 18:50:01.539759 master-0 kubenswrapper[29097]: I0312 18:50:01.534686 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4177d20-cc30-4b7f-872e-2c8692ee6b8e" containerName="nova-cell1-conductor-db-sync" Mar 12 18:50:01.539759 master-0 kubenswrapper[29097]: I0312 18:50:01.535417 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 12 18:50:01.539759 master-0 kubenswrapper[29097]: I0312 18:50:01.538260 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 12 18:50:01.566904 master-0 kubenswrapper[29097]: I0312 18:50:01.566856 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 12 18:50:01.578866 master-0 kubenswrapper[29097]: I0312 18:50:01.578753 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 18:50:01.579049 master-0 kubenswrapper[29097]: I0312 18:50:01.579000 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="528b36cc-fcf4-4ab9-9723-8974a70458fa" containerName="nova-scheduler-scheduler" containerID="cri-o://d88f0e93f9a8e877d7b8acc4a3470c0c61e85839d8013057558b5735ad92c013" gracePeriod=30 Mar 12 18:50:01.592556 master-0 kubenswrapper[29097]: I0312 18:50:01.592217 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 18:50:01.592556 master-0 kubenswrapper[29097]: I0312 18:50:01.592460 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7" containerName="nova-api-log" containerID="cri-o://1b71d5e934cf86261ca5ad3a501cf664777fd060a8a51a72aebd292020b287c1" gracePeriod=30 Mar 12 18:50:01.595931 master-0 kubenswrapper[29097]: I0312 18:50:01.595735 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7" containerName="nova-api-api" containerID="cri-o://fc4b5285fa64750c63fb337cc8a2961735cc483e198eedab2ace1691a3f129bc" gracePeriod=30 Mar 12 18:50:01.630539 master-0 kubenswrapper[29097]: I0312 18:50:01.625855 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b4f4e23-ac5a-40db-a2bd-e0c71a571fb8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4b4f4e23-ac5a-40db-a2bd-e0c71a571fb8\") " pod="openstack/nova-cell1-conductor-0" Mar 12 18:50:01.630539 master-0 kubenswrapper[29097]: I0312 18:50:01.625952 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b4f4e23-ac5a-40db-a2bd-e0c71a571fb8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4b4f4e23-ac5a-40db-a2bd-e0c71a571fb8\") " pod="openstack/nova-cell1-conductor-0" Mar 12 18:50:01.630539 master-0 kubenswrapper[29097]: I0312 18:50:01.625985 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tg4c\" (UniqueName: \"kubernetes.io/projected/4b4f4e23-ac5a-40db-a2bd-e0c71a571fb8-kube-api-access-9tg4c\") pod \"nova-cell1-conductor-0\" (UID: \"4b4f4e23-ac5a-40db-a2bd-e0c71a571fb8\") " pod="openstack/nova-cell1-conductor-0" Mar 12 18:50:01.670634 master-0 kubenswrapper[29097]: I0312 18:50:01.669848 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 18:50:01.744179 master-0 kubenswrapper[29097]: I0312 18:50:01.742685 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b4f4e23-ac5a-40db-a2bd-e0c71a571fb8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4b4f4e23-ac5a-40db-a2bd-e0c71a571fb8\") " pod="openstack/nova-cell1-conductor-0" Mar 12 18:50:01.744179 master-0 kubenswrapper[29097]: I0312 18:50:01.742785 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b4f4e23-ac5a-40db-a2bd-e0c71a571fb8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4b4f4e23-ac5a-40db-a2bd-e0c71a571fb8\") " 
pod="openstack/nova-cell1-conductor-0" Mar 12 18:50:01.744179 master-0 kubenswrapper[29097]: I0312 18:50:01.742823 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tg4c\" (UniqueName: \"kubernetes.io/projected/4b4f4e23-ac5a-40db-a2bd-e0c71a571fb8-kube-api-access-9tg4c\") pod \"nova-cell1-conductor-0\" (UID: \"4b4f4e23-ac5a-40db-a2bd-e0c71a571fb8\") " pod="openstack/nova-cell1-conductor-0" Mar 12 18:50:01.755543 master-0 kubenswrapper[29097]: I0312 18:50:01.746141 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b4f4e23-ac5a-40db-a2bd-e0c71a571fb8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4b4f4e23-ac5a-40db-a2bd-e0c71a571fb8\") " pod="openstack/nova-cell1-conductor-0" Mar 12 18:50:01.755543 master-0 kubenswrapper[29097]: I0312 18:50:01.746952 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b4f4e23-ac5a-40db-a2bd-e0c71a571fb8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4b4f4e23-ac5a-40db-a2bd-e0c71a571fb8\") " pod="openstack/nova-cell1-conductor-0" Mar 12 18:50:01.761651 master-0 kubenswrapper[29097]: I0312 18:50:01.757786 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tg4c\" (UniqueName: \"kubernetes.io/projected/4b4f4e23-ac5a-40db-a2bd-e0c71a571fb8-kube-api-access-9tg4c\") pod \"nova-cell1-conductor-0\" (UID: \"4b4f4e23-ac5a-40db-a2bd-e0c71a571fb8\") " pod="openstack/nova-cell1-conductor-0" Mar 12 18:50:01.863382 master-0 kubenswrapper[29097]: I0312 18:50:01.863283 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 12 18:50:02.212725 master-0 kubenswrapper[29097]: I0312 18:50:02.212600 29097 generic.go:334] "Generic (PLEG): container finished" podID="fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7" containerID="1b71d5e934cf86261ca5ad3a501cf664777fd060a8a51a72aebd292020b287c1" exitCode=143 Mar 12 18:50:02.213236 master-0 kubenswrapper[29097]: I0312 18:50:02.212651 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7","Type":"ContainerDied","Data":"1b71d5e934cf86261ca5ad3a501cf664777fd060a8a51a72aebd292020b287c1"} Mar 12 18:50:02.350571 master-0 kubenswrapper[29097]: I0312 18:50:02.349226 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 12 18:50:02.867716 master-0 kubenswrapper[29097]: I0312 18:50:02.866754 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/ironic-conductor-0" podUID="7fb8fbc7-949d-4526-8456-fbf8277cee2f" containerName="ironic-conductor" probeResult="failure" output=< Mar 12 18:50:02.867716 master-0 kubenswrapper[29097]: ironic-conductor-0 is offline Mar 12 18:50:02.867716 master-0 kubenswrapper[29097]: > Mar 12 18:50:02.906620 master-0 kubenswrapper[29097]: I0312 18:50:02.906580 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 18:50:02.987287 master-0 kubenswrapper[29097]: I0312 18:50:02.987172 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbgxk\" (UniqueName: \"kubernetes.io/projected/528b36cc-fcf4-4ab9-9723-8974a70458fa-kube-api-access-wbgxk\") pod \"528b36cc-fcf4-4ab9-9723-8974a70458fa\" (UID: \"528b36cc-fcf4-4ab9-9723-8974a70458fa\") " Mar 12 18:50:02.987476 master-0 kubenswrapper[29097]: I0312 18:50:02.987303 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/528b36cc-fcf4-4ab9-9723-8974a70458fa-combined-ca-bundle\") pod \"528b36cc-fcf4-4ab9-9723-8974a70458fa\" (UID: \"528b36cc-fcf4-4ab9-9723-8974a70458fa\") " Mar 12 18:50:02.987476 master-0 kubenswrapper[29097]: I0312 18:50:02.987404 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/528b36cc-fcf4-4ab9-9723-8974a70458fa-config-data\") pod \"528b36cc-fcf4-4ab9-9723-8974a70458fa\" (UID: \"528b36cc-fcf4-4ab9-9723-8974a70458fa\") " Mar 12 18:50:02.998550 master-0 kubenswrapper[29097]: I0312 18:50:02.994758 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/528b36cc-fcf4-4ab9-9723-8974a70458fa-kube-api-access-wbgxk" (OuterVolumeSpecName: "kube-api-access-wbgxk") pod "528b36cc-fcf4-4ab9-9723-8974a70458fa" (UID: "528b36cc-fcf4-4ab9-9723-8974a70458fa"). InnerVolumeSpecName "kube-api-access-wbgxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:50:03.031258 master-0 kubenswrapper[29097]: I0312 18:50:03.026935 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/528b36cc-fcf4-4ab9-9723-8974a70458fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "528b36cc-fcf4-4ab9-9723-8974a70458fa" (UID: "528b36cc-fcf4-4ab9-9723-8974a70458fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:50:03.032566 master-0 kubenswrapper[29097]: I0312 18:50:03.032256 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/528b36cc-fcf4-4ab9-9723-8974a70458fa-config-data" (OuterVolumeSpecName: "config-data") pod "528b36cc-fcf4-4ab9-9723-8974a70458fa" (UID: "528b36cc-fcf4-4ab9-9723-8974a70458fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:50:03.097254 master-0 kubenswrapper[29097]: I0312 18:50:03.096831 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbgxk\" (UniqueName: \"kubernetes.io/projected/528b36cc-fcf4-4ab9-9723-8974a70458fa-kube-api-access-wbgxk\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:03.097435 master-0 kubenswrapper[29097]: I0312 18:50:03.097427 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/528b36cc-fcf4-4ab9-9723-8974a70458fa-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:03.097488 master-0 kubenswrapper[29097]: I0312 18:50:03.097444 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/528b36cc-fcf4-4ab9-9723-8974a70458fa-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:03.223132 master-0 kubenswrapper[29097]: I0312 18:50:03.223069 29097 generic.go:334] "Generic (PLEG): container finished" 
podID="528b36cc-fcf4-4ab9-9723-8974a70458fa" containerID="d88f0e93f9a8e877d7b8acc4a3470c0c61e85839d8013057558b5735ad92c013" exitCode=0 Mar 12 18:50:03.223132 master-0 kubenswrapper[29097]: I0312 18:50:03.223137 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"528b36cc-fcf4-4ab9-9723-8974a70458fa","Type":"ContainerDied","Data":"d88f0e93f9a8e877d7b8acc4a3470c0c61e85839d8013057558b5735ad92c013"} Mar 12 18:50:03.223695 master-0 kubenswrapper[29097]: I0312 18:50:03.223162 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"528b36cc-fcf4-4ab9-9723-8974a70458fa","Type":"ContainerDied","Data":"ad05b63c96564f4f8c19ab9bdd5f1ba5e69f9776b719604e8d71b463ed597bda"} Mar 12 18:50:03.223695 master-0 kubenswrapper[29097]: I0312 18:50:03.223178 29097 scope.go:117] "RemoveContainer" containerID="d88f0e93f9a8e877d7b8acc4a3470c0c61e85839d8013057558b5735ad92c013" Mar 12 18:50:03.223695 master-0 kubenswrapper[29097]: I0312 18:50:03.223277 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 18:50:03.229776 master-0 kubenswrapper[29097]: I0312 18:50:03.229730 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4b4f4e23-ac5a-40db-a2bd-e0c71a571fb8","Type":"ContainerStarted","Data":"f2d4b840e8083b48bf2a0dccfc13438a3a2f2919b12245880d38822587fddc9f"} Mar 12 18:50:03.229865 master-0 kubenswrapper[29097]: I0312 18:50:03.229803 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4b4f4e23-ac5a-40db-a2bd-e0c71a571fb8","Type":"ContainerStarted","Data":"65b6d55b71d5b16d22bd51b69ca21fe3223cd34606210dfd728a4e95d7289c49"} Mar 12 18:50:03.229983 master-0 kubenswrapper[29097]: I0312 18:50:03.229965 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 12 18:50:03.229983 master-0 kubenswrapper[29097]: I0312 18:50:03.229964 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6f1991af-1c66-45fe-beeb-42c620bc4836" containerName="nova-metadata-log" containerID="cri-o://a544dc91029c7b7a59cf2b054149334d0c31b5c0791f4cec1449101c0575a1c5" gracePeriod=30 Mar 12 18:50:03.230066 master-0 kubenswrapper[29097]: I0312 18:50:03.230011 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6f1991af-1c66-45fe-beeb-42c620bc4836" containerName="nova-metadata-metadata" containerID="cri-o://5e74e6b9ff7cbd6322c507be7d301307cd51c2c705e9d24741d41cad162c413d" gracePeriod=30 Mar 12 18:50:03.268504 master-0 kubenswrapper[29097]: I0312 18:50:03.268460 29097 scope.go:117] "RemoveContainer" containerID="d88f0e93f9a8e877d7b8acc4a3470c0c61e85839d8013057558b5735ad92c013" Mar 12 18:50:03.268952 master-0 kubenswrapper[29097]: I0312 18:50:03.268656 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.268634022 podStartE2EDuration="2.268634022s" podCreationTimestamp="2026-03-12 18:50:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:50:03.254598075 +0000 UTC m=+1242.808578212" watchObservedRunningTime="2026-03-12 18:50:03.268634022 +0000 UTC m=+1242.822614129" Mar 12 18:50:03.269996 master-0 kubenswrapper[29097]: E0312 18:50:03.269950 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d88f0e93f9a8e877d7b8acc4a3470c0c61e85839d8013057558b5735ad92c013\": container with ID starting with d88f0e93f9a8e877d7b8acc4a3470c0c61e85839d8013057558b5735ad92c013 not found: ID does not exist" containerID="d88f0e93f9a8e877d7b8acc4a3470c0c61e85839d8013057558b5735ad92c013" Mar 12 18:50:03.270087 master-0 kubenswrapper[29097]: I0312 18:50:03.270010 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d88f0e93f9a8e877d7b8acc4a3470c0c61e85839d8013057558b5735ad92c013"} err="failed to get container status \"d88f0e93f9a8e877d7b8acc4a3470c0c61e85839d8013057558b5735ad92c013\": rpc error: code = NotFound desc = could not find container \"d88f0e93f9a8e877d7b8acc4a3470c0c61e85839d8013057558b5735ad92c013\": container with ID starting with d88f0e93f9a8e877d7b8acc4a3470c0c61e85839d8013057558b5735ad92c013 not found: ID does not exist" Mar 12 18:50:03.291592 master-0 kubenswrapper[29097]: I0312 18:50:03.291527 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 18:50:03.309575 master-0 kubenswrapper[29097]: I0312 18:50:03.309475 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 18:50:03.324628 master-0 kubenswrapper[29097]: I0312 18:50:03.324562 29097 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-scheduler-0"] Mar 12 18:50:03.325142 master-0 kubenswrapper[29097]: E0312 18:50:03.325122 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="528b36cc-fcf4-4ab9-9723-8974a70458fa" containerName="nova-scheduler-scheduler" Mar 12 18:50:03.325142 master-0 kubenswrapper[29097]: I0312 18:50:03.325142 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="528b36cc-fcf4-4ab9-9723-8974a70458fa" containerName="nova-scheduler-scheduler" Mar 12 18:50:03.325414 master-0 kubenswrapper[29097]: I0312 18:50:03.325398 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="528b36cc-fcf4-4ab9-9723-8974a70458fa" containerName="nova-scheduler-scheduler" Mar 12 18:50:03.326297 master-0 kubenswrapper[29097]: I0312 18:50:03.326272 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 18:50:03.329911 master-0 kubenswrapper[29097]: I0312 18:50:03.329876 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 12 18:50:03.345554 master-0 kubenswrapper[29097]: I0312 18:50:03.339708 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 18:50:03.405655 master-0 kubenswrapper[29097]: I0312 18:50:03.405580 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvfbm\" (UniqueName: \"kubernetes.io/projected/2e9566f3-47c9-4d4a-b503-ac3374cc7ecd-kube-api-access-cvfbm\") pod \"nova-scheduler-0\" (UID: \"2e9566f3-47c9-4d4a-b503-ac3374cc7ecd\") " pod="openstack/nova-scheduler-0" Mar 12 18:50:03.405822 master-0 kubenswrapper[29097]: I0312 18:50:03.405689 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9566f3-47c9-4d4a-b503-ac3374cc7ecd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"2e9566f3-47c9-4d4a-b503-ac3374cc7ecd\") " pod="openstack/nova-scheduler-0" Mar 12 18:50:03.405822 master-0 kubenswrapper[29097]: I0312 18:50:03.405728 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9566f3-47c9-4d4a-b503-ac3374cc7ecd-config-data\") pod \"nova-scheduler-0\" (UID: \"2e9566f3-47c9-4d4a-b503-ac3374cc7ecd\") " pod="openstack/nova-scheduler-0" Mar 12 18:50:03.507776 master-0 kubenswrapper[29097]: I0312 18:50:03.507636 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvfbm\" (UniqueName: \"kubernetes.io/projected/2e9566f3-47c9-4d4a-b503-ac3374cc7ecd-kube-api-access-cvfbm\") pod \"nova-scheduler-0\" (UID: \"2e9566f3-47c9-4d4a-b503-ac3374cc7ecd\") " pod="openstack/nova-scheduler-0" Mar 12 18:50:03.508070 master-0 kubenswrapper[29097]: I0312 18:50:03.508051 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9566f3-47c9-4d4a-b503-ac3374cc7ecd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2e9566f3-47c9-4d4a-b503-ac3374cc7ecd\") " pod="openstack/nova-scheduler-0" Mar 12 18:50:03.508176 master-0 kubenswrapper[29097]: I0312 18:50:03.508162 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9566f3-47c9-4d4a-b503-ac3374cc7ecd-config-data\") pod \"nova-scheduler-0\" (UID: \"2e9566f3-47c9-4d4a-b503-ac3374cc7ecd\") " pod="openstack/nova-scheduler-0" Mar 12 18:50:03.511711 master-0 kubenswrapper[29097]: I0312 18:50:03.511683 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9566f3-47c9-4d4a-b503-ac3374cc7ecd-config-data\") pod \"nova-scheduler-0\" (UID: \"2e9566f3-47c9-4d4a-b503-ac3374cc7ecd\") " pod="openstack/nova-scheduler-0" Mar 12 18:50:03.512468 
master-0 kubenswrapper[29097]: I0312 18:50:03.512410 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9566f3-47c9-4d4a-b503-ac3374cc7ecd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2e9566f3-47c9-4d4a-b503-ac3374cc7ecd\") " pod="openstack/nova-scheduler-0" Mar 12 18:50:03.525019 master-0 kubenswrapper[29097]: I0312 18:50:03.524980 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvfbm\" (UniqueName: \"kubernetes.io/projected/2e9566f3-47c9-4d4a-b503-ac3374cc7ecd-kube-api-access-cvfbm\") pod \"nova-scheduler-0\" (UID: \"2e9566f3-47c9-4d4a-b503-ac3374cc7ecd\") " pod="openstack/nova-scheduler-0" Mar 12 18:50:03.659270 master-0 kubenswrapper[29097]: I0312 18:50:03.659200 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 18:50:03.937920 master-0 kubenswrapper[29097]: I0312 18:50:03.937688 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 18:50:04.021734 master-0 kubenswrapper[29097]: I0312 18:50:04.021618 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f1991af-1c66-45fe-beeb-42c620bc4836-nova-metadata-tls-certs\") pod \"6f1991af-1c66-45fe-beeb-42c620bc4836\" (UID: \"6f1991af-1c66-45fe-beeb-42c620bc4836\") " Mar 12 18:50:04.021734 master-0 kubenswrapper[29097]: I0312 18:50:04.021718 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1991af-1c66-45fe-beeb-42c620bc4836-combined-ca-bundle\") pod \"6f1991af-1c66-45fe-beeb-42c620bc4836\" (UID: \"6f1991af-1c66-45fe-beeb-42c620bc4836\") " Mar 12 18:50:04.022064 master-0 kubenswrapper[29097]: I0312 18:50:04.021858 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5548d\" (UniqueName: \"kubernetes.io/projected/6f1991af-1c66-45fe-beeb-42c620bc4836-kube-api-access-5548d\") pod \"6f1991af-1c66-45fe-beeb-42c620bc4836\" (UID: \"6f1991af-1c66-45fe-beeb-42c620bc4836\") " Mar 12 18:50:04.022064 master-0 kubenswrapper[29097]: I0312 18:50:04.021901 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f1991af-1c66-45fe-beeb-42c620bc4836-logs\") pod \"6f1991af-1c66-45fe-beeb-42c620bc4836\" (UID: \"6f1991af-1c66-45fe-beeb-42c620bc4836\") " Mar 12 18:50:04.022064 master-0 kubenswrapper[29097]: I0312 18:50:04.021957 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1991af-1c66-45fe-beeb-42c620bc4836-config-data\") pod \"6f1991af-1c66-45fe-beeb-42c620bc4836\" (UID: \"6f1991af-1c66-45fe-beeb-42c620bc4836\") " Mar 12 18:50:04.030039 master-0 kubenswrapper[29097]: I0312 18:50:04.029981 29097 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f1991af-1c66-45fe-beeb-42c620bc4836-logs" (OuterVolumeSpecName: "logs") pod "6f1991af-1c66-45fe-beeb-42c620bc4836" (UID: "6f1991af-1c66-45fe-beeb-42c620bc4836"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:50:04.048254 master-0 kubenswrapper[29097]: I0312 18:50:04.046307 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f1991af-1c66-45fe-beeb-42c620bc4836-kube-api-access-5548d" (OuterVolumeSpecName: "kube-api-access-5548d") pod "6f1991af-1c66-45fe-beeb-42c620bc4836" (UID: "6f1991af-1c66-45fe-beeb-42c620bc4836"). InnerVolumeSpecName "kube-api-access-5548d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:50:04.052637 master-0 kubenswrapper[29097]: I0312 18:50:04.051266 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f1991af-1c66-45fe-beeb-42c620bc4836-config-data" (OuterVolumeSpecName: "config-data") pod "6f1991af-1c66-45fe-beeb-42c620bc4836" (UID: "6f1991af-1c66-45fe-beeb-42c620bc4836"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:50:04.068137 master-0 kubenswrapper[29097]: I0312 18:50:04.068080 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f1991af-1c66-45fe-beeb-42c620bc4836-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f1991af-1c66-45fe-beeb-42c620bc4836" (UID: "6f1991af-1c66-45fe-beeb-42c620bc4836"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:50:04.093142 master-0 kubenswrapper[29097]: I0312 18:50:04.093094 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f1991af-1c66-45fe-beeb-42c620bc4836-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "6f1991af-1c66-45fe-beeb-42c620bc4836" (UID: "6f1991af-1c66-45fe-beeb-42c620bc4836"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:50:04.124297 master-0 kubenswrapper[29097]: I0312 18:50:04.124236 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5548d\" (UniqueName: \"kubernetes.io/projected/6f1991af-1c66-45fe-beeb-42c620bc4836-kube-api-access-5548d\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:04.124297 master-0 kubenswrapper[29097]: I0312 18:50:04.124277 29097 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f1991af-1c66-45fe-beeb-42c620bc4836-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:04.124297 master-0 kubenswrapper[29097]: I0312 18:50:04.124289 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1991af-1c66-45fe-beeb-42c620bc4836-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:04.124297 master-0 kubenswrapper[29097]: I0312 18:50:04.124299 29097 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f1991af-1c66-45fe-beeb-42c620bc4836-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:04.124605 master-0 kubenswrapper[29097]: I0312 18:50:04.124327 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1991af-1c66-45fe-beeb-42c620bc4836-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:04.260418 
master-0 kubenswrapper[29097]: I0312 18:50:04.259269 29097 generic.go:334] "Generic (PLEG): container finished" podID="6f1991af-1c66-45fe-beeb-42c620bc4836" containerID="5e74e6b9ff7cbd6322c507be7d301307cd51c2c705e9d24741d41cad162c413d" exitCode=0 Mar 12 18:50:04.260418 master-0 kubenswrapper[29097]: I0312 18:50:04.259310 29097 generic.go:334] "Generic (PLEG): container finished" podID="6f1991af-1c66-45fe-beeb-42c620bc4836" containerID="a544dc91029c7b7a59cf2b054149334d0c31b5c0791f4cec1449101c0575a1c5" exitCode=143 Mar 12 18:50:04.260418 master-0 kubenswrapper[29097]: I0312 18:50:04.259361 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6f1991af-1c66-45fe-beeb-42c620bc4836","Type":"ContainerDied","Data":"5e74e6b9ff7cbd6322c507be7d301307cd51c2c705e9d24741d41cad162c413d"} Mar 12 18:50:04.260418 master-0 kubenswrapper[29097]: I0312 18:50:04.259398 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6f1991af-1c66-45fe-beeb-42c620bc4836","Type":"ContainerDied","Data":"a544dc91029c7b7a59cf2b054149334d0c31b5c0791f4cec1449101c0575a1c5"} Mar 12 18:50:04.260418 master-0 kubenswrapper[29097]: I0312 18:50:04.259412 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6f1991af-1c66-45fe-beeb-42c620bc4836","Type":"ContainerDied","Data":"ac1a096ed9c7ea9a548d46f769472c9ac812cc71abb2d3616f381cc1abd7fe3b"} Mar 12 18:50:04.260418 master-0 kubenswrapper[29097]: I0312 18:50:04.259433 29097 scope.go:117] "RemoveContainer" containerID="5e74e6b9ff7cbd6322c507be7d301307cd51c2c705e9d24741d41cad162c413d" Mar 12 18:50:04.260418 master-0 kubenswrapper[29097]: I0312 18:50:04.259598 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0"
Mar 12 18:50:04.262871 master-0 kubenswrapper[29097]: I0312 18:50:04.262684 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 12 18:50:04.273458 master-0 kubenswrapper[29097]: I0312 18:50:04.273409 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2e9566f3-47c9-4d4a-b503-ac3374cc7ecd","Type":"ContainerStarted","Data":"62b5c66518e299424eef81f57b1cc6398b7adeb94525604f0d1a41552c22c332"}
Mar 12 18:50:04.321365 master-0 kubenswrapper[29097]: I0312 18:50:04.316126 29097 scope.go:117] "RemoveContainer" containerID="a544dc91029c7b7a59cf2b054149334d0c31b5c0791f4cec1449101c0575a1c5"
Mar 12 18:50:04.334623 master-0 kubenswrapper[29097]: I0312 18:50:04.334578 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 18:50:04.347770 master-0 kubenswrapper[29097]: I0312 18:50:04.347731 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 18:50:04.370173 master-0 kubenswrapper[29097]: I0312 18:50:04.369686 29097 scope.go:117] "RemoveContainer" containerID="5e74e6b9ff7cbd6322c507be7d301307cd51c2c705e9d24741d41cad162c413d"
Mar 12 18:50:04.370574 master-0 kubenswrapper[29097]: E0312 18:50:04.370434 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e74e6b9ff7cbd6322c507be7d301307cd51c2c705e9d24741d41cad162c413d\": container with ID starting with 5e74e6b9ff7cbd6322c507be7d301307cd51c2c705e9d24741d41cad162c413d not found: ID does not exist" containerID="5e74e6b9ff7cbd6322c507be7d301307cd51c2c705e9d24741d41cad162c413d"
Mar 12 18:50:04.370574 master-0 kubenswrapper[29097]: I0312 18:50:04.370465 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e74e6b9ff7cbd6322c507be7d301307cd51c2c705e9d24741d41cad162c413d"} err="failed to get container status \"5e74e6b9ff7cbd6322c507be7d301307cd51c2c705e9d24741d41cad162c413d\": rpc error: code = NotFound desc = could not find container \"5e74e6b9ff7cbd6322c507be7d301307cd51c2c705e9d24741d41cad162c413d\": container with ID starting with 5e74e6b9ff7cbd6322c507be7d301307cd51c2c705e9d24741d41cad162c413d not found: ID does not exist"
Mar 12 18:50:04.370574 master-0 kubenswrapper[29097]: I0312 18:50:04.370487 29097 scope.go:117] "RemoveContainer" containerID="a544dc91029c7b7a59cf2b054149334d0c31b5c0791f4cec1449101c0575a1c5"
Mar 12 18:50:04.370863 master-0 kubenswrapper[29097]: E0312 18:50:04.370787 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a544dc91029c7b7a59cf2b054149334d0c31b5c0791f4cec1449101c0575a1c5\": container with ID starting with a544dc91029c7b7a59cf2b054149334d0c31b5c0791f4cec1449101c0575a1c5 not found: ID does not exist" containerID="a544dc91029c7b7a59cf2b054149334d0c31b5c0791f4cec1449101c0575a1c5"
Mar 12 18:50:04.370863 master-0 kubenswrapper[29097]: I0312 18:50:04.370810 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a544dc91029c7b7a59cf2b054149334d0c31b5c0791f4cec1449101c0575a1c5"} err="failed to get container status \"a544dc91029c7b7a59cf2b054149334d0c31b5c0791f4cec1449101c0575a1c5\": rpc error: code = NotFound desc = could not find container \"a544dc91029c7b7a59cf2b054149334d0c31b5c0791f4cec1449101c0575a1c5\": container with ID starting with a544dc91029c7b7a59cf2b054149334d0c31b5c0791f4cec1449101c0575a1c5 not found: ID does not exist"
Mar 12 18:50:04.370863 master-0 kubenswrapper[29097]: I0312 18:50:04.370827 29097 scope.go:117] "RemoveContainer" containerID="5e74e6b9ff7cbd6322c507be7d301307cd51c2c705e9d24741d41cad162c413d"
Mar 12 18:50:04.371190 master-0 kubenswrapper[29097]: I0312 18:50:04.371136 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e74e6b9ff7cbd6322c507be7d301307cd51c2c705e9d24741d41cad162c413d"} err="failed to get container status \"5e74e6b9ff7cbd6322c507be7d301307cd51c2c705e9d24741d41cad162c413d\": rpc error: code = NotFound desc = could not find container \"5e74e6b9ff7cbd6322c507be7d301307cd51c2c705e9d24741d41cad162c413d\": container with ID starting with 5e74e6b9ff7cbd6322c507be7d301307cd51c2c705e9d24741d41cad162c413d not found: ID does not exist"
Mar 12 18:50:04.371190 master-0 kubenswrapper[29097]: I0312 18:50:04.371178 29097 scope.go:117] "RemoveContainer" containerID="a544dc91029c7b7a59cf2b054149334d0c31b5c0791f4cec1449101c0575a1c5"
Mar 12 18:50:04.371478 master-0 kubenswrapper[29097]: I0312 18:50:04.371450 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a544dc91029c7b7a59cf2b054149334d0c31b5c0791f4cec1449101c0575a1c5"} err="failed to get container status \"a544dc91029c7b7a59cf2b054149334d0c31b5c0791f4cec1449101c0575a1c5\": rpc error: code = NotFound desc = could not find container \"a544dc91029c7b7a59cf2b054149334d0c31b5c0791f4cec1449101c0575a1c5\": container with ID starting with a544dc91029c7b7a59cf2b054149334d0c31b5c0791f4cec1449101c0575a1c5 not found: ID does not exist"
Mar 12 18:50:04.442638 master-0 kubenswrapper[29097]: I0312 18:50:04.442590 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 18:50:04.443436 master-0 kubenswrapper[29097]: E0312 18:50:04.443417 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f1991af-1c66-45fe-beeb-42c620bc4836" containerName="nova-metadata-log"
Mar 12 18:50:04.443522 master-0 kubenswrapper[29097]: I0312 18:50:04.443499 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f1991af-1c66-45fe-beeb-42c620bc4836" containerName="nova-metadata-log"
Mar 12 18:50:04.443632 master-0 kubenswrapper[29097]: E0312 18:50:04.443620 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f1991af-1c66-45fe-beeb-42c620bc4836" containerName="nova-metadata-metadata"
Mar 12 18:50:04.443692 master-0 kubenswrapper[29097]: I0312 18:50:04.443682 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f1991af-1c66-45fe-beeb-42c620bc4836" containerName="nova-metadata-metadata"
Mar 12 18:50:04.443950 master-0 kubenswrapper[29097]: I0312 18:50:04.443937 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f1991af-1c66-45fe-beeb-42c620bc4836" containerName="nova-metadata-log"
Mar 12 18:50:04.444455 master-0 kubenswrapper[29097]: I0312 18:50:04.444441 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f1991af-1c66-45fe-beeb-42c620bc4836" containerName="nova-metadata-metadata"
Mar 12 18:50:04.449034 master-0 kubenswrapper[29097]: I0312 18:50:04.449004 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 12 18:50:04.452590 master-0 kubenswrapper[29097]: I0312 18:50:04.452498 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Mar 12 18:50:04.452723 master-0 kubenswrapper[29097]: I0312 18:50:04.452699 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 12 18:50:04.472019 master-0 kubenswrapper[29097]: I0312 18:50:04.471761 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 18:50:04.533870 master-0 kubenswrapper[29097]: I0312 18:50:04.533760 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-logs\") pod \"nova-metadata-0\" (UID: \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\") " pod="openstack/nova-metadata-0"
Mar 12 18:50:04.534171 master-0 kubenswrapper[29097]: I0312 18:50:04.534120 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\") " pod="openstack/nova-metadata-0"
Mar 12 18:50:04.534243 master-0 kubenswrapper[29097]: I0312 18:50:04.534213 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\") " pod="openstack/nova-metadata-0"
Mar 12 18:50:04.534585 master-0 kubenswrapper[29097]: I0312 18:50:04.534551 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zhhw\" (UniqueName: \"kubernetes.io/projected/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-kube-api-access-4zhhw\") pod \"nova-metadata-0\" (UID: \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\") " pod="openstack/nova-metadata-0"
Mar 12 18:50:04.534725 master-0 kubenswrapper[29097]: I0312 18:50:04.534699 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-config-data\") pod \"nova-metadata-0\" (UID: \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\") " pod="openstack/nova-metadata-0"
Mar 12 18:50:04.627672 master-0 kubenswrapper[29097]: I0312 18:50:04.627566 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-conductor-0"
Mar 12 18:50:04.636507 master-0 kubenswrapper[29097]: I0312 18:50:04.636455 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\") " pod="openstack/nova-metadata-0"
Mar 12 18:50:04.636633 master-0 kubenswrapper[29097]: I0312 18:50:04.636532 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\") " pod="openstack/nova-metadata-0"
Mar 12 18:50:04.636673 master-0 kubenswrapper[29097]: I0312 18:50:04.636642 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zhhw\" (UniqueName: \"kubernetes.io/projected/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-kube-api-access-4zhhw\") pod \"nova-metadata-0\" (UID: \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\") " pod="openstack/nova-metadata-0"
Mar 12 18:50:04.636726 master-0 kubenswrapper[29097]: I0312 18:50:04.636706 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-config-data\") pod \"nova-metadata-0\" (UID: \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\") " pod="openstack/nova-metadata-0"
Mar 12 18:50:04.636780 master-0 kubenswrapper[29097]: I0312 18:50:04.636735 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-logs\") pod \"nova-metadata-0\" (UID: \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\") " pod="openstack/nova-metadata-0"
Mar 12 18:50:04.638939 master-0 kubenswrapper[29097]: I0312 18:50:04.638870 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-logs\") pod \"nova-metadata-0\" (UID: \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\") " pod="openstack/nova-metadata-0"
Mar 12 18:50:04.642296 master-0 kubenswrapper[29097]: I0312 18:50:04.642265 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\") " pod="openstack/nova-metadata-0"
Mar 12 18:50:04.642865 master-0 kubenswrapper[29097]: I0312 18:50:04.642758 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-config-data\") pod \"nova-metadata-0\" (UID: \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\") " pod="openstack/nova-metadata-0"
Mar 12 18:50:04.643921 master-0 kubenswrapper[29097]: I0312 18:50:04.643893 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\") " pod="openstack/nova-metadata-0"
Mar 12 18:50:04.651339 master-0 kubenswrapper[29097]: I0312 18:50:04.651308 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zhhw\" (UniqueName: \"kubernetes.io/projected/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-kube-api-access-4zhhw\") pod \"nova-metadata-0\" (UID: \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\") " pod="openstack/nova-metadata-0"
Mar 12 18:50:04.665768 master-0 kubenswrapper[29097]: I0312 18:50:04.665732 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0"
Mar 12 18:50:04.738552 master-0 kubenswrapper[29097]: I0312 18:50:04.738491 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="528b36cc-fcf4-4ab9-9723-8974a70458fa" path="/var/lib/kubelet/pods/528b36cc-fcf4-4ab9-9723-8974a70458fa/volumes"
Mar 12 18:50:04.739331 master-0 kubenswrapper[29097]: I0312 18:50:04.739316 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f1991af-1c66-45fe-beeb-42c620bc4836" path="/var/lib/kubelet/pods/6f1991af-1c66-45fe-beeb-42c620bc4836/volumes"
Mar 12 18:50:04.786428 master-0 kubenswrapper[29097]: I0312 18:50:04.786387 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 12 18:50:05.291190 master-0 kubenswrapper[29097]: I0312 18:50:05.291134 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 12 18:50:05.291190 master-0 kubenswrapper[29097]: I0312 18:50:05.291148 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2e9566f3-47c9-4d4a-b503-ac3374cc7ecd","Type":"ContainerStarted","Data":"8dc638aaf2d22a3e5e2a2537c8cfff8ed7a45b0ce2a06c3fc72e55e8782bb11d"}
Mar 12 18:50:05.293216 master-0 kubenswrapper[29097]: I0312 18:50:05.293185 29097 generic.go:334] "Generic (PLEG): container finished" podID="fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7" containerID="fc4b5285fa64750c63fb337cc8a2961735cc483e198eedab2ace1691a3f129bc" exitCode=0
Mar 12 18:50:05.293328 master-0 kubenswrapper[29097]: I0312 18:50:05.293260 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7","Type":"ContainerDied","Data":"fc4b5285fa64750c63fb337cc8a2961735cc483e198eedab2ace1691a3f129bc"}
Mar 12 18:50:05.293328 master-0 kubenswrapper[29097]: I0312 18:50:05.293291 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7","Type":"ContainerDied","Data":"66b18f8ba5465d5788303e56b3bf567ffcc955601a151080d806de7e2c528be0"}
Mar 12 18:50:05.293328 master-0 kubenswrapper[29097]: I0312 18:50:05.293311 29097 scope.go:117] "RemoveContainer" containerID="fc4b5285fa64750c63fb337cc8a2961735cc483e198eedab2ace1691a3f129bc"
Mar 12 18:50:05.302642 master-0 kubenswrapper[29097]: I0312 18:50:05.302298 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0"
Mar 12 18:50:05.344933 master-0 kubenswrapper[29097]: I0312 18:50:05.344901 29097 scope.go:117] "RemoveContainer" containerID="1b71d5e934cf86261ca5ad3a501cf664777fd060a8a51a72aebd292020b287c1"
Mar 12 18:50:05.354325 master-0 kubenswrapper[29097]: I0312 18:50:05.353715 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-logs\") pod \"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7\" (UID: \"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7\") "
Mar 12 18:50:05.354325 master-0 kubenswrapper[29097]: I0312 18:50:05.353871 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpx2l\" (UniqueName: \"kubernetes.io/projected/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-kube-api-access-tpx2l\") pod \"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7\" (UID: \"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7\") "
Mar 12 18:50:05.354325 master-0 kubenswrapper[29097]: I0312 18:50:05.353920 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-combined-ca-bundle\") pod \"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7\" (UID: \"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7\") "
Mar 12 18:50:05.354325 master-0 kubenswrapper[29097]: I0312 18:50:05.353973 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-config-data\") pod \"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7\" (UID: \"fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7\") "
Mar 12 18:50:05.355737 master-0 kubenswrapper[29097]: I0312 18:50:05.355704 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-logs" (OuterVolumeSpecName: "logs") pod "fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7" (UID: "fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 18:50:05.359759 master-0 kubenswrapper[29097]: I0312 18:50:05.359715 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-kube-api-access-tpx2l" (OuterVolumeSpecName: "kube-api-access-tpx2l") pod "fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7" (UID: "fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7"). InnerVolumeSpecName "kube-api-access-tpx2l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:50:05.369969 master-0 kubenswrapper[29097]: I0312 18:50:05.369908 29097 scope.go:117] "RemoveContainer" containerID="fc4b5285fa64750c63fb337cc8a2961735cc483e198eedab2ace1691a3f129bc"
Mar 12 18:50:05.370466 master-0 kubenswrapper[29097]: E0312 18:50:05.370303 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc4b5285fa64750c63fb337cc8a2961735cc483e198eedab2ace1691a3f129bc\": container with ID starting with fc4b5285fa64750c63fb337cc8a2961735cc483e198eedab2ace1691a3f129bc not found: ID does not exist" containerID="fc4b5285fa64750c63fb337cc8a2961735cc483e198eedab2ace1691a3f129bc"
Mar 12 18:50:05.370466 master-0 kubenswrapper[29097]: I0312 18:50:05.370344 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc4b5285fa64750c63fb337cc8a2961735cc483e198eedab2ace1691a3f129bc"} err="failed to get container status \"fc4b5285fa64750c63fb337cc8a2961735cc483e198eedab2ace1691a3f129bc\": rpc error: code = NotFound desc = could not find container \"fc4b5285fa64750c63fb337cc8a2961735cc483e198eedab2ace1691a3f129bc\": container with ID starting with fc4b5285fa64750c63fb337cc8a2961735cc483e198eedab2ace1691a3f129bc not found: ID does not exist"
Mar 12 18:50:05.370466 master-0 kubenswrapper[29097]: I0312 18:50:05.370371 29097 scope.go:117] "RemoveContainer" containerID="1b71d5e934cf86261ca5ad3a501cf664777fd060a8a51a72aebd292020b287c1"
Mar 12 18:50:05.370871 master-0 kubenswrapper[29097]: E0312 18:50:05.370807 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b71d5e934cf86261ca5ad3a501cf664777fd060a8a51a72aebd292020b287c1\": container with ID starting with 1b71d5e934cf86261ca5ad3a501cf664777fd060a8a51a72aebd292020b287c1 not found: ID does not exist" containerID="1b71d5e934cf86261ca5ad3a501cf664777fd060a8a51a72aebd292020b287c1"
Mar 12 18:50:05.370871 master-0 kubenswrapper[29097]: I0312 18:50:05.370840 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b71d5e934cf86261ca5ad3a501cf664777fd060a8a51a72aebd292020b287c1"} err="failed to get container status \"1b71d5e934cf86261ca5ad3a501cf664777fd060a8a51a72aebd292020b287c1\": rpc error: code = NotFound desc = could not find container \"1b71d5e934cf86261ca5ad3a501cf664777fd060a8a51a72aebd292020b287c1\": container with ID starting with 1b71d5e934cf86261ca5ad3a501cf664777fd060a8a51a72aebd292020b287c1 not found: ID does not exist"
Mar 12 18:50:05.372266 master-0 kubenswrapper[29097]: I0312 18:50:05.372217 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.372204001 podStartE2EDuration="2.372204001s" podCreationTimestamp="2026-03-12 18:50:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:50:05.341485941 +0000 UTC m=+1244.895466038" watchObservedRunningTime="2026-03-12 18:50:05.372204001 +0000 UTC m=+1244.926184098"
Mar 12 18:50:05.408226 master-0 kubenswrapper[29097]: I0312 18:50:05.408163 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-config-data" (OuterVolumeSpecName: "config-data") pod "fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7" (UID: "fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:50:05.413282 master-0 kubenswrapper[29097]: I0312 18:50:05.413213 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7" (UID: "fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:50:05.423995 master-0 kubenswrapper[29097]: I0312 18:50:05.423949 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 18:50:05.428353 master-0 kubenswrapper[29097]: W0312 18:50:05.428327 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cf93afc_4f3a_4ce9_9a63_aea8eef5c5c0.slice/crio-0418ffee65c2461734b595d8184e43a1f7652327942758f53004792ebffc9428 WatchSource:0}: Error finding container 0418ffee65c2461734b595d8184e43a1f7652327942758f53004792ebffc9428: Status 404 returned error can't find the container with id 0418ffee65c2461734b595d8184e43a1f7652327942758f53004792ebffc9428
Mar 12 18:50:05.457410 master-0 kubenswrapper[29097]: I0312 18:50:05.457266 29097 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-logs\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:05.457650 master-0 kubenswrapper[29097]: I0312 18:50:05.457603 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpx2l\" (UniqueName: \"kubernetes.io/projected/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-kube-api-access-tpx2l\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:05.457650 master-0 kubenswrapper[29097]: I0312 18:50:05.457621 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:05.457650 master-0 kubenswrapper[29097]: I0312 18:50:05.457632 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:06.330915 master-0 kubenswrapper[29097]: I0312 18:50:06.330845 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0","Type":"ContainerStarted","Data":"2bbc4f55bb9f69e9a349973ceefb5c27f4ff3d66fed89e9520e33e5785c30b01"}
Mar 12 18:50:06.331669 master-0 kubenswrapper[29097]: I0312 18:50:06.330920 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0","Type":"ContainerStarted","Data":"e68f61115681f205d51167cd68aca6bd143be1ae0ba9d6d3e9cb0339d38f4ed6"}
Mar 12 18:50:06.331669 master-0 kubenswrapper[29097]: I0312 18:50:06.330942 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0","Type":"ContainerStarted","Data":"0418ffee65c2461734b595d8184e43a1f7652327942758f53004792ebffc9428"}
Mar 12 18:50:06.335483 master-0 kubenswrapper[29097]: I0312 18:50:06.335422 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 12 18:50:06.370425 master-0 kubenswrapper[29097]: I0312 18:50:06.370326 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.370301613 podStartE2EDuration="2.370301613s" podCreationTimestamp="2026-03-12 18:50:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:50:06.355850486 +0000 UTC m=+1245.909830593" watchObservedRunningTime="2026-03-12 18:50:06.370301613 +0000 UTC m=+1245.924281710"
Mar 12 18:50:06.426760 master-0 kubenswrapper[29097]: I0312 18:50:06.422698 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 12 18:50:06.459561 master-0 kubenswrapper[29097]: I0312 18:50:06.459484 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Mar 12 18:50:06.476060 master-0 kubenswrapper[29097]: I0312 18:50:06.476013 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 12 18:50:06.476601 master-0 kubenswrapper[29097]: E0312 18:50:06.476581 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7" containerName="nova-api-api"
Mar 12 18:50:06.476670 master-0 kubenswrapper[29097]: I0312 18:50:06.476602 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7" containerName="nova-api-api"
Mar 12 18:50:06.476670 master-0 kubenswrapper[29097]: E0312 18:50:06.476657 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7" containerName="nova-api-log"
Mar 12 18:50:06.476670 master-0 kubenswrapper[29097]: I0312 18:50:06.476663 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7" containerName="nova-api-log"
Mar 12 18:50:06.476900 master-0 kubenswrapper[29097]: I0312 18:50:06.476882 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7" containerName="nova-api-api"
Mar 12 18:50:06.476950 master-0 kubenswrapper[29097]: I0312 18:50:06.476902 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7" containerName="nova-api-log"
Mar 12 18:50:06.478256 master-0 kubenswrapper[29097]: I0312 18:50:06.478227 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 12 18:50:06.480372 master-0 kubenswrapper[29097]: I0312 18:50:06.480330 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 12 18:50:06.491929 master-0 kubenswrapper[29097]: I0312 18:50:06.491853 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 12 18:50:06.609672 master-0 kubenswrapper[29097]: I0312 18:50:06.609533 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv865\" (UniqueName: \"kubernetes.io/projected/e6c93825-221d-45e5-a53b-6d3ca6382898-kube-api-access-tv865\") pod \"nova-api-0\" (UID: \"e6c93825-221d-45e5-a53b-6d3ca6382898\") " pod="openstack/nova-api-0"
Mar 12 18:50:06.610064 master-0 kubenswrapper[29097]: I0312 18:50:06.609940 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6c93825-221d-45e5-a53b-6d3ca6382898-logs\") pod \"nova-api-0\" (UID: \"e6c93825-221d-45e5-a53b-6d3ca6382898\") " pod="openstack/nova-api-0"
Mar 12 18:50:06.610298 master-0 kubenswrapper[29097]: I0312 18:50:06.610239 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6c93825-221d-45e5-a53b-6d3ca6382898-config-data\") pod \"nova-api-0\" (UID: \"e6c93825-221d-45e5-a53b-6d3ca6382898\") " pod="openstack/nova-api-0"
Mar 12 18:50:06.610358 master-0 kubenswrapper[29097]: I0312 18:50:06.610316 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6c93825-221d-45e5-a53b-6d3ca6382898-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e6c93825-221d-45e5-a53b-6d3ca6382898\") " pod="openstack/nova-api-0"
Mar 12 18:50:06.713054 master-0 kubenswrapper[29097]: I0312 18:50:06.712986 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6c93825-221d-45e5-a53b-6d3ca6382898-config-data\") pod \"nova-api-0\" (UID: \"e6c93825-221d-45e5-a53b-6d3ca6382898\") " pod="openstack/nova-api-0"
Mar 12 18:50:06.713270 master-0 kubenswrapper[29097]: I0312 18:50:06.713066 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6c93825-221d-45e5-a53b-6d3ca6382898-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e6c93825-221d-45e5-a53b-6d3ca6382898\") " pod="openstack/nova-api-0"
Mar 12 18:50:06.713270 master-0 kubenswrapper[29097]: I0312 18:50:06.713120 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv865\" (UniqueName: \"kubernetes.io/projected/e6c93825-221d-45e5-a53b-6d3ca6382898-kube-api-access-tv865\") pod \"nova-api-0\" (UID: \"e6c93825-221d-45e5-a53b-6d3ca6382898\") " pod="openstack/nova-api-0"
Mar 12 18:50:06.713270 master-0 kubenswrapper[29097]: I0312 18:50:06.713258 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6c93825-221d-45e5-a53b-6d3ca6382898-logs\") pod \"nova-api-0\" (UID: \"e6c93825-221d-45e5-a53b-6d3ca6382898\") " pod="openstack/nova-api-0"
Mar 12 18:50:06.713862 master-0 kubenswrapper[29097]: I0312 18:50:06.713787 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6c93825-221d-45e5-a53b-6d3ca6382898-logs\") pod \"nova-api-0\" (UID: \"e6c93825-221d-45e5-a53b-6d3ca6382898\") " pod="openstack/nova-api-0"
Mar 12 18:50:06.718270 master-0 kubenswrapper[29097]: I0312 18:50:06.718226 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6c93825-221d-45e5-a53b-6d3ca6382898-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e6c93825-221d-45e5-a53b-6d3ca6382898\") " pod="openstack/nova-api-0"
Mar 12 18:50:06.718461 master-0 kubenswrapper[29097]: I0312 18:50:06.718417 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6c93825-221d-45e5-a53b-6d3ca6382898-config-data\") pod \"nova-api-0\" (UID: \"e6c93825-221d-45e5-a53b-6d3ca6382898\") " pod="openstack/nova-api-0"
Mar 12 18:50:06.744223 master-0 kubenswrapper[29097]: I0312 18:50:06.744155 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7" path="/var/lib/kubelet/pods/fbbcc60a-f9e6-4fc1-bb10-f4416aa4c3b7/volumes"
Mar 12 18:50:06.744695 master-0 kubenswrapper[29097]: I0312 18:50:06.744663 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv865\" (UniqueName: \"kubernetes.io/projected/e6c93825-221d-45e5-a53b-6d3ca6382898-kube-api-access-tv865\") pod \"nova-api-0\" (UID: \"e6c93825-221d-45e5-a53b-6d3ca6382898\") " pod="openstack/nova-api-0"
Mar 12 18:50:06.811631 master-0 kubenswrapper[29097]: I0312 18:50:06.811583 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 12 18:50:07.278954 master-0 kubenswrapper[29097]: I0312 18:50:07.278903 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 12 18:50:07.349070 master-0 kubenswrapper[29097]: I0312 18:50:07.349003 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6c93825-221d-45e5-a53b-6d3ca6382898","Type":"ContainerStarted","Data":"97c78e1c35f5bcd6126986b1be78cbd3132aae8e8d62297bec9623eaa4e5f5ce"}
Mar 12 18:50:08.363555 master-0 kubenswrapper[29097]: I0312 18:50:08.362582 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6c93825-221d-45e5-a53b-6d3ca6382898","Type":"ContainerStarted","Data":"600f19a7338269927288c15224723daf8bf8502c936d02ab34b25009b3e0ed3b"}
Mar 12 18:50:08.363555 master-0 kubenswrapper[29097]: I0312 18:50:08.362643 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6c93825-221d-45e5-a53b-6d3ca6382898","Type":"ContainerStarted","Data":"34efb7f55bedff0664f2a36171941f64e101733a19cdb1bccb5fa9eba53da4d2"}
Mar 12 18:50:08.399738 master-0 kubenswrapper[29097]: I0312 18:50:08.399638 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.399593655 podStartE2EDuration="2.399593655s" podCreationTimestamp="2026-03-12 18:50:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:50:08.38161103 +0000 UTC m=+1247.935591167" watchObservedRunningTime="2026-03-12 18:50:08.399593655 +0000 UTC m=+1247.953573802"
Mar 12 18:50:08.660196 master-0 kubenswrapper[29097]: I0312 18:50:08.660059 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Mar 12 18:50:09.787131 master-0 kubenswrapper[29097]: I0312 18:50:09.787062 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 12 18:50:09.787131 master-0 kubenswrapper[29097]: I0312 18:50:09.787138 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 12 18:50:11.915111 master-0 kubenswrapper[29097]: I0312 18:50:11.915047 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Mar 12 18:50:13.661030 master-0 kubenswrapper[29097]: I0312 18:50:13.660951 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Mar 12 18:50:13.699350 master-0 kubenswrapper[29097]: I0312 18:50:13.699300 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Mar 12 18:50:14.486605 master-0 kubenswrapper[29097]: I0312 18:50:14.486537 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Mar 12 18:50:14.791440 master-0 kubenswrapper[29097]: I0312 18:50:14.791363 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Mar 12 18:50:14.791440 master-0 kubenswrapper[29097]: I0312 18:50:14.791423 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Mar 12 18:50:15.810556 master-0 kubenswrapper[29097]: I0312 18:50:15.809972 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.12:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:50:15.810556 master-0 kubenswrapper[29097]: I0312 18:50:15.810438 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.12:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:50:16.812959 master-0 kubenswrapper[29097]: I0312 18:50:16.812906 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 12 18:50:16.812959 master-0 kubenswrapper[29097]: I0312 18:50:16.812954 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 12 18:50:17.899852 master-0 kubenswrapper[29097]: I0312 18:50:17.899730 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e6c93825-221d-45e5-a53b-6d3ca6382898" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.13:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:50:17.899852 master-0 kubenswrapper[29097]: I0312 18:50:17.899745 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e6c93825-221d-45e5-a53b-6d3ca6382898" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.13:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:50:21.236633 master-0 kubenswrapper[29097]: E0312 18:50:21.236449 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547
Mar 12 18:50:22.571582 master-0 kubenswrapper[29097]: I0312 18:50:22.571527 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:50:22.574506 master-0 kubenswrapper[29097]: I0312 18:50:22.574226 29097 generic.go:334] "Generic (PLEG): container finished" podID="a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a" containerID="d931809c5cd7037170a35fb8841a94a5210a157e8a5dcec597397a98da5e0388" exitCode=137 Mar 12 18:50:22.574506 master-0 kubenswrapper[29097]: I0312 18:50:22.574277 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a","Type":"ContainerDied","Data":"d931809c5cd7037170a35fb8841a94a5210a157e8a5dcec597397a98da5e0388"} Mar 12 18:50:22.574506 master-0 kubenswrapper[29097]: I0312 18:50:22.574305 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a","Type":"ContainerDied","Data":"ad4818b617904a783f91f138c8c01c8a7bd617ab25c78fe5542ac88674c0cd50"} Mar 12 18:50:22.574506 master-0 kubenswrapper[29097]: I0312 18:50:22.574323 29097 scope.go:117] "RemoveContainer" containerID="d931809c5cd7037170a35fb8841a94a5210a157e8a5dcec597397a98da5e0388" Mar 12 18:50:22.574506 master-0 kubenswrapper[29097]: I0312 18:50:22.574472 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:50:22.597800 master-0 kubenswrapper[29097]: I0312 18:50:22.597679 29097 scope.go:117] "RemoveContainer" containerID="d931809c5cd7037170a35fb8841a94a5210a157e8a5dcec597397a98da5e0388" Mar 12 18:50:22.600811 master-0 kubenswrapper[29097]: E0312 18:50:22.600765 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d931809c5cd7037170a35fb8841a94a5210a157e8a5dcec597397a98da5e0388\": container with ID starting with d931809c5cd7037170a35fb8841a94a5210a157e8a5dcec597397a98da5e0388 not found: ID does not exist" containerID="d931809c5cd7037170a35fb8841a94a5210a157e8a5dcec597397a98da5e0388" Mar 12 18:50:22.600896 master-0 kubenswrapper[29097]: I0312 18:50:22.600823 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d931809c5cd7037170a35fb8841a94a5210a157e8a5dcec597397a98da5e0388"} err="failed to get container status \"d931809c5cd7037170a35fb8841a94a5210a157e8a5dcec597397a98da5e0388\": rpc error: code = NotFound desc = could not find container \"d931809c5cd7037170a35fb8841a94a5210a157e8a5dcec597397a98da5e0388\": container with ID starting with d931809c5cd7037170a35fb8841a94a5210a157e8a5dcec597397a98da5e0388 not found: ID does not exist" Mar 12 18:50:22.657992 master-0 kubenswrapper[29097]: I0312 18:50:22.657940 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a-config-data\") pod \"a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a\" (UID: \"a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a\") " Mar 12 18:50:22.658211 master-0 kubenswrapper[29097]: I0312 18:50:22.658101 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f7mx\" (UniqueName: \"kubernetes.io/projected/a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a-kube-api-access-4f7mx\") pod 
\"a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a\" (UID: \"a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a\") " Mar 12 18:50:22.658211 master-0 kubenswrapper[29097]: I0312 18:50:22.658147 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a-combined-ca-bundle\") pod \"a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a\" (UID: \"a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a\") " Mar 12 18:50:22.663311 master-0 kubenswrapper[29097]: I0312 18:50:22.663275 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a-kube-api-access-4f7mx" (OuterVolumeSpecName: "kube-api-access-4f7mx") pod "a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a" (UID: "a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a"). InnerVolumeSpecName "kube-api-access-4f7mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:50:22.689140 master-0 kubenswrapper[29097]: I0312 18:50:22.689078 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a-config-data" (OuterVolumeSpecName: "config-data") pod "a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a" (UID: "a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:50:22.699034 master-0 kubenswrapper[29097]: I0312 18:50:22.698985 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a" (UID: "a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:50:22.761328 master-0 kubenswrapper[29097]: I0312 18:50:22.761284 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:22.761436 master-0 kubenswrapper[29097]: I0312 18:50:22.761324 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f7mx\" (UniqueName: \"kubernetes.io/projected/a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a-kube-api-access-4f7mx\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:22.761472 master-0 kubenswrapper[29097]: I0312 18:50:22.761440 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:22.906141 master-0 kubenswrapper[29097]: I0312 18:50:22.906092 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 18:50:22.920121 master-0 kubenswrapper[29097]: I0312 18:50:22.920060 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 18:50:22.938418 master-0 kubenswrapper[29097]: I0312 18:50:22.938369 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 18:50:22.938935 master-0 kubenswrapper[29097]: E0312 18:50:22.938914 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a" containerName="nova-cell1-novncproxy-novncproxy" Mar 12 18:50:22.938935 master-0 kubenswrapper[29097]: I0312 18:50:22.938934 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a" containerName="nova-cell1-novncproxy-novncproxy" Mar 12 18:50:22.939250 master-0 kubenswrapper[29097]: I0312 18:50:22.939232 29097 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a" containerName="nova-cell1-novncproxy-novncproxy" Mar 12 18:50:22.940044 master-0 kubenswrapper[29097]: I0312 18:50:22.940025 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:50:22.942135 master-0 kubenswrapper[29097]: I0312 18:50:22.942092 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 12 18:50:22.942212 master-0 kubenswrapper[29097]: I0312 18:50:22.942164 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 12 18:50:22.942249 master-0 kubenswrapper[29097]: I0312 18:50:22.942193 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 12 18:50:22.948886 master-0 kubenswrapper[29097]: I0312 18:50:22.948824 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 18:50:23.067352 master-0 kubenswrapper[29097]: I0312 18:50:23.067304 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13894bb3-4d17-4821-ba67-6563c3dc676c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"13894bb3-4d17-4821-ba67-6563c3dc676c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:50:23.067352 master-0 kubenswrapper[29097]: I0312 18:50:23.067344 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/13894bb3-4d17-4821-ba67-6563c3dc676c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"13894bb3-4d17-4821-ba67-6563c3dc676c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:50:23.067607 master-0 kubenswrapper[29097]: I0312 
18:50:23.067365 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqb94\" (UniqueName: \"kubernetes.io/projected/13894bb3-4d17-4821-ba67-6563c3dc676c-kube-api-access-dqb94\") pod \"nova-cell1-novncproxy-0\" (UID: \"13894bb3-4d17-4821-ba67-6563c3dc676c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:50:23.067607 master-0 kubenswrapper[29097]: I0312 18:50:23.067416 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/13894bb3-4d17-4821-ba67-6563c3dc676c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"13894bb3-4d17-4821-ba67-6563c3dc676c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:50:23.068109 master-0 kubenswrapper[29097]: I0312 18:50:23.068024 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13894bb3-4d17-4821-ba67-6563c3dc676c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"13894bb3-4d17-4821-ba67-6563c3dc676c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:50:23.171183 master-0 kubenswrapper[29097]: I0312 18:50:23.171109 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13894bb3-4d17-4821-ba67-6563c3dc676c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"13894bb3-4d17-4821-ba67-6563c3dc676c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:50:23.171183 master-0 kubenswrapper[29097]: I0312 18:50:23.171172 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/13894bb3-4d17-4821-ba67-6563c3dc676c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"13894bb3-4d17-4821-ba67-6563c3dc676c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 
12 18:50:23.171458 master-0 kubenswrapper[29097]: I0312 18:50:23.171200 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqb94\" (UniqueName: \"kubernetes.io/projected/13894bb3-4d17-4821-ba67-6563c3dc676c-kube-api-access-dqb94\") pod \"nova-cell1-novncproxy-0\" (UID: \"13894bb3-4d17-4821-ba67-6563c3dc676c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:50:23.171458 master-0 kubenswrapper[29097]: I0312 18:50:23.171253 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/13894bb3-4d17-4821-ba67-6563c3dc676c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"13894bb3-4d17-4821-ba67-6563c3dc676c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:50:23.171458 master-0 kubenswrapper[29097]: I0312 18:50:23.171429 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13894bb3-4d17-4821-ba67-6563c3dc676c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"13894bb3-4d17-4821-ba67-6563c3dc676c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:50:23.174797 master-0 kubenswrapper[29097]: I0312 18:50:23.174762 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/13894bb3-4d17-4821-ba67-6563c3dc676c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"13894bb3-4d17-4821-ba67-6563c3dc676c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:50:23.175274 master-0 kubenswrapper[29097]: I0312 18:50:23.175243 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13894bb3-4d17-4821-ba67-6563c3dc676c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"13894bb3-4d17-4821-ba67-6563c3dc676c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 
18:50:23.178213 master-0 kubenswrapper[29097]: I0312 18:50:23.178166 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/13894bb3-4d17-4821-ba67-6563c3dc676c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"13894bb3-4d17-4821-ba67-6563c3dc676c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:50:23.180025 master-0 kubenswrapper[29097]: I0312 18:50:23.179986 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13894bb3-4d17-4821-ba67-6563c3dc676c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"13894bb3-4d17-4821-ba67-6563c3dc676c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:50:23.193589 master-0 kubenswrapper[29097]: I0312 18:50:23.190426 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqb94\" (UniqueName: \"kubernetes.io/projected/13894bb3-4d17-4821-ba67-6563c3dc676c-kube-api-access-dqb94\") pod \"nova-cell1-novncproxy-0\" (UID: \"13894bb3-4d17-4821-ba67-6563c3dc676c\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:50:23.320101 master-0 kubenswrapper[29097]: I0312 18:50:23.319970 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:50:23.798168 master-0 kubenswrapper[29097]: I0312 18:50:23.797897 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 18:50:24.613633 master-0 kubenswrapper[29097]: I0312 18:50:24.613522 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"13894bb3-4d17-4821-ba67-6563c3dc676c","Type":"ContainerStarted","Data":"a833dac7552b83f3aea8373d214429c230bfc3cd2123336e5e8a4e6730499d7a"} Mar 12 18:50:24.613633 master-0 kubenswrapper[29097]: I0312 18:50:24.613613 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"13894bb3-4d17-4821-ba67-6563c3dc676c","Type":"ContainerStarted","Data":"00ee3623b9a001888bc168203360b5f9b929ff3ea744bc37597b552ad5b597f0"} Mar 12 18:50:24.638391 master-0 kubenswrapper[29097]: I0312 18:50:24.637799 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.637772289 podStartE2EDuration="2.637772289s" podCreationTimestamp="2026-03-12 18:50:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:50:24.635247187 +0000 UTC m=+1264.189227334" watchObservedRunningTime="2026-03-12 18:50:24.637772289 +0000 UTC m=+1264.191752396" Mar 12 18:50:24.742206 master-0 kubenswrapper[29097]: I0312 18:50:24.742155 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a" path="/var/lib/kubelet/pods/a370f436-e3c2-4e3b-a5cb-e3927a5a4b4a/volumes" Mar 12 18:50:24.797196 master-0 kubenswrapper[29097]: I0312 18:50:24.797154 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 12 18:50:24.797597 master-0 kubenswrapper[29097]: I0312 18:50:24.797543 29097 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 12 18:50:24.802191 master-0 kubenswrapper[29097]: I0312 18:50:24.802144 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 12 18:50:25.628203 master-0 kubenswrapper[29097]: I0312 18:50:25.628125 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 12 18:50:26.828823 master-0 kubenswrapper[29097]: I0312 18:50:26.828775 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 12 18:50:26.829423 master-0 kubenswrapper[29097]: I0312 18:50:26.829192 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 12 18:50:26.832844 master-0 kubenswrapper[29097]: I0312 18:50:26.832799 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 12 18:50:26.843919 master-0 kubenswrapper[29097]: I0312 18:50:26.843855 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 12 18:50:27.691668 master-0 kubenswrapper[29097]: I0312 18:50:27.691457 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 12 18:50:27.696279 master-0 kubenswrapper[29097]: I0312 18:50:27.696166 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 12 18:50:27.995530 master-0 kubenswrapper[29097]: I0312 18:50:27.994887 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-94597dfc-f5psj"] Mar 12 18:50:27.997879 master-0 kubenswrapper[29097]: I0312 18:50:27.997840 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-94597dfc-f5psj" Mar 12 18:50:28.030330 master-0 kubenswrapper[29097]: I0312 18:50:28.030267 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-94597dfc-f5psj"] Mar 12 18:50:28.158912 master-0 kubenswrapper[29097]: I0312 18:50:28.158655 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c8759f5-06c8-4292-ac0b-13fae5fe1b3b-ovsdbserver-sb\") pod \"dnsmasq-dns-94597dfc-f5psj\" (UID: \"3c8759f5-06c8-4292-ac0b-13fae5fe1b3b\") " pod="openstack/dnsmasq-dns-94597dfc-f5psj" Mar 12 18:50:28.159106 master-0 kubenswrapper[29097]: I0312 18:50:28.158932 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c8759f5-06c8-4292-ac0b-13fae5fe1b3b-dns-swift-storage-0\") pod \"dnsmasq-dns-94597dfc-f5psj\" (UID: \"3c8759f5-06c8-4292-ac0b-13fae5fe1b3b\") " pod="openstack/dnsmasq-dns-94597dfc-f5psj" Mar 12 18:50:28.159106 master-0 kubenswrapper[29097]: I0312 18:50:28.159020 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c8759f5-06c8-4292-ac0b-13fae5fe1b3b-dns-svc\") pod \"dnsmasq-dns-94597dfc-f5psj\" (UID: \"3c8759f5-06c8-4292-ac0b-13fae5fe1b3b\") " pod="openstack/dnsmasq-dns-94597dfc-f5psj" Mar 12 18:50:28.159106 master-0 kubenswrapper[29097]: I0312 18:50:28.159056 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr9f5\" (UniqueName: \"kubernetes.io/projected/3c8759f5-06c8-4292-ac0b-13fae5fe1b3b-kube-api-access-pr9f5\") pod \"dnsmasq-dns-94597dfc-f5psj\" (UID: \"3c8759f5-06c8-4292-ac0b-13fae5fe1b3b\") " pod="openstack/dnsmasq-dns-94597dfc-f5psj" Mar 12 18:50:28.159437 master-0 kubenswrapper[29097]: I0312 
18:50:28.159135 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c8759f5-06c8-4292-ac0b-13fae5fe1b3b-ovsdbserver-nb\") pod \"dnsmasq-dns-94597dfc-f5psj\" (UID: \"3c8759f5-06c8-4292-ac0b-13fae5fe1b3b\") " pod="openstack/dnsmasq-dns-94597dfc-f5psj" Mar 12 18:50:28.159437 master-0 kubenswrapper[29097]: I0312 18:50:28.159155 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c8759f5-06c8-4292-ac0b-13fae5fe1b3b-config\") pod \"dnsmasq-dns-94597dfc-f5psj\" (UID: \"3c8759f5-06c8-4292-ac0b-13fae5fe1b3b\") " pod="openstack/dnsmasq-dns-94597dfc-f5psj" Mar 12 18:50:28.260808 master-0 kubenswrapper[29097]: I0312 18:50:28.260684 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c8759f5-06c8-4292-ac0b-13fae5fe1b3b-dns-svc\") pod \"dnsmasq-dns-94597dfc-f5psj\" (UID: \"3c8759f5-06c8-4292-ac0b-13fae5fe1b3b\") " pod="openstack/dnsmasq-dns-94597dfc-f5psj" Mar 12 18:50:28.260808 master-0 kubenswrapper[29097]: I0312 18:50:28.260754 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr9f5\" (UniqueName: \"kubernetes.io/projected/3c8759f5-06c8-4292-ac0b-13fae5fe1b3b-kube-api-access-pr9f5\") pod \"dnsmasq-dns-94597dfc-f5psj\" (UID: \"3c8759f5-06c8-4292-ac0b-13fae5fe1b3b\") " pod="openstack/dnsmasq-dns-94597dfc-f5psj" Mar 12 18:50:28.261072 master-0 kubenswrapper[29097]: I0312 18:50:28.260841 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c8759f5-06c8-4292-ac0b-13fae5fe1b3b-ovsdbserver-nb\") pod \"dnsmasq-dns-94597dfc-f5psj\" (UID: \"3c8759f5-06c8-4292-ac0b-13fae5fe1b3b\") " pod="openstack/dnsmasq-dns-94597dfc-f5psj" Mar 12 18:50:28.261072 master-0 
kubenswrapper[29097]: I0312 18:50:28.260861 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c8759f5-06c8-4292-ac0b-13fae5fe1b3b-config\") pod \"dnsmasq-dns-94597dfc-f5psj\" (UID: \"3c8759f5-06c8-4292-ac0b-13fae5fe1b3b\") " pod="openstack/dnsmasq-dns-94597dfc-f5psj" Mar 12 18:50:28.261072 master-0 kubenswrapper[29097]: I0312 18:50:28.260914 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c8759f5-06c8-4292-ac0b-13fae5fe1b3b-ovsdbserver-sb\") pod \"dnsmasq-dns-94597dfc-f5psj\" (UID: \"3c8759f5-06c8-4292-ac0b-13fae5fe1b3b\") " pod="openstack/dnsmasq-dns-94597dfc-f5psj" Mar 12 18:50:28.261072 master-0 kubenswrapper[29097]: I0312 18:50:28.260948 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c8759f5-06c8-4292-ac0b-13fae5fe1b3b-dns-swift-storage-0\") pod \"dnsmasq-dns-94597dfc-f5psj\" (UID: \"3c8759f5-06c8-4292-ac0b-13fae5fe1b3b\") " pod="openstack/dnsmasq-dns-94597dfc-f5psj" Mar 12 18:50:28.261752 master-0 kubenswrapper[29097]: I0312 18:50:28.261709 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c8759f5-06c8-4292-ac0b-13fae5fe1b3b-dns-swift-storage-0\") pod \"dnsmasq-dns-94597dfc-f5psj\" (UID: \"3c8759f5-06c8-4292-ac0b-13fae5fe1b3b\") " pod="openstack/dnsmasq-dns-94597dfc-f5psj" Mar 12 18:50:28.262264 master-0 kubenswrapper[29097]: I0312 18:50:28.262229 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c8759f5-06c8-4292-ac0b-13fae5fe1b3b-dns-svc\") pod \"dnsmasq-dns-94597dfc-f5psj\" (UID: \"3c8759f5-06c8-4292-ac0b-13fae5fe1b3b\") " pod="openstack/dnsmasq-dns-94597dfc-f5psj" Mar 12 18:50:28.262783 master-0 kubenswrapper[29097]: I0312 
18:50:28.262746 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c8759f5-06c8-4292-ac0b-13fae5fe1b3b-ovsdbserver-nb\") pod \"dnsmasq-dns-94597dfc-f5psj\" (UID: \"3c8759f5-06c8-4292-ac0b-13fae5fe1b3b\") " pod="openstack/dnsmasq-dns-94597dfc-f5psj" Mar 12 18:50:28.262783 master-0 kubenswrapper[29097]: I0312 18:50:28.262772 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c8759f5-06c8-4292-ac0b-13fae5fe1b3b-config\") pod \"dnsmasq-dns-94597dfc-f5psj\" (UID: \"3c8759f5-06c8-4292-ac0b-13fae5fe1b3b\") " pod="openstack/dnsmasq-dns-94597dfc-f5psj" Mar 12 18:50:28.263415 master-0 kubenswrapper[29097]: I0312 18:50:28.263345 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c8759f5-06c8-4292-ac0b-13fae5fe1b3b-ovsdbserver-sb\") pod \"dnsmasq-dns-94597dfc-f5psj\" (UID: \"3c8759f5-06c8-4292-ac0b-13fae5fe1b3b\") " pod="openstack/dnsmasq-dns-94597dfc-f5psj" Mar 12 18:50:28.279108 master-0 kubenswrapper[29097]: I0312 18:50:28.279054 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr9f5\" (UniqueName: \"kubernetes.io/projected/3c8759f5-06c8-4292-ac0b-13fae5fe1b3b-kube-api-access-pr9f5\") pod \"dnsmasq-dns-94597dfc-f5psj\" (UID: \"3c8759f5-06c8-4292-ac0b-13fae5fe1b3b\") " pod="openstack/dnsmasq-dns-94597dfc-f5psj" Mar 12 18:50:28.320373 master-0 kubenswrapper[29097]: I0312 18:50:28.320302 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:50:28.366409 master-0 kubenswrapper[29097]: I0312 18:50:28.366346 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-94597dfc-f5psj" Mar 12 18:50:28.879550 master-0 kubenswrapper[29097]: I0312 18:50:28.871572 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-94597dfc-f5psj"] Mar 12 18:50:29.719732 master-0 kubenswrapper[29097]: I0312 18:50:29.719679 29097 generic.go:334] "Generic (PLEG): container finished" podID="3c8759f5-06c8-4292-ac0b-13fae5fe1b3b" containerID="62a9472b1fd529b8961c4b6f785f8dae5f1ad4547e6f4d1d397673e0668ff3ef" exitCode=0 Mar 12 18:50:29.720371 master-0 kubenswrapper[29097]: I0312 18:50:29.719840 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-94597dfc-f5psj" event={"ID":"3c8759f5-06c8-4292-ac0b-13fae5fe1b3b","Type":"ContainerDied","Data":"62a9472b1fd529b8961c4b6f785f8dae5f1ad4547e6f4d1d397673e0668ff3ef"} Mar 12 18:50:29.720493 master-0 kubenswrapper[29097]: I0312 18:50:29.720469 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-94597dfc-f5psj" event={"ID":"3c8759f5-06c8-4292-ac0b-13fae5fe1b3b","Type":"ContainerStarted","Data":"692bf112de0a0d6f141adcdca3b9ba751287e5d0686659e4d06e451406c47675"} Mar 12 18:50:30.218420 master-0 kubenswrapper[29097]: I0312 18:50:30.218365 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 18:50:30.218661 master-0 kubenswrapper[29097]: I0312 18:50:30.218600 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e6c93825-221d-45e5-a53b-6d3ca6382898" containerName="nova-api-log" containerID="cri-o://34efb7f55bedff0664f2a36171941f64e101733a19cdb1bccb5fa9eba53da4d2" gracePeriod=30 Mar 12 18:50:30.218959 master-0 kubenswrapper[29097]: I0312 18:50:30.218891 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e6c93825-221d-45e5-a53b-6d3ca6382898" containerName="nova-api-api" 
containerID="cri-o://600f19a7338269927288c15224723daf8bf8502c936d02ab34b25009b3e0ed3b" gracePeriod=30 Mar 12 18:50:30.738917 master-0 kubenswrapper[29097]: I0312 18:50:30.738787 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-94597dfc-f5psj" event={"ID":"3c8759f5-06c8-4292-ac0b-13fae5fe1b3b","Type":"ContainerStarted","Data":"c1780cb498a2021d7c29b4cb932158c196e1fd3692067ae9aabaaa384b942ee1"} Mar 12 18:50:30.740358 master-0 kubenswrapper[29097]: I0312 18:50:30.740176 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-94597dfc-f5psj" Mar 12 18:50:30.747274 master-0 kubenswrapper[29097]: I0312 18:50:30.747201 29097 generic.go:334] "Generic (PLEG): container finished" podID="e6c93825-221d-45e5-a53b-6d3ca6382898" containerID="34efb7f55bedff0664f2a36171941f64e101733a19cdb1bccb5fa9eba53da4d2" exitCode=143 Mar 12 18:50:30.747274 master-0 kubenswrapper[29097]: I0312 18:50:30.747269 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6c93825-221d-45e5-a53b-6d3ca6382898","Type":"ContainerDied","Data":"34efb7f55bedff0664f2a36171941f64e101733a19cdb1bccb5fa9eba53da4d2"} Mar 12 18:50:30.808598 master-0 kubenswrapper[29097]: I0312 18:50:30.808524 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-94597dfc-f5psj" podStartSLOduration=3.808486031 podStartE2EDuration="3.808486031s" podCreationTimestamp="2026-03-12 18:50:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:50:30.798955054 +0000 UTC m=+1270.352935161" watchObservedRunningTime="2026-03-12 18:50:30.808486031 +0000 UTC m=+1270.362466128" Mar 12 18:50:33.320238 master-0 kubenswrapper[29097]: I0312 18:50:33.320184 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:50:33.354800 
master-0 kubenswrapper[29097]: I0312 18:50:33.354591 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:50:33.789826 master-0 kubenswrapper[29097]: I0312 18:50:33.789657 29097 generic.go:334] "Generic (PLEG): container finished" podID="e6c93825-221d-45e5-a53b-6d3ca6382898" containerID="600f19a7338269927288c15224723daf8bf8502c936d02ab34b25009b3e0ed3b" exitCode=0 Mar 12 18:50:33.790391 master-0 kubenswrapper[29097]: I0312 18:50:33.789759 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6c93825-221d-45e5-a53b-6d3ca6382898","Type":"ContainerDied","Data":"600f19a7338269927288c15224723daf8bf8502c936d02ab34b25009b3e0ed3b"} Mar 12 18:50:33.811238 master-0 kubenswrapper[29097]: I0312 18:50:33.811175 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 12 18:50:33.961538 master-0 kubenswrapper[29097]: I0312 18:50:33.961465 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 18:50:34.002061 master-0 kubenswrapper[29097]: I0312 18:50:34.002011 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6c93825-221d-45e5-a53b-6d3ca6382898-logs\") pod \"e6c93825-221d-45e5-a53b-6d3ca6382898\" (UID: \"e6c93825-221d-45e5-a53b-6d3ca6382898\") " Mar 12 18:50:34.002254 master-0 kubenswrapper[29097]: I0312 18:50:34.002076 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv865\" (UniqueName: \"kubernetes.io/projected/e6c93825-221d-45e5-a53b-6d3ca6382898-kube-api-access-tv865\") pod \"e6c93825-221d-45e5-a53b-6d3ca6382898\" (UID: \"e6c93825-221d-45e5-a53b-6d3ca6382898\") " Mar 12 18:50:34.002254 master-0 kubenswrapper[29097]: I0312 18:50:34.002194 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6c93825-221d-45e5-a53b-6d3ca6382898-config-data\") pod \"e6c93825-221d-45e5-a53b-6d3ca6382898\" (UID: \"e6c93825-221d-45e5-a53b-6d3ca6382898\") " Mar 12 18:50:34.002386 master-0 kubenswrapper[29097]: I0312 18:50:34.002363 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6c93825-221d-45e5-a53b-6d3ca6382898-combined-ca-bundle\") pod \"e6c93825-221d-45e5-a53b-6d3ca6382898\" (UID: \"e6c93825-221d-45e5-a53b-6d3ca6382898\") " Mar 12 18:50:34.003871 master-0 kubenswrapper[29097]: I0312 18:50:34.003575 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6c93825-221d-45e5-a53b-6d3ca6382898-logs" (OuterVolumeSpecName: "logs") pod "e6c93825-221d-45e5-a53b-6d3ca6382898" (UID: "e6c93825-221d-45e5-a53b-6d3ca6382898"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:50:34.006226 master-0 kubenswrapper[29097]: I0312 18:50:34.006174 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6c93825-221d-45e5-a53b-6d3ca6382898-kube-api-access-tv865" (OuterVolumeSpecName: "kube-api-access-tv865") pod "e6c93825-221d-45e5-a53b-6d3ca6382898" (UID: "e6c93825-221d-45e5-a53b-6d3ca6382898"). InnerVolumeSpecName "kube-api-access-tv865". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:50:34.064896 master-0 kubenswrapper[29097]: I0312 18:50:34.064634 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6c93825-221d-45e5-a53b-6d3ca6382898-config-data" (OuterVolumeSpecName: "config-data") pod "e6c93825-221d-45e5-a53b-6d3ca6382898" (UID: "e6c93825-221d-45e5-a53b-6d3ca6382898"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:50:34.083705 master-0 kubenswrapper[29097]: I0312 18:50:34.082692 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-68m4m"] Mar 12 18:50:34.083705 master-0 kubenswrapper[29097]: E0312 18:50:34.083453 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6c93825-221d-45e5-a53b-6d3ca6382898" containerName="nova-api-api" Mar 12 18:50:34.083705 master-0 kubenswrapper[29097]: I0312 18:50:34.083468 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6c93825-221d-45e5-a53b-6d3ca6382898" containerName="nova-api-api" Mar 12 18:50:34.083705 master-0 kubenswrapper[29097]: E0312 18:50:34.083487 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6c93825-221d-45e5-a53b-6d3ca6382898" containerName="nova-api-log" Mar 12 18:50:34.083705 master-0 kubenswrapper[29097]: I0312 18:50:34.083495 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6c93825-221d-45e5-a53b-6d3ca6382898" containerName="nova-api-log" Mar 12 
18:50:34.084185 master-0 kubenswrapper[29097]: I0312 18:50:34.084157 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6c93825-221d-45e5-a53b-6d3ca6382898" containerName="nova-api-api" Mar 12 18:50:34.084345 master-0 kubenswrapper[29097]: I0312 18:50:34.084252 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6c93825-221d-45e5-a53b-6d3ca6382898" containerName="nova-api-log" Mar 12 18:50:34.087428 master-0 kubenswrapper[29097]: I0312 18:50:34.086326 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-68m4m" Mar 12 18:50:34.097331 master-0 kubenswrapper[29097]: I0312 18:50:34.094383 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 12 18:50:34.097331 master-0 kubenswrapper[29097]: I0312 18:50:34.094622 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 12 18:50:34.107546 master-0 kubenswrapper[29097]: I0312 18:50:34.107475 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-scripts\") pod \"nova-cell1-cell-mapping-68m4m\" (UID: \"cac30a96-44f3-4fdd-9fe5-e64e5c61686b\") " pod="openstack/nova-cell1-cell-mapping-68m4m" Mar 12 18:50:34.107846 master-0 kubenswrapper[29097]: I0312 18:50:34.107806 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-68m4m\" (UID: \"cac30a96-44f3-4fdd-9fe5-e64e5c61686b\") " pod="openstack/nova-cell1-cell-mapping-68m4m" Mar 12 18:50:34.107955 master-0 kubenswrapper[29097]: I0312 18:50:34.107863 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-config-data\") pod \"nova-cell1-cell-mapping-68m4m\" (UID: \"cac30a96-44f3-4fdd-9fe5-e64e5c61686b\") " pod="openstack/nova-cell1-cell-mapping-68m4m" Mar 12 18:50:34.107955 master-0 kubenswrapper[29097]: I0312 18:50:34.107937 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcsnt\" (UniqueName: \"kubernetes.io/projected/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-kube-api-access-rcsnt\") pod \"nova-cell1-cell-mapping-68m4m\" (UID: \"cac30a96-44f3-4fdd-9fe5-e64e5c61686b\") " pod="openstack/nova-cell1-cell-mapping-68m4m" Mar 12 18:50:34.108150 master-0 kubenswrapper[29097]: I0312 18:50:34.108081 29097 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6c93825-221d-45e5-a53b-6d3ca6382898-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:34.108150 master-0 kubenswrapper[29097]: I0312 18:50:34.108138 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv865\" (UniqueName: \"kubernetes.io/projected/e6c93825-221d-45e5-a53b-6d3ca6382898-kube-api-access-tv865\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:34.108150 master-0 kubenswrapper[29097]: I0312 18:50:34.108149 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6c93825-221d-45e5-a53b-6d3ca6382898-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:34.109658 master-0 kubenswrapper[29097]: I0312 18:50:34.109584 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-68m4m"] Mar 12 18:50:34.129216 master-0 kubenswrapper[29097]: I0312 18:50:34.129181 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-host-discover-rdkpz"] Mar 12 18:50:34.131059 master-0 kubenswrapper[29097]: I0312 18:50:34.131033 29097 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-rdkpz" Mar 12 18:50:34.142625 master-0 kubenswrapper[29097]: I0312 18:50:34.142559 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6c93825-221d-45e5-a53b-6d3ca6382898-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6c93825-221d-45e5-a53b-6d3ca6382898" (UID: "e6c93825-221d-45e5-a53b-6d3ca6382898"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:50:34.152196 master-0 kubenswrapper[29097]: I0312 18:50:34.152144 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-rdkpz"] Mar 12 18:50:34.210991 master-0 kubenswrapper[29097]: I0312 18:50:34.210475 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-scripts\") pod \"nova-cell1-cell-mapping-68m4m\" (UID: \"cac30a96-44f3-4fdd-9fe5-e64e5c61686b\") " pod="openstack/nova-cell1-cell-mapping-68m4m" Mar 12 18:50:34.210991 master-0 kubenswrapper[29097]: I0312 18:50:34.210559 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9f3949a-aa88-40a4-b349-be8fb7106a61-scripts\") pod \"nova-cell1-host-discover-rdkpz\" (UID: \"c9f3949a-aa88-40a4-b349-be8fb7106a61\") " pod="openstack/nova-cell1-host-discover-rdkpz" Mar 12 18:50:34.210991 master-0 kubenswrapper[29097]: I0312 18:50:34.210599 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxfzb\" (UniqueName: \"kubernetes.io/projected/c9f3949a-aa88-40a4-b349-be8fb7106a61-kube-api-access-dxfzb\") pod \"nova-cell1-host-discover-rdkpz\" (UID: \"c9f3949a-aa88-40a4-b349-be8fb7106a61\") " pod="openstack/nova-cell1-host-discover-rdkpz" Mar 12 18:50:34.210991 master-0 
kubenswrapper[29097]: I0312 18:50:34.210650 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f3949a-aa88-40a4-b349-be8fb7106a61-combined-ca-bundle\") pod \"nova-cell1-host-discover-rdkpz\" (UID: \"c9f3949a-aa88-40a4-b349-be8fb7106a61\") " pod="openstack/nova-cell1-host-discover-rdkpz" Mar 12 18:50:34.210991 master-0 kubenswrapper[29097]: I0312 18:50:34.210676 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-68m4m\" (UID: \"cac30a96-44f3-4fdd-9fe5-e64e5c61686b\") " pod="openstack/nova-cell1-cell-mapping-68m4m" Mar 12 18:50:34.210991 master-0 kubenswrapper[29097]: I0312 18:50:34.210694 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-config-data\") pod \"nova-cell1-cell-mapping-68m4m\" (UID: \"cac30a96-44f3-4fdd-9fe5-e64e5c61686b\") " pod="openstack/nova-cell1-cell-mapping-68m4m" Mar 12 18:50:34.210991 master-0 kubenswrapper[29097]: I0312 18:50:34.210724 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcsnt\" (UniqueName: \"kubernetes.io/projected/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-kube-api-access-rcsnt\") pod \"nova-cell1-cell-mapping-68m4m\" (UID: \"cac30a96-44f3-4fdd-9fe5-e64e5c61686b\") " pod="openstack/nova-cell1-cell-mapping-68m4m" Mar 12 18:50:34.210991 master-0 kubenswrapper[29097]: I0312 18:50:34.210779 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9f3949a-aa88-40a4-b349-be8fb7106a61-config-data\") pod \"nova-cell1-host-discover-rdkpz\" (UID: \"c9f3949a-aa88-40a4-b349-be8fb7106a61\") " 
pod="openstack/nova-cell1-host-discover-rdkpz" Mar 12 18:50:34.210991 master-0 kubenswrapper[29097]: I0312 18:50:34.210867 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6c93825-221d-45e5-a53b-6d3ca6382898-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:34.219080 master-0 kubenswrapper[29097]: I0312 18:50:34.214094 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-scripts\") pod \"nova-cell1-cell-mapping-68m4m\" (UID: \"cac30a96-44f3-4fdd-9fe5-e64e5c61686b\") " pod="openstack/nova-cell1-cell-mapping-68m4m" Mar 12 18:50:34.219080 master-0 kubenswrapper[29097]: I0312 18:50:34.214752 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-68m4m\" (UID: \"cac30a96-44f3-4fdd-9fe5-e64e5c61686b\") " pod="openstack/nova-cell1-cell-mapping-68m4m" Mar 12 18:50:34.219080 master-0 kubenswrapper[29097]: I0312 18:50:34.218569 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-config-data\") pod \"nova-cell1-cell-mapping-68m4m\" (UID: \"cac30a96-44f3-4fdd-9fe5-e64e5c61686b\") " pod="openstack/nova-cell1-cell-mapping-68m4m" Mar 12 18:50:34.231649 master-0 kubenswrapper[29097]: I0312 18:50:34.230176 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcsnt\" (UniqueName: \"kubernetes.io/projected/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-kube-api-access-rcsnt\") pod \"nova-cell1-cell-mapping-68m4m\" (UID: \"cac30a96-44f3-4fdd-9fe5-e64e5c61686b\") " pod="openstack/nova-cell1-cell-mapping-68m4m" Mar 12 18:50:34.313338 master-0 kubenswrapper[29097]: I0312 18:50:34.313193 
29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxfzb\" (UniqueName: \"kubernetes.io/projected/c9f3949a-aa88-40a4-b349-be8fb7106a61-kube-api-access-dxfzb\") pod \"nova-cell1-host-discover-rdkpz\" (UID: \"c9f3949a-aa88-40a4-b349-be8fb7106a61\") " pod="openstack/nova-cell1-host-discover-rdkpz" Mar 12 18:50:34.313338 master-0 kubenswrapper[29097]: I0312 18:50:34.313317 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f3949a-aa88-40a4-b349-be8fb7106a61-combined-ca-bundle\") pod \"nova-cell1-host-discover-rdkpz\" (UID: \"c9f3949a-aa88-40a4-b349-be8fb7106a61\") " pod="openstack/nova-cell1-host-discover-rdkpz" Mar 12 18:50:34.313592 master-0 kubenswrapper[29097]: I0312 18:50:34.313545 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9f3949a-aa88-40a4-b349-be8fb7106a61-config-data\") pod \"nova-cell1-host-discover-rdkpz\" (UID: \"c9f3949a-aa88-40a4-b349-be8fb7106a61\") " pod="openstack/nova-cell1-host-discover-rdkpz" Mar 12 18:50:34.314424 master-0 kubenswrapper[29097]: I0312 18:50:34.313675 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9f3949a-aa88-40a4-b349-be8fb7106a61-scripts\") pod \"nova-cell1-host-discover-rdkpz\" (UID: \"c9f3949a-aa88-40a4-b349-be8fb7106a61\") " pod="openstack/nova-cell1-host-discover-rdkpz" Mar 12 18:50:34.316801 master-0 kubenswrapper[29097]: I0312 18:50:34.316708 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f3949a-aa88-40a4-b349-be8fb7106a61-combined-ca-bundle\") pod \"nova-cell1-host-discover-rdkpz\" (UID: \"c9f3949a-aa88-40a4-b349-be8fb7106a61\") " pod="openstack/nova-cell1-host-discover-rdkpz" Mar 12 18:50:34.316801 master-0 
kubenswrapper[29097]: I0312 18:50:34.316740 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9f3949a-aa88-40a4-b349-be8fb7106a61-config-data\") pod \"nova-cell1-host-discover-rdkpz\" (UID: \"c9f3949a-aa88-40a4-b349-be8fb7106a61\") " pod="openstack/nova-cell1-host-discover-rdkpz" Mar 12 18:50:34.317650 master-0 kubenswrapper[29097]: I0312 18:50:34.317619 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9f3949a-aa88-40a4-b349-be8fb7106a61-scripts\") pod \"nova-cell1-host-discover-rdkpz\" (UID: \"c9f3949a-aa88-40a4-b349-be8fb7106a61\") " pod="openstack/nova-cell1-host-discover-rdkpz" Mar 12 18:50:34.329714 master-0 kubenswrapper[29097]: I0312 18:50:34.329677 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxfzb\" (UniqueName: \"kubernetes.io/projected/c9f3949a-aa88-40a4-b349-be8fb7106a61-kube-api-access-dxfzb\") pod \"nova-cell1-host-discover-rdkpz\" (UID: \"c9f3949a-aa88-40a4-b349-be8fb7106a61\") " pod="openstack/nova-cell1-host-discover-rdkpz" Mar 12 18:50:34.446823 master-0 kubenswrapper[29097]: I0312 18:50:34.446769 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-68m4m" Mar 12 18:50:34.473392 master-0 kubenswrapper[29097]: I0312 18:50:34.473327 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-rdkpz" Mar 12 18:50:34.805218 master-0 kubenswrapper[29097]: I0312 18:50:34.805153 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6c93825-221d-45e5-a53b-6d3ca6382898","Type":"ContainerDied","Data":"97c78e1c35f5bcd6126986b1be78cbd3132aae8e8d62297bec9623eaa4e5f5ce"} Mar 12 18:50:34.805582 master-0 kubenswrapper[29097]: I0312 18:50:34.805499 29097 scope.go:117] "RemoveContainer" containerID="600f19a7338269927288c15224723daf8bf8502c936d02ab34b25009b3e0ed3b" Mar 12 18:50:34.805723 master-0 kubenswrapper[29097]: I0312 18:50:34.805190 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 18:50:34.825826 master-0 kubenswrapper[29097]: I0312 18:50:34.825786 29097 scope.go:117] "RemoveContainer" containerID="34efb7f55bedff0664f2a36171941f64e101733a19cdb1bccb5fa9eba53da4d2" Mar 12 18:50:34.874472 master-0 kubenswrapper[29097]: I0312 18:50:34.874263 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 18:50:34.900148 master-0 kubenswrapper[29097]: I0312 18:50:34.900087 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 12 18:50:34.933756 master-0 kubenswrapper[29097]: I0312 18:50:34.933715 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 12 18:50:34.935866 master-0 kubenswrapper[29097]: I0312 18:50:34.935844 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 18:50:34.943611 master-0 kubenswrapper[29097]: I0312 18:50:34.943563 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 12 18:50:34.950889 master-0 kubenswrapper[29097]: I0312 18:50:34.950846 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 12 18:50:34.951466 master-0 kubenswrapper[29097]: I0312 18:50:34.950867 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 12 18:50:34.958793 master-0 kubenswrapper[29097]: I0312 18:50:34.958745 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 18:50:35.034254 master-0 kubenswrapper[29097]: I0312 18:50:35.034178 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-rdkpz"] Mar 12 18:50:35.044331 master-0 kubenswrapper[29097]: I0312 18:50:35.044282 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-logs\") pod \"nova-api-0\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") " pod="openstack/nova-api-0" Mar 12 18:50:35.044462 master-0 kubenswrapper[29097]: I0312 18:50:35.044338 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") " pod="openstack/nova-api-0" Mar 12 18:50:35.044462 master-0 kubenswrapper[29097]: I0312 18:50:35.044363 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rgps\" (UniqueName: \"kubernetes.io/projected/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-kube-api-access-5rgps\") pod 
\"nova-api-0\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") " pod="openstack/nova-api-0" Mar 12 18:50:35.044462 master-0 kubenswrapper[29097]: I0312 18:50:35.044398 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-config-data\") pod \"nova-api-0\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") " pod="openstack/nova-api-0" Mar 12 18:50:35.044462 master-0 kubenswrapper[29097]: I0312 18:50:35.044423 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") " pod="openstack/nova-api-0" Mar 12 18:50:35.044671 master-0 kubenswrapper[29097]: I0312 18:50:35.044546 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-public-tls-certs\") pod \"nova-api-0\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") " pod="openstack/nova-api-0" Mar 12 18:50:35.054306 master-0 kubenswrapper[29097]: I0312 18:50:35.052353 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-68m4m"] Mar 12 18:50:35.147310 master-0 kubenswrapper[29097]: I0312 18:50:35.147267 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-logs\") pod \"nova-api-0\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") " pod="openstack/nova-api-0" Mar 12 18:50:35.147422 master-0 kubenswrapper[29097]: I0312 18:50:35.147317 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") " pod="openstack/nova-api-0" Mar 12 18:50:35.147422 master-0 kubenswrapper[29097]: I0312 18:50:35.147337 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rgps\" (UniqueName: \"kubernetes.io/projected/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-kube-api-access-5rgps\") pod \"nova-api-0\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") " pod="openstack/nova-api-0" Mar 12 18:50:35.147422 master-0 kubenswrapper[29097]: I0312 18:50:35.147372 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-config-data\") pod \"nova-api-0\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") " pod="openstack/nova-api-0" Mar 12 18:50:35.147422 master-0 kubenswrapper[29097]: I0312 18:50:35.147396 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") " pod="openstack/nova-api-0" Mar 12 18:50:35.147579 master-0 kubenswrapper[29097]: I0312 18:50:35.147496 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-public-tls-certs\") pod \"nova-api-0\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") " pod="openstack/nova-api-0" Mar 12 18:50:35.148749 master-0 kubenswrapper[29097]: I0312 18:50:35.148654 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-logs\") pod \"nova-api-0\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") " pod="openstack/nova-api-0" Mar 
12 18:50:35.151416 master-0 kubenswrapper[29097]: I0312 18:50:35.151347 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") " pod="openstack/nova-api-0" Mar 12 18:50:35.151693 master-0 kubenswrapper[29097]: I0312 18:50:35.151657 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-config-data\") pod \"nova-api-0\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") " pod="openstack/nova-api-0" Mar 12 18:50:35.152109 master-0 kubenswrapper[29097]: I0312 18:50:35.152089 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-public-tls-certs\") pod \"nova-api-0\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") " pod="openstack/nova-api-0" Mar 12 18:50:35.152555 master-0 kubenswrapper[29097]: I0312 18:50:35.152502 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") " pod="openstack/nova-api-0" Mar 12 18:50:35.162991 master-0 kubenswrapper[29097]: I0312 18:50:35.162959 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rgps\" (UniqueName: \"kubernetes.io/projected/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-kube-api-access-5rgps\") pod \"nova-api-0\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") " pod="openstack/nova-api-0" Mar 12 18:50:35.255780 master-0 kubenswrapper[29097]: I0312 18:50:35.255711 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 18:50:35.718131 master-0 kubenswrapper[29097]: I0312 18:50:35.717060 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 18:50:35.820725 master-0 kubenswrapper[29097]: I0312 18:50:35.820077 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-rdkpz" event={"ID":"c9f3949a-aa88-40a4-b349-be8fb7106a61","Type":"ContainerStarted","Data":"2cdffd61ec11c7b94148e289655fdf57d39d5dd111ba4e3152b0f160ae1bae48"} Mar 12 18:50:35.820725 master-0 kubenswrapper[29097]: I0312 18:50:35.820127 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-rdkpz" event={"ID":"c9f3949a-aa88-40a4-b349-be8fb7106a61","Type":"ContainerStarted","Data":"b02d8f86703b7de13bf2f038d9c90800e94353924ebff323258788657951a37c"} Mar 12 18:50:35.826990 master-0 kubenswrapper[29097]: I0312 18:50:35.826944 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8","Type":"ContainerStarted","Data":"c527aa67113745d7d3bd9ddf2cc57096fb687c06e4dabd719b77f1ac5f1760ad"} Mar 12 18:50:35.835198 master-0 kubenswrapper[29097]: I0312 18:50:35.835136 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-68m4m" event={"ID":"cac30a96-44f3-4fdd-9fe5-e64e5c61686b","Type":"ContainerStarted","Data":"1f24c03965adcebf9718c0c246d2da8f0801382b7f4e7c16bf9311f8e3079618"} Mar 12 18:50:35.835388 master-0 kubenswrapper[29097]: I0312 18:50:35.835206 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-68m4m" event={"ID":"cac30a96-44f3-4fdd-9fe5-e64e5c61686b","Type":"ContainerStarted","Data":"65393fef93007f8213ee3c8c5ceddf50a8d580f64e668ddb3e7e908e87142729"} Mar 12 18:50:35.846701 master-0 kubenswrapper[29097]: I0312 18:50:35.846626 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-host-discover-rdkpz" podStartSLOduration=1.846588116 podStartE2EDuration="1.846588116s" podCreationTimestamp="2026-03-12 18:50:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:50:35.845965151 +0000 UTC m=+1275.399945248" watchObservedRunningTime="2026-03-12 18:50:35.846588116 +0000 UTC m=+1275.400568223"
Mar 12 18:50:35.873105 master-0 kubenswrapper[29097]: I0312 18:50:35.873025 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-68m4m" podStartSLOduration=1.873005355 podStartE2EDuration="1.873005355s" podCreationTimestamp="2026-03-12 18:50:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:50:35.871013517 +0000 UTC m=+1275.424993624" watchObservedRunningTime="2026-03-12 18:50:35.873005355 +0000 UTC m=+1275.426985452"
Mar 12 18:50:36.742178 master-0 kubenswrapper[29097]: I0312 18:50:36.741583 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6c93825-221d-45e5-a53b-6d3ca6382898" path="/var/lib/kubelet/pods/e6c93825-221d-45e5-a53b-6d3ca6382898/volumes"
Mar 12 18:50:36.852358 master-0 kubenswrapper[29097]: I0312 18:50:36.852306 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8","Type":"ContainerStarted","Data":"cfdfc1699d0c84b348d967eb213bb29cb059a73b150d621d0bab7fb6c34eeccc"}
Mar 12 18:50:36.852358 master-0 kubenswrapper[29097]: I0312 18:50:36.852363 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8","Type":"ContainerStarted","Data":"34c8ea110cf9729c23a084a67054819103d4947b40972c599f7ef338dc4c410f"}
Mar 12 18:50:36.897147 master-0 kubenswrapper[29097]: I0312 18:50:36.896576 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.896553737 podStartE2EDuration="2.896553737s" podCreationTimestamp="2026-03-12 18:50:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:50:36.879836379 +0000 UTC m=+1276.433816496" watchObservedRunningTime="2026-03-12 18:50:36.896553737 +0000 UTC m=+1276.450533834"
Mar 12 18:50:37.866020 master-0 kubenswrapper[29097]: I0312 18:50:37.864782 29097 generic.go:334] "Generic (PLEG): container finished" podID="c9f3949a-aa88-40a4-b349-be8fb7106a61" containerID="2cdffd61ec11c7b94148e289655fdf57d39d5dd111ba4e3152b0f160ae1bae48" exitCode=0
Mar 12 18:50:37.866020 master-0 kubenswrapper[29097]: I0312 18:50:37.865820 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-rdkpz" event={"ID":"c9f3949a-aa88-40a4-b349-be8fb7106a61","Type":"ContainerDied","Data":"2cdffd61ec11c7b94148e289655fdf57d39d5dd111ba4e3152b0f160ae1bae48"}
Mar 12 18:50:38.369042 master-0 kubenswrapper[29097]: I0312 18:50:38.368954 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-94597dfc-f5psj"
Mar 12 18:50:38.477944 master-0 kubenswrapper[29097]: I0312 18:50:38.477393 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58d8bd468f-79rnx"]
Mar 12 18:50:38.477944 master-0 kubenswrapper[29097]: I0312 18:50:38.477897 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58d8bd468f-79rnx" podUID="8363492c-c44f-48da-b2a5-b6f83718f64e" containerName="dnsmasq-dns" containerID="cri-o://5a257d067e60f425b0627f1e96dd4950d73102a1f6be1387bfdc014b23faa944" gracePeriod=10
Mar 12 18:50:38.899236 master-0 kubenswrapper[29097]: I0312 18:50:38.899065 29097 generic.go:334] "Generic (PLEG): container finished" podID="8363492c-c44f-48da-b2a5-b6f83718f64e" containerID="5a257d067e60f425b0627f1e96dd4950d73102a1f6be1387bfdc014b23faa944" exitCode=0
Mar 12 18:50:38.900000 master-0 kubenswrapper[29097]: I0312 18:50:38.899262 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58d8bd468f-79rnx" event={"ID":"8363492c-c44f-48da-b2a5-b6f83718f64e","Type":"ContainerDied","Data":"5a257d067e60f425b0627f1e96dd4950d73102a1f6be1387bfdc014b23faa944"}
Mar 12 18:50:39.257769 master-0 kubenswrapper[29097]: I0312 18:50:39.257725 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:50:39.276713 master-0 kubenswrapper[29097]: I0312 18:50:39.274168 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-ovsdbserver-sb\") pod \"8363492c-c44f-48da-b2a5-b6f83718f64e\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") "
Mar 12 18:50:39.276713 master-0 kubenswrapper[29097]: I0312 18:50:39.274304 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xtb7\" (UniqueName: \"kubernetes.io/projected/8363492c-c44f-48da-b2a5-b6f83718f64e-kube-api-access-5xtb7\") pod \"8363492c-c44f-48da-b2a5-b6f83718f64e\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") "
Mar 12 18:50:39.276713 master-0 kubenswrapper[29097]: I0312 18:50:39.274401 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-config\") pod \"8363492c-c44f-48da-b2a5-b6f83718f64e\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") "
Mar 12 18:50:39.276713 master-0 kubenswrapper[29097]: I0312 18:50:39.274427 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-dns-svc\") pod \"8363492c-c44f-48da-b2a5-b6f83718f64e\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") "
Mar 12 18:50:39.276713 master-0 kubenswrapper[29097]: I0312 18:50:39.274489 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-dns-swift-storage-0\") pod \"8363492c-c44f-48da-b2a5-b6f83718f64e\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") "
Mar 12 18:50:39.276713 master-0 kubenswrapper[29097]: I0312 18:50:39.274544 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-ovsdbserver-nb\") pod \"8363492c-c44f-48da-b2a5-b6f83718f64e\" (UID: \"8363492c-c44f-48da-b2a5-b6f83718f64e\") "
Mar 12 18:50:39.291850 master-0 kubenswrapper[29097]: I0312 18:50:39.291745 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8363492c-c44f-48da-b2a5-b6f83718f64e-kube-api-access-5xtb7" (OuterVolumeSpecName: "kube-api-access-5xtb7") pod "8363492c-c44f-48da-b2a5-b6f83718f64e" (UID: "8363492c-c44f-48da-b2a5-b6f83718f64e"). InnerVolumeSpecName "kube-api-access-5xtb7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:50:39.354602 master-0 kubenswrapper[29097]: I0312 18:50:39.348038 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8363492c-c44f-48da-b2a5-b6f83718f64e" (UID: "8363492c-c44f-48da-b2a5-b6f83718f64e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:50:39.366160 master-0 kubenswrapper[29097]: I0312 18:50:39.366096 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8363492c-c44f-48da-b2a5-b6f83718f64e" (UID: "8363492c-c44f-48da-b2a5-b6f83718f64e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:50:39.381493 master-0 kubenswrapper[29097]: I0312 18:50:39.381319 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-config" (OuterVolumeSpecName: "config") pod "8363492c-c44f-48da-b2a5-b6f83718f64e" (UID: "8363492c-c44f-48da-b2a5-b6f83718f64e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:50:39.384142 master-0 kubenswrapper[29097]: I0312 18:50:39.383929 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xtb7\" (UniqueName: \"kubernetes.io/projected/8363492c-c44f-48da-b2a5-b6f83718f64e-kube-api-access-5xtb7\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:39.384142 master-0 kubenswrapper[29097]: I0312 18:50:39.383962 29097 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-config\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:39.384142 master-0 kubenswrapper[29097]: I0312 18:50:39.383971 29097 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:39.384142 master-0 kubenswrapper[29097]: I0312 18:50:39.383979 29097 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:39.397353 master-0 kubenswrapper[29097]: I0312 18:50:39.397276 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8363492c-c44f-48da-b2a5-b6f83718f64e" (UID: "8363492c-c44f-48da-b2a5-b6f83718f64e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:50:39.442071 master-0 kubenswrapper[29097]: I0312 18:50:39.441717 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8363492c-c44f-48da-b2a5-b6f83718f64e" (UID: "8363492c-c44f-48da-b2a5-b6f83718f64e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 18:50:39.451673 master-0 kubenswrapper[29097]: I0312 18:50:39.451629 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-rdkpz"
Mar 12 18:50:39.492115 master-0 kubenswrapper[29097]: I0312 18:50:39.484965 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxfzb\" (UniqueName: \"kubernetes.io/projected/c9f3949a-aa88-40a4-b349-be8fb7106a61-kube-api-access-dxfzb\") pod \"c9f3949a-aa88-40a4-b349-be8fb7106a61\" (UID: \"c9f3949a-aa88-40a4-b349-be8fb7106a61\") "
Mar 12 18:50:39.492115 master-0 kubenswrapper[29097]: I0312 18:50:39.485202 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9f3949a-aa88-40a4-b349-be8fb7106a61-config-data\") pod \"c9f3949a-aa88-40a4-b349-be8fb7106a61\" (UID: \"c9f3949a-aa88-40a4-b349-be8fb7106a61\") "
Mar 12 18:50:39.492115 master-0 kubenswrapper[29097]: I0312 18:50:39.485278 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9f3949a-aa88-40a4-b349-be8fb7106a61-scripts\") pod \"c9f3949a-aa88-40a4-b349-be8fb7106a61\" (UID: \"c9f3949a-aa88-40a4-b349-be8fb7106a61\") "
Mar 12 18:50:39.492115 master-0 kubenswrapper[29097]: I0312 18:50:39.485479 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f3949a-aa88-40a4-b349-be8fb7106a61-combined-ca-bundle\") pod \"c9f3949a-aa88-40a4-b349-be8fb7106a61\" (UID: \"c9f3949a-aa88-40a4-b349-be8fb7106a61\") "
Mar 12 18:50:39.492115 master-0 kubenswrapper[29097]: I0312 18:50:39.486068 29097 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:39.492115 master-0 kubenswrapper[29097]: I0312 18:50:39.486083 29097 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8363492c-c44f-48da-b2a5-b6f83718f64e-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:39.492115 master-0 kubenswrapper[29097]: I0312 18:50:39.490928 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9f3949a-aa88-40a4-b349-be8fb7106a61-kube-api-access-dxfzb" (OuterVolumeSpecName: "kube-api-access-dxfzb") pod "c9f3949a-aa88-40a4-b349-be8fb7106a61" (UID: "c9f3949a-aa88-40a4-b349-be8fb7106a61"). InnerVolumeSpecName "kube-api-access-dxfzb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:50:39.492839 master-0 kubenswrapper[29097]: I0312 18:50:39.492711 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9f3949a-aa88-40a4-b349-be8fb7106a61-scripts" (OuterVolumeSpecName: "scripts") pod "c9f3949a-aa88-40a4-b349-be8fb7106a61" (UID: "c9f3949a-aa88-40a4-b349-be8fb7106a61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:50:39.515477 master-0 kubenswrapper[29097]: I0312 18:50:39.515339 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9f3949a-aa88-40a4-b349-be8fb7106a61-config-data" (OuterVolumeSpecName: "config-data") pod "c9f3949a-aa88-40a4-b349-be8fb7106a61" (UID: "c9f3949a-aa88-40a4-b349-be8fb7106a61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:50:39.517246 master-0 kubenswrapper[29097]: I0312 18:50:39.517199 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9f3949a-aa88-40a4-b349-be8fb7106a61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9f3949a-aa88-40a4-b349-be8fb7106a61" (UID: "c9f3949a-aa88-40a4-b349-be8fb7106a61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:50:39.587615 master-0 kubenswrapper[29097]: I0312 18:50:39.587561 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9f3949a-aa88-40a4-b349-be8fb7106a61-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:39.587615 master-0 kubenswrapper[29097]: I0312 18:50:39.587606 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f3949a-aa88-40a4-b349-be8fb7106a61-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:39.587615 master-0 kubenswrapper[29097]: I0312 18:50:39.587620 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxfzb\" (UniqueName: \"kubernetes.io/projected/c9f3949a-aa88-40a4-b349-be8fb7106a61-kube-api-access-dxfzb\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:39.587615 master-0 kubenswrapper[29097]: I0312 18:50:39.587629 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9f3949a-aa88-40a4-b349-be8fb7106a61-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:39.932703 master-0 kubenswrapper[29097]: I0312 18:50:39.932150 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-rdkpz" event={"ID":"c9f3949a-aa88-40a4-b349-be8fb7106a61","Type":"ContainerDied","Data":"b02d8f86703b7de13bf2f038d9c90800e94353924ebff323258788657951a37c"}
Mar 12 18:50:39.932703 master-0 kubenswrapper[29097]: I0312 18:50:39.932201 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b02d8f86703b7de13bf2f038d9c90800e94353924ebff323258788657951a37c"
Mar 12 18:50:39.932703 master-0 kubenswrapper[29097]: I0312 18:50:39.932278 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-rdkpz"
Mar 12 18:50:39.941910 master-0 kubenswrapper[29097]: I0312 18:50:39.939903 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58d8bd468f-79rnx" event={"ID":"8363492c-c44f-48da-b2a5-b6f83718f64e","Type":"ContainerDied","Data":"0735e7e9986ae7f8e4906d26943e15746dbdb04e601999c71c39b1c12aa77e34"}
Mar 12 18:50:39.941910 master-0 kubenswrapper[29097]: I0312 18:50:39.940559 29097 scope.go:117] "RemoveContainer" containerID="5a257d067e60f425b0627f1e96dd4950d73102a1f6be1387bfdc014b23faa944"
Mar 12 18:50:39.941910 master-0 kubenswrapper[29097]: I0312 18:50:39.940861 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58d8bd468f-79rnx"
Mar 12 18:50:40.015704 master-0 kubenswrapper[29097]: I0312 18:50:40.013669 29097 scope.go:117] "RemoveContainer" containerID="98725e3abc3cd28e83a621ea318ea7db27140b699523eb769d81a2f46112eb93"
Mar 12 18:50:40.699575 master-0 kubenswrapper[29097]: I0312 18:50:40.699502 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58d8bd468f-79rnx"]
Mar 12 18:50:41.000226 master-0 kubenswrapper[29097]: I0312 18:50:40.999549 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58d8bd468f-79rnx"]
Mar 12 18:50:41.999683 master-0 kubenswrapper[29097]: I0312 18:50:41.999630 29097 generic.go:334] "Generic (PLEG): container finished" podID="cac30a96-44f3-4fdd-9fe5-e64e5c61686b" containerID="1f24c03965adcebf9718c0c246d2da8f0801382b7f4e7c16bf9311f8e3079618" exitCode=0
Mar 12 18:50:41.999683 master-0 kubenswrapper[29097]: I0312 18:50:41.999679 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-68m4m" event={"ID":"cac30a96-44f3-4fdd-9fe5-e64e5c61686b","Type":"ContainerDied","Data":"1f24c03965adcebf9718c0c246d2da8f0801382b7f4e7c16bf9311f8e3079618"}
Mar 12 18:50:42.734119 master-0 kubenswrapper[29097]: I0312 18:50:42.733745 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8363492c-c44f-48da-b2a5-b6f83718f64e" path="/var/lib/kubelet/pods/8363492c-c44f-48da-b2a5-b6f83718f64e/volumes"
Mar 12 18:50:43.487096 master-0 kubenswrapper[29097]: I0312 18:50:43.487052 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-68m4m"
Mar 12 18:50:43.503318 master-0 kubenswrapper[29097]: I0312 18:50:43.500761 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-combined-ca-bundle\") pod \"cac30a96-44f3-4fdd-9fe5-e64e5c61686b\" (UID: \"cac30a96-44f3-4fdd-9fe5-e64e5c61686b\") "
Mar 12 18:50:43.503318 master-0 kubenswrapper[29097]: I0312 18:50:43.500969 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-scripts\") pod \"cac30a96-44f3-4fdd-9fe5-e64e5c61686b\" (UID: \"cac30a96-44f3-4fdd-9fe5-e64e5c61686b\") "
Mar 12 18:50:43.503318 master-0 kubenswrapper[29097]: I0312 18:50:43.501158 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-config-data\") pod \"cac30a96-44f3-4fdd-9fe5-e64e5c61686b\" (UID: \"cac30a96-44f3-4fdd-9fe5-e64e5c61686b\") "
Mar 12 18:50:43.503318 master-0 kubenswrapper[29097]: I0312 18:50:43.501366 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcsnt\" (UniqueName: \"kubernetes.io/projected/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-kube-api-access-rcsnt\") pod \"cac30a96-44f3-4fdd-9fe5-e64e5c61686b\" (UID: \"cac30a96-44f3-4fdd-9fe5-e64e5c61686b\") "
Mar 12 18:50:43.508820 master-0 kubenswrapper[29097]: I0312 18:50:43.508714 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-kube-api-access-rcsnt" (OuterVolumeSpecName: "kube-api-access-rcsnt") pod "cac30a96-44f3-4fdd-9fe5-e64e5c61686b" (UID: "cac30a96-44f3-4fdd-9fe5-e64e5c61686b"). InnerVolumeSpecName "kube-api-access-rcsnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:50:43.509869 master-0 kubenswrapper[29097]: I0312 18:50:43.509817 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-scripts" (OuterVolumeSpecName: "scripts") pod "cac30a96-44f3-4fdd-9fe5-e64e5c61686b" (UID: "cac30a96-44f3-4fdd-9fe5-e64e5c61686b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:50:43.547813 master-0 kubenswrapper[29097]: I0312 18:50:43.547463 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-config-data" (OuterVolumeSpecName: "config-data") pod "cac30a96-44f3-4fdd-9fe5-e64e5c61686b" (UID: "cac30a96-44f3-4fdd-9fe5-e64e5c61686b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:50:43.589957 master-0 kubenswrapper[29097]: I0312 18:50:43.588366 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cac30a96-44f3-4fdd-9fe5-e64e5c61686b" (UID: "cac30a96-44f3-4fdd-9fe5-e64e5c61686b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:50:43.604203 master-0 kubenswrapper[29097]: I0312 18:50:43.604145 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:43.604203 master-0 kubenswrapper[29097]: I0312 18:50:43.604195 29097 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:43.604203 master-0 kubenswrapper[29097]: I0312 18:50:43.604211 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:43.604429 master-0 kubenswrapper[29097]: I0312 18:50:43.604225 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcsnt\" (UniqueName: \"kubernetes.io/projected/cac30a96-44f3-4fdd-9fe5-e64e5c61686b-kube-api-access-rcsnt\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:44.068662 master-0 kubenswrapper[29097]: I0312 18:50:44.063808 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-68m4m" event={"ID":"cac30a96-44f3-4fdd-9fe5-e64e5c61686b","Type":"ContainerDied","Data":"65393fef93007f8213ee3c8c5ceddf50a8d580f64e668ddb3e7e908e87142729"}
Mar 12 18:50:44.068662 master-0 kubenswrapper[29097]: I0312 18:50:44.063862 29097 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65393fef93007f8213ee3c8c5ceddf50a8d580f64e668ddb3e7e908e87142729"
Mar 12 18:50:44.068662 master-0 kubenswrapper[29097]: I0312 18:50:44.063938 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-68m4m"
Mar 12 18:50:44.244759 master-0 kubenswrapper[29097]: I0312 18:50:44.244682 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 12 18:50:44.245088 master-0 kubenswrapper[29097]: I0312 18:50:44.245043 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1a93fe07-cdf6-4c77-9282-eb9e1b7909c8" containerName="nova-api-log" containerID="cri-o://34c8ea110cf9729c23a084a67054819103d4947b40972c599f7ef338dc4c410f" gracePeriod=30
Mar 12 18:50:44.247319 master-0 kubenswrapper[29097]: I0312 18:50:44.245770 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1a93fe07-cdf6-4c77-9282-eb9e1b7909c8" containerName="nova-api-api" containerID="cri-o://cfdfc1699d0c84b348d967eb213bb29cb059a73b150d621d0bab7fb6c34eeccc" gracePeriod=30
Mar 12 18:50:44.321506 master-0 kubenswrapper[29097]: I0312 18:50:44.313277 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 12 18:50:44.321506 master-0 kubenswrapper[29097]: I0312 18:50:44.313653 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2e9566f3-47c9-4d4a-b503-ac3374cc7ecd" containerName="nova-scheduler-scheduler" containerID="cri-o://8dc638aaf2d22a3e5e2a2537c8cfff8ed7a45b0ce2a06c3fc72e55e8782bb11d" gracePeriod=30
Mar 12 18:50:44.350559 master-0 kubenswrapper[29097]: I0312 18:50:44.347947 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 18:50:44.350559 master-0 kubenswrapper[29097]: I0312 18:50:44.348222 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0" containerName="nova-metadata-log" containerID="cri-o://e68f61115681f205d51167cd68aca6bd143be1ae0ba9d6d3e9cb0339d38f4ed6" gracePeriod=30
Mar 12 18:50:44.350559 master-0 kubenswrapper[29097]: I0312 18:50:44.348286 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0" containerName="nova-metadata-metadata" containerID="cri-o://2bbc4f55bb9f69e9a349973ceefb5c27f4ff3d66fed89e9520e33e5785c30b01" gracePeriod=30
Mar 12 18:50:45.080673 master-0 kubenswrapper[29097]: I0312 18:50:45.080610 29097 generic.go:334] "Generic (PLEG): container finished" podID="7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0" containerID="e68f61115681f205d51167cd68aca6bd143be1ae0ba9d6d3e9cb0339d38f4ed6" exitCode=143
Mar 12 18:50:45.081201 master-0 kubenswrapper[29097]: I0312 18:50:45.080717 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0","Type":"ContainerDied","Data":"e68f61115681f205d51167cd68aca6bd143be1ae0ba9d6d3e9cb0339d38f4ed6"}
Mar 12 18:50:45.084090 master-0 kubenswrapper[29097]: I0312 18:50:45.084054 29097 generic.go:334] "Generic (PLEG): container finished" podID="1a93fe07-cdf6-4c77-9282-eb9e1b7909c8" containerID="cfdfc1699d0c84b348d967eb213bb29cb059a73b150d621d0bab7fb6c34eeccc" exitCode=0
Mar 12 18:50:45.084090 master-0 kubenswrapper[29097]: I0312 18:50:45.084081 29097 generic.go:334] "Generic (PLEG): container finished" podID="1a93fe07-cdf6-4c77-9282-eb9e1b7909c8" containerID="34c8ea110cf9729c23a084a67054819103d4947b40972c599f7ef338dc4c410f" exitCode=143
Mar 12 18:50:45.084200 master-0 kubenswrapper[29097]: I0312 18:50:45.084100 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8","Type":"ContainerDied","Data":"cfdfc1699d0c84b348d967eb213bb29cb059a73b150d621d0bab7fb6c34eeccc"}
Mar 12 18:50:45.084200 master-0 kubenswrapper[29097]: I0312 18:50:45.084138 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8","Type":"ContainerDied","Data":"34c8ea110cf9729c23a084a67054819103d4947b40972c599f7ef338dc4c410f"}
Mar 12 18:50:45.162715 master-0 kubenswrapper[29097]: I0312 18:50:45.162659 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 12 18:50:45.280310 master-0 kubenswrapper[29097]: I0312 18:50:45.280261 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-internal-tls-certs\") pod \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") "
Mar 12 18:50:45.280622 master-0 kubenswrapper[29097]: I0312 18:50:45.280602 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rgps\" (UniqueName: \"kubernetes.io/projected/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-kube-api-access-5rgps\") pod \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") "
Mar 12 18:50:45.280919 master-0 kubenswrapper[29097]: I0312 18:50:45.280903 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-public-tls-certs\") pod \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") "
Mar 12 18:50:45.281172 master-0 kubenswrapper[29097]: I0312 18:50:45.281159 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-logs\") pod \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") "
Mar 12 18:50:45.281376 master-0 kubenswrapper[29097]: I0312 18:50:45.281347 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-combined-ca-bundle\") pod \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") "
Mar 12 18:50:45.281523 master-0 kubenswrapper[29097]: I0312 18:50:45.281497 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-config-data\") pod \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\" (UID: \"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8\") "
Mar 12 18:50:45.282260 master-0 kubenswrapper[29097]: I0312 18:50:45.282210 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-logs" (OuterVolumeSpecName: "logs") pod "1a93fe07-cdf6-4c77-9282-eb9e1b7909c8" (UID: "1a93fe07-cdf6-4c77-9282-eb9e1b7909c8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 18:50:45.283297 master-0 kubenswrapper[29097]: I0312 18:50:45.283253 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-kube-api-access-5rgps" (OuterVolumeSpecName: "kube-api-access-5rgps") pod "1a93fe07-cdf6-4c77-9282-eb9e1b7909c8" (UID: "1a93fe07-cdf6-4c77-9282-eb9e1b7909c8"). InnerVolumeSpecName "kube-api-access-5rgps". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 18:50:45.314664 master-0 kubenswrapper[29097]: I0312 18:50:45.308820 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-config-data" (OuterVolumeSpecName: "config-data") pod "1a93fe07-cdf6-4c77-9282-eb9e1b7909c8" (UID: "1a93fe07-cdf6-4c77-9282-eb9e1b7909c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:50:45.316727 master-0 kubenswrapper[29097]: I0312 18:50:45.316682 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a93fe07-cdf6-4c77-9282-eb9e1b7909c8" (UID: "1a93fe07-cdf6-4c77-9282-eb9e1b7909c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:50:45.346723 master-0 kubenswrapper[29097]: I0312 18:50:45.346612 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1a93fe07-cdf6-4c77-9282-eb9e1b7909c8" (UID: "1a93fe07-cdf6-4c77-9282-eb9e1b7909c8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:50:45.351750 master-0 kubenswrapper[29097]: I0312 18:50:45.351707 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1a93fe07-cdf6-4c77-9282-eb9e1b7909c8" (UID: "1a93fe07-cdf6-4c77-9282-eb9e1b7909c8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 18:50:45.384405 master-0 kubenswrapper[29097]: I0312 18:50:45.384358 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:45.384405 master-0 kubenswrapper[29097]: I0312 18:50:45.384401 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:45.384405 master-0 kubenswrapper[29097]: I0312 18:50:45.384411 29097 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:45.384641 master-0 kubenswrapper[29097]: I0312 18:50:45.384423 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rgps\" (UniqueName: \"kubernetes.io/projected/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-kube-api-access-5rgps\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:45.384641 master-0 kubenswrapper[29097]: I0312 18:50:45.384434 29097 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:45.384641 master-0 kubenswrapper[29097]: I0312 18:50:45.384443 29097 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8-logs\") on node \"master-0\" DevicePath \"\""
Mar 12 18:50:46.096623 master-0 kubenswrapper[29097]: I0312 18:50:46.096553 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1a93fe07-cdf6-4c77-9282-eb9e1b7909c8","Type":"ContainerDied","Data":"c527aa67113745d7d3bd9ddf2cc57096fb687c06e4dabd719b77f1ac5f1760ad"}
Mar 12 18:50:46.097175 master-0 kubenswrapper[29097]: I0312 18:50:46.096644 29097 scope.go:117] "RemoveContainer" containerID="cfdfc1699d0c84b348d967eb213bb29cb059a73b150d621d0bab7fb6c34eeccc"
Mar 12 18:50:46.097175 master-0 kubenswrapper[29097]: I0312 18:50:46.096579 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 12 18:50:46.134376 master-0 kubenswrapper[29097]: I0312 18:50:46.132140 29097 scope.go:117] "RemoveContainer" containerID="34c8ea110cf9729c23a084a67054819103d4947b40972c599f7ef338dc4c410f"
Mar 12 18:50:46.164304 master-0 kubenswrapper[29097]: I0312 18:50:46.164238 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 12 18:50:46.230552 master-0 kubenswrapper[29097]: I0312 18:50:46.227413 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Mar 12 18:50:46.263704 master-0 kubenswrapper[29097]: I0312 18:50:46.261971 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 12 18:50:46.263704 master-0 kubenswrapper[29097]: E0312 18:50:46.262685 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a93fe07-cdf6-4c77-9282-eb9e1b7909c8" containerName="nova-api-log"
Mar 12 18:50:46.263704 master-0 kubenswrapper[29097]: I0312 18:50:46.262701 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a93fe07-cdf6-4c77-9282-eb9e1b7909c8" containerName="nova-api-log"
Mar 12 18:50:46.263704 master-0 kubenswrapper[29097]: E0312 18:50:46.262756 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9f3949a-aa88-40a4-b349-be8fb7106a61" containerName="nova-manage"
Mar 12 18:50:46.263704 master-0 kubenswrapper[29097]: I0312 18:50:46.262763 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9f3949a-aa88-40a4-b349-be8fb7106a61" containerName="nova-manage"
Mar 12 18:50:46.263704 master-0 kubenswrapper[29097]: E0312 18:50:46.262783 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a93fe07-cdf6-4c77-9282-eb9e1b7909c8" containerName="nova-api-api"
Mar 12 18:50:46.263704 master-0 kubenswrapper[29097]: I0312 18:50:46.262791 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a93fe07-cdf6-4c77-9282-eb9e1b7909c8" containerName="nova-api-api"
Mar 12 18:50:46.263704 master-0 kubenswrapper[29097]: E0312 18:50:46.262821 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cac30a96-44f3-4fdd-9fe5-e64e5c61686b" containerName="nova-manage"
Mar 12 18:50:46.263704 master-0 kubenswrapper[29097]: I0312 18:50:46.262827 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="cac30a96-44f3-4fdd-9fe5-e64e5c61686b" containerName="nova-manage"
Mar 12 18:50:46.263704 master-0 kubenswrapper[29097]: E0312 18:50:46.262837 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8363492c-c44f-48da-b2a5-b6f83718f64e" containerName="init"
Mar 12 18:50:46.263704 master-0 kubenswrapper[29097]: I0312 18:50:46.262843 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="8363492c-c44f-48da-b2a5-b6f83718f64e" containerName="init"
Mar 12 18:50:46.263704 master-0 kubenswrapper[29097]: E0312 18:50:46.262855 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8363492c-c44f-48da-b2a5-b6f83718f64e" containerName="dnsmasq-dns"
Mar 12 18:50:46.263704 master-0 kubenswrapper[29097]: I0312 18:50:46.262862 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="8363492c-c44f-48da-b2a5-b6f83718f64e" containerName="dnsmasq-dns"
Mar 12 18:50:46.263704 master-0 kubenswrapper[29097]: I0312 18:50:46.263192 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9f3949a-aa88-40a4-b349-be8fb7106a61" containerName="nova-manage"
Mar 12 18:50:46.263704 master-0 kubenswrapper[29097]: I0312 18:50:46.263213 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a93fe07-cdf6-4c77-9282-eb9e1b7909c8" containerName="nova-api-log"
Mar 12 18:50:46.263704 master-0 kubenswrapper[29097]: I0312 18:50:46.263233 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="8363492c-c44f-48da-b2a5-b6f83718f64e" containerName="dnsmasq-dns"
Mar 12 18:50:46.263704 master-0 kubenswrapper[29097]: I0312 18:50:46.263250 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a93fe07-cdf6-4c77-9282-eb9e1b7909c8" containerName="nova-api-api"
Mar 12 18:50:46.263704 master-0 kubenswrapper[29097]: I0312 18:50:46.263271 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="cac30a96-44f3-4fdd-9fe5-e64e5c61686b" containerName="nova-manage"
Mar 12 18:50:46.264660 master-0 kubenswrapper[29097]: I0312 18:50:46.264639 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 12 18:50:46.267318 master-0 kubenswrapper[29097]: I0312 18:50:46.267269 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Mar 12 18:50:46.267432 master-0 kubenswrapper[29097]: I0312 18:50:46.267344 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Mar 12 18:50:46.274868 master-0 kubenswrapper[29097]: I0312 18:50:46.273474 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 12 18:50:46.276662 master-0 kubenswrapper[29097]: I0312 18:50:46.276608 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 12 18:50:46.321267 master-0 kubenswrapper[29097]: I0312 18:50:46.321203 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc6da5f0-599f-4be3-a369-5e4c452e1e8d-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"bc6da5f0-599f-4be3-a369-5e4c452e1e8d\") " pod="openstack/nova-api-0" Mar 12 18:50:46.322645 master-0 kubenswrapper[29097]: I0312 18:50:46.321857 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfg7d\" (UniqueName: \"kubernetes.io/projected/bc6da5f0-599f-4be3-a369-5e4c452e1e8d-kube-api-access-zfg7d\") pod \"nova-api-0\" (UID: \"bc6da5f0-599f-4be3-a369-5e4c452e1e8d\") " pod="openstack/nova-api-0" Mar 12 18:50:46.322645 master-0 kubenswrapper[29097]: I0312 18:50:46.321940 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc6da5f0-599f-4be3-a369-5e4c452e1e8d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bc6da5f0-599f-4be3-a369-5e4c452e1e8d\") " pod="openstack/nova-api-0" Mar 12 18:50:46.322645 master-0 kubenswrapper[29097]: I0312 18:50:46.322119 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc6da5f0-599f-4be3-a369-5e4c452e1e8d-logs\") pod \"nova-api-0\" (UID: \"bc6da5f0-599f-4be3-a369-5e4c452e1e8d\") " pod="openstack/nova-api-0" Mar 12 18:50:46.322645 master-0 kubenswrapper[29097]: I0312 18:50:46.322313 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc6da5f0-599f-4be3-a369-5e4c452e1e8d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bc6da5f0-599f-4be3-a369-5e4c452e1e8d\") " pod="openstack/nova-api-0" Mar 12 18:50:46.322645 master-0 kubenswrapper[29097]: I0312 18:50:46.322346 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc6da5f0-599f-4be3-a369-5e4c452e1e8d-config-data\") pod \"nova-api-0\" (UID: \"bc6da5f0-599f-4be3-a369-5e4c452e1e8d\") " pod="openstack/nova-api-0" Mar 
12 18:50:46.423725 master-0 kubenswrapper[29097]: I0312 18:50:46.423537 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfg7d\" (UniqueName: \"kubernetes.io/projected/bc6da5f0-599f-4be3-a369-5e4c452e1e8d-kube-api-access-zfg7d\") pod \"nova-api-0\" (UID: \"bc6da5f0-599f-4be3-a369-5e4c452e1e8d\") " pod="openstack/nova-api-0" Mar 12 18:50:46.423725 master-0 kubenswrapper[29097]: I0312 18:50:46.423598 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc6da5f0-599f-4be3-a369-5e4c452e1e8d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bc6da5f0-599f-4be3-a369-5e4c452e1e8d\") " pod="openstack/nova-api-0" Mar 12 18:50:46.424077 master-0 kubenswrapper[29097]: I0312 18:50:46.423796 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc6da5f0-599f-4be3-a369-5e4c452e1e8d-logs\") pod \"nova-api-0\" (UID: \"bc6da5f0-599f-4be3-a369-5e4c452e1e8d\") " pod="openstack/nova-api-0" Mar 12 18:50:46.424146 master-0 kubenswrapper[29097]: I0312 18:50:46.424115 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc6da5f0-599f-4be3-a369-5e4c452e1e8d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bc6da5f0-599f-4be3-a369-5e4c452e1e8d\") " pod="openstack/nova-api-0" Mar 12 18:50:46.424232 master-0 kubenswrapper[29097]: I0312 18:50:46.424167 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc6da5f0-599f-4be3-a369-5e4c452e1e8d-config-data\") pod \"nova-api-0\" (UID: \"bc6da5f0-599f-4be3-a369-5e4c452e1e8d\") " pod="openstack/nova-api-0" Mar 12 18:50:46.424302 master-0 kubenswrapper[29097]: I0312 18:50:46.424247 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bc6da5f0-599f-4be3-a369-5e4c452e1e8d-logs\") pod \"nova-api-0\" (UID: \"bc6da5f0-599f-4be3-a369-5e4c452e1e8d\") " pod="openstack/nova-api-0" Mar 12 18:50:46.424482 master-0 kubenswrapper[29097]: I0312 18:50:46.424435 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc6da5f0-599f-4be3-a369-5e4c452e1e8d-public-tls-certs\") pod \"nova-api-0\" (UID: \"bc6da5f0-599f-4be3-a369-5e4c452e1e8d\") " pod="openstack/nova-api-0" Mar 12 18:50:46.427346 master-0 kubenswrapper[29097]: I0312 18:50:46.427305 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc6da5f0-599f-4be3-a369-5e4c452e1e8d-public-tls-certs\") pod \"nova-api-0\" (UID: \"bc6da5f0-599f-4be3-a369-5e4c452e1e8d\") " pod="openstack/nova-api-0" Mar 12 18:50:46.427878 master-0 kubenswrapper[29097]: I0312 18:50:46.427836 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc6da5f0-599f-4be3-a369-5e4c452e1e8d-config-data\") pod \"nova-api-0\" (UID: \"bc6da5f0-599f-4be3-a369-5e4c452e1e8d\") " pod="openstack/nova-api-0" Mar 12 18:50:46.428897 master-0 kubenswrapper[29097]: I0312 18:50:46.428850 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc6da5f0-599f-4be3-a369-5e4c452e1e8d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bc6da5f0-599f-4be3-a369-5e4c452e1e8d\") " pod="openstack/nova-api-0" Mar 12 18:50:46.441178 master-0 kubenswrapper[29097]: I0312 18:50:46.441131 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc6da5f0-599f-4be3-a369-5e4c452e1e8d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bc6da5f0-599f-4be3-a369-5e4c452e1e8d\") " pod="openstack/nova-api-0" Mar 12 18:50:46.453011 
master-0 kubenswrapper[29097]: I0312 18:50:46.452961 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfg7d\" (UniqueName: \"kubernetes.io/projected/bc6da5f0-599f-4be3-a369-5e4c452e1e8d-kube-api-access-zfg7d\") pod \"nova-api-0\" (UID: \"bc6da5f0-599f-4be3-a369-5e4c452e1e8d\") " pod="openstack/nova-api-0" Mar 12 18:50:46.591260 master-0 kubenswrapper[29097]: I0312 18:50:46.591169 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 18:50:46.737966 master-0 kubenswrapper[29097]: I0312 18:50:46.737912 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a93fe07-cdf6-4c77-9282-eb9e1b7909c8" path="/var/lib/kubelet/pods/1a93fe07-cdf6-4c77-9282-eb9e1b7909c8/volumes" Mar 12 18:50:47.156712 master-0 kubenswrapper[29097]: I0312 18:50:47.156603 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 18:50:47.487103 master-0 kubenswrapper[29097]: I0312 18:50:47.487046 29097 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.12:8775/\": read tcp 10.128.0.2:50616->10.128.1.12:8775: read: connection reset by peer" Mar 12 18:50:47.487199 master-0 kubenswrapper[29097]: I0312 18:50:47.487110 29097 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.12:8775/\": read tcp 10.128.0.2:50608->10.128.1.12:8775: read: connection reset by peer" Mar 12 18:50:47.728424 master-0 kubenswrapper[29097]: I0312 18:50:47.727825 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 18:50:47.873583 master-0 kubenswrapper[29097]: I0312 18:50:47.873303 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9566f3-47c9-4d4a-b503-ac3374cc7ecd-config-data\") pod \"2e9566f3-47c9-4d4a-b503-ac3374cc7ecd\" (UID: \"2e9566f3-47c9-4d4a-b503-ac3374cc7ecd\") " Mar 12 18:50:47.873583 master-0 kubenswrapper[29097]: I0312 18:50:47.873450 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvfbm\" (UniqueName: \"kubernetes.io/projected/2e9566f3-47c9-4d4a-b503-ac3374cc7ecd-kube-api-access-cvfbm\") pod \"2e9566f3-47c9-4d4a-b503-ac3374cc7ecd\" (UID: \"2e9566f3-47c9-4d4a-b503-ac3374cc7ecd\") " Mar 12 18:50:47.873583 master-0 kubenswrapper[29097]: I0312 18:50:47.873561 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9566f3-47c9-4d4a-b503-ac3374cc7ecd-combined-ca-bundle\") pod \"2e9566f3-47c9-4d4a-b503-ac3374cc7ecd\" (UID: \"2e9566f3-47c9-4d4a-b503-ac3374cc7ecd\") " Mar 12 18:50:47.885593 master-0 kubenswrapper[29097]: I0312 18:50:47.885494 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e9566f3-47c9-4d4a-b503-ac3374cc7ecd-kube-api-access-cvfbm" (OuterVolumeSpecName: "kube-api-access-cvfbm") pod "2e9566f3-47c9-4d4a-b503-ac3374cc7ecd" (UID: "2e9566f3-47c9-4d4a-b503-ac3374cc7ecd"). InnerVolumeSpecName "kube-api-access-cvfbm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:50:47.907256 master-0 kubenswrapper[29097]: I0312 18:50:47.907005 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9566f3-47c9-4d4a-b503-ac3374cc7ecd-config-data" (OuterVolumeSpecName: "config-data") pod "2e9566f3-47c9-4d4a-b503-ac3374cc7ecd" (UID: "2e9566f3-47c9-4d4a-b503-ac3374cc7ecd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:50:47.907721 master-0 kubenswrapper[29097]: I0312 18:50:47.907662 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e9566f3-47c9-4d4a-b503-ac3374cc7ecd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e9566f3-47c9-4d4a-b503-ac3374cc7ecd" (UID: "2e9566f3-47c9-4d4a-b503-ac3374cc7ecd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:50:47.940467 master-0 kubenswrapper[29097]: I0312 18:50:47.939565 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 18:50:47.984411 master-0 kubenswrapper[29097]: I0312 18:50:47.983769 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvfbm\" (UniqueName: \"kubernetes.io/projected/2e9566f3-47c9-4d4a-b503-ac3374cc7ecd-kube-api-access-cvfbm\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:47.984411 master-0 kubenswrapper[29097]: I0312 18:50:47.983820 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9566f3-47c9-4d4a-b503-ac3374cc7ecd-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:47.984411 master-0 kubenswrapper[29097]: I0312 18:50:47.983831 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e9566f3-47c9-4d4a-b503-ac3374cc7ecd-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:48.085123 master-0 kubenswrapper[29097]: I0312 18:50:48.085072 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zhhw\" (UniqueName: \"kubernetes.io/projected/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-kube-api-access-4zhhw\") pod \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\" (UID: \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\") " Mar 12 18:50:48.085248 master-0 kubenswrapper[29097]: I0312 18:50:48.085189 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-config-data\") pod \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\" (UID: \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\") " Mar 12 18:50:48.085248 master-0 kubenswrapper[29097]: I0312 18:50:48.085218 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-nova-metadata-tls-certs\") pod 
\"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\" (UID: \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\") " Mar 12 18:50:48.085465 master-0 kubenswrapper[29097]: I0312 18:50:48.085440 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-combined-ca-bundle\") pod \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\" (UID: \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\") " Mar 12 18:50:48.085596 master-0 kubenswrapper[29097]: I0312 18:50:48.085575 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-logs\") pod \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\" (UID: \"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0\") " Mar 12 18:50:48.086629 master-0 kubenswrapper[29097]: I0312 18:50:48.086599 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-logs" (OuterVolumeSpecName: "logs") pod "7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0" (UID: "7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 18:50:48.089436 master-0 kubenswrapper[29097]: I0312 18:50:48.089408 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-kube-api-access-4zhhw" (OuterVolumeSpecName: "kube-api-access-4zhhw") pod "7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0" (UID: "7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0"). InnerVolumeSpecName "kube-api-access-4zhhw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:50:48.147904 master-0 kubenswrapper[29097]: I0312 18:50:48.147841 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0" (UID: "7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:50:48.149816 master-0 kubenswrapper[29097]: I0312 18:50:48.149767 29097 generic.go:334] "Generic (PLEG): container finished" podID="7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0" containerID="2bbc4f55bb9f69e9a349973ceefb5c27f4ff3d66fed89e9520e33e5785c30b01" exitCode=0 Mar 12 18:50:48.149906 master-0 kubenswrapper[29097]: I0312 18:50:48.149878 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0","Type":"ContainerDied","Data":"2bbc4f55bb9f69e9a349973ceefb5c27f4ff3d66fed89e9520e33e5785c30b01"} Mar 12 18:50:48.149951 master-0 kubenswrapper[29097]: I0312 18:50:48.149922 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0","Type":"ContainerDied","Data":"0418ffee65c2461734b595d8184e43a1f7652327942758f53004792ebffc9428"} Mar 12 18:50:48.149951 master-0 kubenswrapper[29097]: I0312 18:50:48.149883 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 18:50:48.150049 master-0 kubenswrapper[29097]: I0312 18:50:48.149941 29097 scope.go:117] "RemoveContainer" containerID="2bbc4f55bb9f69e9a349973ceefb5c27f4ff3d66fed89e9520e33e5785c30b01" Mar 12 18:50:48.152640 master-0 kubenswrapper[29097]: I0312 18:50:48.152603 29097 generic.go:334] "Generic (PLEG): container finished" podID="2e9566f3-47c9-4d4a-b503-ac3374cc7ecd" containerID="8dc638aaf2d22a3e5e2a2537c8cfff8ed7a45b0ce2a06c3fc72e55e8782bb11d" exitCode=0 Mar 12 18:50:48.152704 master-0 kubenswrapper[29097]: I0312 18:50:48.152668 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2e9566f3-47c9-4d4a-b503-ac3374cc7ecd","Type":"ContainerDied","Data":"8dc638aaf2d22a3e5e2a2537c8cfff8ed7a45b0ce2a06c3fc72e55e8782bb11d"} Mar 12 18:50:48.152704 master-0 kubenswrapper[29097]: I0312 18:50:48.152696 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2e9566f3-47c9-4d4a-b503-ac3374cc7ecd","Type":"ContainerDied","Data":"62b5c66518e299424eef81f57b1cc6398b7adeb94525604f0d1a41552c22c332"} Mar 12 18:50:48.152762 master-0 kubenswrapper[29097]: I0312 18:50:48.152737 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 18:50:48.157045 master-0 kubenswrapper[29097]: I0312 18:50:48.157015 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-config-data" (OuterVolumeSpecName: "config-data") pod "7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0" (UID: "7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:50:48.162320 master-0 kubenswrapper[29097]: I0312 18:50:48.162213 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc6da5f0-599f-4be3-a369-5e4c452e1e8d","Type":"ContainerStarted","Data":"66e925b8f70d017346344cbf6c3e499b089c8247ddbd8937eb408671e0f171a7"} Mar 12 18:50:48.162320 master-0 kubenswrapper[29097]: I0312 18:50:48.162257 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc6da5f0-599f-4be3-a369-5e4c452e1e8d","Type":"ContainerStarted","Data":"ed08403d6821b55399d3bdbc731bcf580cbdd7298a86a93958eadccc5ba3dc29"} Mar 12 18:50:48.162320 master-0 kubenswrapper[29097]: I0312 18:50:48.162268 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc6da5f0-599f-4be3-a369-5e4c452e1e8d","Type":"ContainerStarted","Data":"b5741c45f393ab9b821f6c011091567e7a7d19071d9b2aa7aeec597a0f528712"} Mar 12 18:50:48.178210 master-0 kubenswrapper[29097]: I0312 18:50:48.178163 29097 scope.go:117] "RemoveContainer" containerID="e68f61115681f205d51167cd68aca6bd143be1ae0ba9d6d3e9cb0339d38f4ed6" Mar 12 18:50:48.187208 master-0 kubenswrapper[29097]: I0312 18:50:48.187134 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.187119214 podStartE2EDuration="2.187119214s" podCreationTimestamp="2026-03-12 18:50:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:50:48.186406687 +0000 UTC m=+1287.740386784" watchObservedRunningTime="2026-03-12 18:50:48.187119214 +0000 UTC m=+1287.741099301" Mar 12 18:50:48.188015 master-0 kubenswrapper[29097]: I0312 18:50:48.187974 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zhhw\" (UniqueName: 
\"kubernetes.io/projected/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-kube-api-access-4zhhw\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:48.188075 master-0 kubenswrapper[29097]: I0312 18:50:48.188025 29097 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:48.188075 master-0 kubenswrapper[29097]: I0312 18:50:48.188043 29097 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:48.188075 master-0 kubenswrapper[29097]: I0312 18:50:48.188061 29097 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:48.230037 master-0 kubenswrapper[29097]: I0312 18:50:48.229934 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0" (UID: "7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:50:48.263436 master-0 kubenswrapper[29097]: I0312 18:50:48.263374 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 18:50:48.264187 master-0 kubenswrapper[29097]: I0312 18:50:48.264169 29097 scope.go:117] "RemoveContainer" containerID="2bbc4f55bb9f69e9a349973ceefb5c27f4ff3d66fed89e9520e33e5785c30b01" Mar 12 18:50:48.264760 master-0 kubenswrapper[29097]: E0312 18:50:48.264712 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bbc4f55bb9f69e9a349973ceefb5c27f4ff3d66fed89e9520e33e5785c30b01\": container with ID starting with 2bbc4f55bb9f69e9a349973ceefb5c27f4ff3d66fed89e9520e33e5785c30b01 not found: ID does not exist" containerID="2bbc4f55bb9f69e9a349973ceefb5c27f4ff3d66fed89e9520e33e5785c30b01" Mar 12 18:50:48.264831 master-0 kubenswrapper[29097]: I0312 18:50:48.264764 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bbc4f55bb9f69e9a349973ceefb5c27f4ff3d66fed89e9520e33e5785c30b01"} err="failed to get container status \"2bbc4f55bb9f69e9a349973ceefb5c27f4ff3d66fed89e9520e33e5785c30b01\": rpc error: code = NotFound desc = could not find container \"2bbc4f55bb9f69e9a349973ceefb5c27f4ff3d66fed89e9520e33e5785c30b01\": container with ID starting with 2bbc4f55bb9f69e9a349973ceefb5c27f4ff3d66fed89e9520e33e5785c30b01 not found: ID does not exist" Mar 12 18:50:48.264831 master-0 kubenswrapper[29097]: I0312 18:50:48.264789 29097 scope.go:117] "RemoveContainer" containerID="e68f61115681f205d51167cd68aca6bd143be1ae0ba9d6d3e9cb0339d38f4ed6" Mar 12 18:50:48.268815 master-0 kubenswrapper[29097]: E0312 18:50:48.268587 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e68f61115681f205d51167cd68aca6bd143be1ae0ba9d6d3e9cb0339d38f4ed6\": container with ID starting with 
e68f61115681f205d51167cd68aca6bd143be1ae0ba9d6d3e9cb0339d38f4ed6 not found: ID does not exist" containerID="e68f61115681f205d51167cd68aca6bd143be1ae0ba9d6d3e9cb0339d38f4ed6" Mar 12 18:50:48.268815 master-0 kubenswrapper[29097]: I0312 18:50:48.268613 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e68f61115681f205d51167cd68aca6bd143be1ae0ba9d6d3e9cb0339d38f4ed6"} err="failed to get container status \"e68f61115681f205d51167cd68aca6bd143be1ae0ba9d6d3e9cb0339d38f4ed6\": rpc error: code = NotFound desc = could not find container \"e68f61115681f205d51167cd68aca6bd143be1ae0ba9d6d3e9cb0339d38f4ed6\": container with ID starting with e68f61115681f205d51167cd68aca6bd143be1ae0ba9d6d3e9cb0339d38f4ed6 not found: ID does not exist" Mar 12 18:50:48.268815 master-0 kubenswrapper[29097]: I0312 18:50:48.268627 29097 scope.go:117] "RemoveContainer" containerID="8dc638aaf2d22a3e5e2a2537c8cfff8ed7a45b0ce2a06c3fc72e55e8782bb11d" Mar 12 18:50:48.286787 master-0 kubenswrapper[29097]: I0312 18:50:48.286589 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 18:50:48.291157 master-0 kubenswrapper[29097]: I0312 18:50:48.291102 29097 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 18:50:48.310320 master-0 kubenswrapper[29097]: I0312 18:50:48.310216 29097 scope.go:117] "RemoveContainer" containerID="8dc638aaf2d22a3e5e2a2537c8cfff8ed7a45b0ce2a06c3fc72e55e8782bb11d" Mar 12 18:50:48.315672 master-0 kubenswrapper[29097]: E0312 18:50:48.315542 29097 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dc638aaf2d22a3e5e2a2537c8cfff8ed7a45b0ce2a06c3fc72e55e8782bb11d\": container with ID starting with 
8dc638aaf2d22a3e5e2a2537c8cfff8ed7a45b0ce2a06c3fc72e55e8782bb11d not found: ID does not exist" containerID="8dc638aaf2d22a3e5e2a2537c8cfff8ed7a45b0ce2a06c3fc72e55e8782bb11d" Mar 12 18:50:48.315672 master-0 kubenswrapper[29097]: I0312 18:50:48.315599 29097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dc638aaf2d22a3e5e2a2537c8cfff8ed7a45b0ce2a06c3fc72e55e8782bb11d"} err="failed to get container status \"8dc638aaf2d22a3e5e2a2537c8cfff8ed7a45b0ce2a06c3fc72e55e8782bb11d\": rpc error: code = NotFound desc = could not find container \"8dc638aaf2d22a3e5e2a2537c8cfff8ed7a45b0ce2a06c3fc72e55e8782bb11d\": container with ID starting with 8dc638aaf2d22a3e5e2a2537c8cfff8ed7a45b0ce2a06c3fc72e55e8782bb11d not found: ID does not exist" Mar 12 18:50:48.325096 master-0 kubenswrapper[29097]: I0312 18:50:48.321554 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 18:50:48.325096 master-0 kubenswrapper[29097]: E0312 18:50:48.323073 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0" containerName="nova-metadata-metadata" Mar 12 18:50:48.325096 master-0 kubenswrapper[29097]: I0312 18:50:48.323091 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0" containerName="nova-metadata-metadata" Mar 12 18:50:48.325096 master-0 kubenswrapper[29097]: E0312 18:50:48.323110 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0" containerName="nova-metadata-log" Mar 12 18:50:48.325096 master-0 kubenswrapper[29097]: I0312 18:50:48.323116 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0" containerName="nova-metadata-log" Mar 12 18:50:48.325096 master-0 kubenswrapper[29097]: E0312 18:50:48.323167 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9566f3-47c9-4d4a-b503-ac3374cc7ecd" 
containerName="nova-scheduler-scheduler" Mar 12 18:50:48.325096 master-0 kubenswrapper[29097]: I0312 18:50:48.323175 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9566f3-47c9-4d4a-b503-ac3374cc7ecd" containerName="nova-scheduler-scheduler" Mar 12 18:50:48.325096 master-0 kubenswrapper[29097]: I0312 18:50:48.323602 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0" containerName="nova-metadata-metadata" Mar 12 18:50:48.325096 master-0 kubenswrapper[29097]: I0312 18:50:48.323635 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e9566f3-47c9-4d4a-b503-ac3374cc7ecd" containerName="nova-scheduler-scheduler" Mar 12 18:50:48.325096 master-0 kubenswrapper[29097]: I0312 18:50:48.323666 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0" containerName="nova-metadata-log" Mar 12 18:50:48.325096 master-0 kubenswrapper[29097]: I0312 18:50:48.324364 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 18:50:48.327803 master-0 kubenswrapper[29097]: I0312 18:50:48.327770 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 12 18:50:48.344230 master-0 kubenswrapper[29097]: I0312 18:50:48.342585 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 18:50:48.393614 master-0 kubenswrapper[29097]: I0312 18:50:48.393535 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/698cc24c-946c-44b7-8ca8-dad9377673d3-config-data\") pod \"nova-scheduler-0\" (UID: \"698cc24c-946c-44b7-8ca8-dad9377673d3\") " pod="openstack/nova-scheduler-0" Mar 12 18:50:48.393614 master-0 kubenswrapper[29097]: I0312 18:50:48.393616 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/698cc24c-946c-44b7-8ca8-dad9377673d3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"698cc24c-946c-44b7-8ca8-dad9377673d3\") " pod="openstack/nova-scheduler-0" Mar 12 18:50:48.394279 master-0 kubenswrapper[29097]: I0312 18:50:48.393884 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vh69\" (UniqueName: \"kubernetes.io/projected/698cc24c-946c-44b7-8ca8-dad9377673d3-kube-api-access-8vh69\") pod \"nova-scheduler-0\" (UID: \"698cc24c-946c-44b7-8ca8-dad9377673d3\") " pod="openstack/nova-scheduler-0" Mar 12 18:50:48.496841 master-0 kubenswrapper[29097]: I0312 18:50:48.495973 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vh69\" (UniqueName: \"kubernetes.io/projected/698cc24c-946c-44b7-8ca8-dad9377673d3-kube-api-access-8vh69\") pod \"nova-scheduler-0\" (UID: \"698cc24c-946c-44b7-8ca8-dad9377673d3\") " 
pod="openstack/nova-scheduler-0" Mar 12 18:50:48.496841 master-0 kubenswrapper[29097]: I0312 18:50:48.496159 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/698cc24c-946c-44b7-8ca8-dad9377673d3-config-data\") pod \"nova-scheduler-0\" (UID: \"698cc24c-946c-44b7-8ca8-dad9377673d3\") " pod="openstack/nova-scheduler-0" Mar 12 18:50:48.496841 master-0 kubenswrapper[29097]: I0312 18:50:48.496195 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/698cc24c-946c-44b7-8ca8-dad9377673d3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"698cc24c-946c-44b7-8ca8-dad9377673d3\") " pod="openstack/nova-scheduler-0" Mar 12 18:50:48.499611 master-0 kubenswrapper[29097]: I0312 18:50:48.499570 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/698cc24c-946c-44b7-8ca8-dad9377673d3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"698cc24c-946c-44b7-8ca8-dad9377673d3\") " pod="openstack/nova-scheduler-0" Mar 12 18:50:48.505962 master-0 kubenswrapper[29097]: I0312 18:50:48.505910 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/698cc24c-946c-44b7-8ca8-dad9377673d3-config-data\") pod \"nova-scheduler-0\" (UID: \"698cc24c-946c-44b7-8ca8-dad9377673d3\") " pod="openstack/nova-scheduler-0" Mar 12 18:50:48.526222 master-0 kubenswrapper[29097]: I0312 18:50:48.526147 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vh69\" (UniqueName: \"kubernetes.io/projected/698cc24c-946c-44b7-8ca8-dad9377673d3-kube-api-access-8vh69\") pod \"nova-scheduler-0\" (UID: \"698cc24c-946c-44b7-8ca8-dad9377673d3\") " pod="openstack/nova-scheduler-0" Mar 12 18:50:48.571680 master-0 kubenswrapper[29097]: I0312 18:50:48.571608 29097 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 18:50:48.591576 master-0 kubenswrapper[29097]: I0312 18:50:48.591511 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 18:50:48.608701 master-0 kubenswrapper[29097]: I0312 18:50:48.608649 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 12 18:50:48.610901 master-0 kubenswrapper[29097]: I0312 18:50:48.610875 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 18:50:48.615266 master-0 kubenswrapper[29097]: I0312 18:50:48.615225 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 12 18:50:48.616052 master-0 kubenswrapper[29097]: I0312 18:50:48.616031 29097 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 12 18:50:48.621288 master-0 kubenswrapper[29097]: I0312 18:50:48.621192 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 18:50:48.652506 master-0 kubenswrapper[29097]: I0312 18:50:48.652423 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 18:50:48.745373 master-0 kubenswrapper[29097]: I0312 18:50:48.745219 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e9566f3-47c9-4d4a-b503-ac3374cc7ecd" path="/var/lib/kubelet/pods/2e9566f3-47c9-4d4a-b503-ac3374cc7ecd/volumes" Mar 12 18:50:48.750533 master-0 kubenswrapper[29097]: I0312 18:50:48.747924 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0" path="/var/lib/kubelet/pods/7cf93afc-4f3a-4ce9-9a63-aea8eef5c5c0/volumes" Mar 12 18:50:48.803498 master-0 kubenswrapper[29097]: I0312 18:50:48.803448 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dadb86ed-dc28-4a31-b5f7-ca4333b09f52-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dadb86ed-dc28-4a31-b5f7-ca4333b09f52\") " pod="openstack/nova-metadata-0" Mar 12 18:50:48.803857 master-0 kubenswrapper[29097]: I0312 18:50:48.803751 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dadb86ed-dc28-4a31-b5f7-ca4333b09f52-config-data\") pod \"nova-metadata-0\" (UID: \"dadb86ed-dc28-4a31-b5f7-ca4333b09f52\") " pod="openstack/nova-metadata-0" Mar 12 18:50:48.804046 master-0 kubenswrapper[29097]: I0312 18:50:48.803776 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dadb86ed-dc28-4a31-b5f7-ca4333b09f52-logs\") pod \"nova-metadata-0\" (UID: \"dadb86ed-dc28-4a31-b5f7-ca4333b09f52\") " pod="openstack/nova-metadata-0" Mar 12 18:50:48.804046 master-0 kubenswrapper[29097]: I0312 18:50:48.803944 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dadb86ed-dc28-4a31-b5f7-ca4333b09f52-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dadb86ed-dc28-4a31-b5f7-ca4333b09f52\") " pod="openstack/nova-metadata-0" Mar 12 18:50:48.804046 master-0 kubenswrapper[29097]: I0312 18:50:48.804011 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj2dg\" (UniqueName: \"kubernetes.io/projected/dadb86ed-dc28-4a31-b5f7-ca4333b09f52-kube-api-access-sj2dg\") pod \"nova-metadata-0\" (UID: \"dadb86ed-dc28-4a31-b5f7-ca4333b09f52\") " pod="openstack/nova-metadata-0" Mar 12 18:50:48.906217 master-0 kubenswrapper[29097]: I0312 18:50:48.905726 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dadb86ed-dc28-4a31-b5f7-ca4333b09f52-config-data\") pod \"nova-metadata-0\" (UID: \"dadb86ed-dc28-4a31-b5f7-ca4333b09f52\") " pod="openstack/nova-metadata-0" Mar 12 18:50:48.906217 master-0 kubenswrapper[29097]: I0312 18:50:48.905784 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dadb86ed-dc28-4a31-b5f7-ca4333b09f52-logs\") pod \"nova-metadata-0\" (UID: \"dadb86ed-dc28-4a31-b5f7-ca4333b09f52\") " pod="openstack/nova-metadata-0" Mar 12 18:50:48.906217 master-0 kubenswrapper[29097]: I0312 18:50:48.905806 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dadb86ed-dc28-4a31-b5f7-ca4333b09f52-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dadb86ed-dc28-4a31-b5f7-ca4333b09f52\") " pod="openstack/nova-metadata-0" Mar 12 18:50:48.906217 master-0 kubenswrapper[29097]: I0312 18:50:48.905848 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj2dg\" (UniqueName: \"kubernetes.io/projected/dadb86ed-dc28-4a31-b5f7-ca4333b09f52-kube-api-access-sj2dg\") pod 
\"nova-metadata-0\" (UID: \"dadb86ed-dc28-4a31-b5f7-ca4333b09f52\") " pod="openstack/nova-metadata-0" Mar 12 18:50:48.906217 master-0 kubenswrapper[29097]: I0312 18:50:48.905894 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dadb86ed-dc28-4a31-b5f7-ca4333b09f52-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dadb86ed-dc28-4a31-b5f7-ca4333b09f52\") " pod="openstack/nova-metadata-0" Mar 12 18:50:48.906979 master-0 kubenswrapper[29097]: I0312 18:50:48.906854 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dadb86ed-dc28-4a31-b5f7-ca4333b09f52-logs\") pod \"nova-metadata-0\" (UID: \"dadb86ed-dc28-4a31-b5f7-ca4333b09f52\") " pod="openstack/nova-metadata-0" Mar 12 18:50:48.911549 master-0 kubenswrapper[29097]: I0312 18:50:48.910558 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dadb86ed-dc28-4a31-b5f7-ca4333b09f52-config-data\") pod \"nova-metadata-0\" (UID: \"dadb86ed-dc28-4a31-b5f7-ca4333b09f52\") " pod="openstack/nova-metadata-0" Mar 12 18:50:48.911549 master-0 kubenswrapper[29097]: I0312 18:50:48.911476 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dadb86ed-dc28-4a31-b5f7-ca4333b09f52-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dadb86ed-dc28-4a31-b5f7-ca4333b09f52\") " pod="openstack/nova-metadata-0" Mar 12 18:50:48.913347 master-0 kubenswrapper[29097]: I0312 18:50:48.912792 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dadb86ed-dc28-4a31-b5f7-ca4333b09f52-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dadb86ed-dc28-4a31-b5f7-ca4333b09f52\") " pod="openstack/nova-metadata-0" Mar 12 18:50:48.923700 master-0 
kubenswrapper[29097]: I0312 18:50:48.923643 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj2dg\" (UniqueName: \"kubernetes.io/projected/dadb86ed-dc28-4a31-b5f7-ca4333b09f52-kube-api-access-sj2dg\") pod \"nova-metadata-0\" (UID: \"dadb86ed-dc28-4a31-b5f7-ca4333b09f52\") " pod="openstack/nova-metadata-0" Mar 12 18:50:48.936217 master-0 kubenswrapper[29097]: I0312 18:50:48.936165 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 18:50:49.165627 master-0 kubenswrapper[29097]: W0312 18:50:49.165460 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod698cc24c_946c_44b7_8ca8_dad9377673d3.slice/crio-998bcd636241f65c6de5d76918b6c2e962168393a4bd5c01f27a6bd583609c75 WatchSource:0}: Error finding container 998bcd636241f65c6de5d76918b6c2e962168393a4bd5c01f27a6bd583609c75: Status 404 returned error can't find the container with id 998bcd636241f65c6de5d76918b6c2e962168393a4bd5c01f27a6bd583609c75 Mar 12 18:50:49.178656 master-0 kubenswrapper[29097]: I0312 18:50:49.176965 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 18:50:49.463342 master-0 kubenswrapper[29097]: I0312 18:50:49.463292 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 18:50:50.194289 master-0 kubenswrapper[29097]: I0312 18:50:50.194141 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"698cc24c-946c-44b7-8ca8-dad9377673d3","Type":"ContainerStarted","Data":"eb82df102c038b14f9c3c7f983799325db12643650eb4369e4fcdbcc73998c14"} Mar 12 18:50:50.194289 master-0 kubenswrapper[29097]: I0312 18:50:50.194215 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"698cc24c-946c-44b7-8ca8-dad9377673d3","Type":"ContainerStarted","Data":"998bcd636241f65c6de5d76918b6c2e962168393a4bd5c01f27a6bd583609c75"} Mar 12 18:50:50.196211 master-0 kubenswrapper[29097]: I0312 18:50:50.196158 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dadb86ed-dc28-4a31-b5f7-ca4333b09f52","Type":"ContainerStarted","Data":"186e5687ffba14f04f3e4f25ed75efbe63048c5ad95e7448821ec4d390aa9a8d"} Mar 12 18:50:50.196280 master-0 kubenswrapper[29097]: I0312 18:50:50.196213 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dadb86ed-dc28-4a31-b5f7-ca4333b09f52","Type":"ContainerStarted","Data":"2f60be86a298bb9df9ac027c053f51f0a47d672ad6e12ebd5677efa469635137"} Mar 12 18:50:50.196280 master-0 kubenswrapper[29097]: I0312 18:50:50.196225 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dadb86ed-dc28-4a31-b5f7-ca4333b09f52","Type":"ContainerStarted","Data":"a6aac8eed57967e8941748debae0bff7fbe881cc683fa2d594767c8f7c148539"} Mar 12 18:50:50.223960 master-0 kubenswrapper[29097]: I0312 18:50:50.223869 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.223849181 podStartE2EDuration="2.223849181s" podCreationTimestamp="2026-03-12 18:50:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:50:50.21831571 +0000 UTC m=+1289.772295807" watchObservedRunningTime="2026-03-12 18:50:50.223849181 +0000 UTC m=+1289.777829278" Mar 12 18:50:50.258265 master-0 kubenswrapper[29097]: I0312 18:50:50.257915 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.257882452 podStartE2EDuration="2.257882452s" podCreationTimestamp="2026-03-12 18:50:48 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:50:50.247006763 +0000 UTC m=+1289.800986860" watchObservedRunningTime="2026-03-12 18:50:50.257882452 +0000 UTC m=+1289.811862559" Mar 12 18:50:53.652661 master-0 kubenswrapper[29097]: I0312 18:50:53.652594 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 12 18:50:53.945203 master-0 kubenswrapper[29097]: I0312 18:50:53.945058 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 18:50:53.945873 master-0 kubenswrapper[29097]: I0312 18:50:53.945822 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 18:50:56.592161 master-0 kubenswrapper[29097]: I0312 18:50:56.592071 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 18:50:56.593063 master-0 kubenswrapper[29097]: I0312 18:50:56.592215 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 18:50:57.614893 master-0 kubenswrapper[29097]: I0312 18:50:57.614795 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bc6da5f0-599f-4be3-a369-5e4c452e1e8d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.1.19:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 18:50:57.615541 master-0 kubenswrapper[29097]: I0312 18:50:57.614813 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bc6da5f0-599f-4be3-a369-5e4c452e1e8d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.1.19:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 18:50:58.653315 master-0 kubenswrapper[29097]: I0312 18:50:58.653229 29097 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 12 18:50:58.696879 master-0 kubenswrapper[29097]: I0312 18:50:58.696799 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 12 18:50:58.937424 master-0 kubenswrapper[29097]: I0312 18:50:58.937285 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 18:50:58.937424 master-0 kubenswrapper[29097]: I0312 18:50:58.937333 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 18:50:59.398125 master-0 kubenswrapper[29097]: I0312 18:50:59.398079 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 12 18:50:59.951707 master-0 kubenswrapper[29097]: I0312 18:50:59.951630 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="dadb86ed-dc28-4a31-b5f7-ca4333b09f52" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.21:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 18:50:59.952429 master-0 kubenswrapper[29097]: I0312 18:50:59.951653 29097 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="dadb86ed-dc28-4a31-b5f7-ca4333b09f52" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.21:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 18:51:06.600466 master-0 kubenswrapper[29097]: I0312 18:51:06.600402 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 12 18:51:06.601802 master-0 kubenswrapper[29097]: I0312 18:51:06.601769 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 12 18:51:06.602262 
master-0 kubenswrapper[29097]: I0312 18:51:06.602193 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 12 18:51:06.602383 master-0 kubenswrapper[29097]: I0312 18:51:06.602275 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 12 18:51:06.611508 master-0 kubenswrapper[29097]: I0312 18:51:06.611429 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 12 18:51:06.613149 master-0 kubenswrapper[29097]: I0312 18:51:06.613109 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 12 18:51:08.945874 master-0 kubenswrapper[29097]: I0312 18:51:08.945803 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 12 18:51:08.954316 master-0 kubenswrapper[29097]: I0312 18:51:08.954227 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 12 18:51:08.955671 master-0 kubenswrapper[29097]: I0312 18:51:08.955609 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 12 18:51:09.494406 master-0 kubenswrapper[29097]: I0312 18:51:09.494326 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 12 18:51:21.246335 master-0 kubenswrapper[29097]: E0312 18:51:21.246235 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547 Mar 12 18:51:36.475820 master-0 kubenswrapper[29097]: I0312 
18:51:36.475676 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-5c96ff9cd6-xch96"] Mar 12 18:51:36.476870 master-0 kubenswrapper[29097]: I0312 18:51:36.475923 29097 kuberuntime_container.go:808] "Killing container with a grace period" pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96" podUID="1be0bb8e-7ef6-4556-a8c9-6c25568be40a" containerName="sushy-emulator" containerID="cri-o://850edff1529a75c2049c177ea282d6c863c40eb3d9fca5fc399a112e3ea1afe6" gracePeriod=30 Mar 12 18:51:36.901014 master-0 kubenswrapper[29097]: I0312 18:51:36.900942 29097 generic.go:334] "Generic (PLEG): container finished" podID="1be0bb8e-7ef6-4556-a8c9-6c25568be40a" containerID="850edff1529a75c2049c177ea282d6c863c40eb3d9fca5fc399a112e3ea1afe6" exitCode=0 Mar 12 18:51:36.901207 master-0 kubenswrapper[29097]: I0312 18:51:36.901018 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96" event={"ID":"1be0bb8e-7ef6-4556-a8c9-6c25568be40a","Type":"ContainerDied","Data":"850edff1529a75c2049c177ea282d6c863c40eb3d9fca5fc399a112e3ea1afe6"} Mar 12 18:51:37.238856 master-0 kubenswrapper[29097]: I0312 18:51:37.238795 29097 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96" Mar 12 18:51:37.360453 master-0 kubenswrapper[29097]: I0312 18:51:37.359996 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-55d977ff59-s2h84"] Mar 12 18:51:37.360790 master-0 kubenswrapper[29097]: E0312 18:51:37.360569 29097 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be0bb8e-7ef6-4556-a8c9-6c25568be40a" containerName="sushy-emulator" Mar 12 18:51:37.360790 master-0 kubenswrapper[29097]: I0312 18:51:37.360589 29097 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be0bb8e-7ef6-4556-a8c9-6c25568be40a" containerName="sushy-emulator" Mar 12 18:51:37.360953 master-0 kubenswrapper[29097]: I0312 18:51:37.360876 29097 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be0bb8e-7ef6-4556-a8c9-6c25568be40a" containerName="sushy-emulator" Mar 12 18:51:37.361685 master-0 kubenswrapper[29097]: I0312 18:51:37.361600 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-55d977ff59-s2h84" Mar 12 18:51:37.379595 master-0 kubenswrapper[29097]: I0312 18:51:37.379497 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-55d977ff59-s2h84"] Mar 12 18:51:37.409428 master-0 kubenswrapper[29097]: I0312 18:51:37.409371 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/1be0bb8e-7ef6-4556-a8c9-6c25568be40a-os-client-config\") pod \"1be0bb8e-7ef6-4556-a8c9-6c25568be40a\" (UID: \"1be0bb8e-7ef6-4556-a8c9-6c25568be40a\") " Mar 12 18:51:37.409646 master-0 kubenswrapper[29097]: I0312 18:51:37.409448 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9257j\" (UniqueName: \"kubernetes.io/projected/1be0bb8e-7ef6-4556-a8c9-6c25568be40a-kube-api-access-9257j\") pod \"1be0bb8e-7ef6-4556-a8c9-6c25568be40a\" (UID: \"1be0bb8e-7ef6-4556-a8c9-6c25568be40a\") " Mar 12 18:51:37.409646 master-0 kubenswrapper[29097]: I0312 18:51:37.409491 29097 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/1be0bb8e-7ef6-4556-a8c9-6c25568be40a-sushy-emulator-config\") pod \"1be0bb8e-7ef6-4556-a8c9-6c25568be40a\" (UID: \"1be0bb8e-7ef6-4556-a8c9-6c25568be40a\") " Mar 12 18:51:37.411404 master-0 kubenswrapper[29097]: I0312 18:51:37.411363 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1be0bb8e-7ef6-4556-a8c9-6c25568be40a-sushy-emulator-config" (OuterVolumeSpecName: "sushy-emulator-config") pod "1be0bb8e-7ef6-4556-a8c9-6c25568be40a" (UID: "1be0bb8e-7ef6-4556-a8c9-6c25568be40a"). InnerVolumeSpecName "sushy-emulator-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 18:51:37.427394 master-0 kubenswrapper[29097]: I0312 18:51:37.427313 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1be0bb8e-7ef6-4556-a8c9-6c25568be40a-kube-api-access-9257j" (OuterVolumeSpecName: "kube-api-access-9257j") pod "1be0bb8e-7ef6-4556-a8c9-6c25568be40a" (UID: "1be0bb8e-7ef6-4556-a8c9-6c25568be40a"). InnerVolumeSpecName "kube-api-access-9257j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 18:51:37.427775 master-0 kubenswrapper[29097]: I0312 18:51:37.427720 29097 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1be0bb8e-7ef6-4556-a8c9-6c25568be40a-os-client-config" (OuterVolumeSpecName: "os-client-config") pod "1be0bb8e-7ef6-4556-a8c9-6c25568be40a" (UID: "1be0bb8e-7ef6-4556-a8c9-6c25568be40a"). InnerVolumeSpecName "os-client-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 18:51:37.513223 master-0 kubenswrapper[29097]: I0312 18:51:37.512867 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/e2bb3424-7645-4dfb-ba13-5451a94c82ba-sushy-emulator-config\") pod \"sushy-emulator-55d977ff59-s2h84\" (UID: \"e2bb3424-7645-4dfb-ba13-5451a94c82ba\") " pod="sushy-emulator/sushy-emulator-55d977ff59-s2h84" Mar 12 18:51:37.513223 master-0 kubenswrapper[29097]: I0312 18:51:37.512979 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/e2bb3424-7645-4dfb-ba13-5451a94c82ba-os-client-config\") pod \"sushy-emulator-55d977ff59-s2h84\" (UID: \"e2bb3424-7645-4dfb-ba13-5451a94c82ba\") " pod="sushy-emulator/sushy-emulator-55d977ff59-s2h84" Mar 12 18:51:37.514138 master-0 kubenswrapper[29097]: I0312 18:51:37.513447 29097 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbr8h\" (UniqueName: \"kubernetes.io/projected/e2bb3424-7645-4dfb-ba13-5451a94c82ba-kube-api-access-fbr8h\") pod \"sushy-emulator-55d977ff59-s2h84\" (UID: \"e2bb3424-7645-4dfb-ba13-5451a94c82ba\") " pod="sushy-emulator/sushy-emulator-55d977ff59-s2h84" Mar 12 18:51:37.514138 master-0 kubenswrapper[29097]: I0312 18:51:37.513731 29097 reconciler_common.go:293] "Volume detached for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/1be0bb8e-7ef6-4556-a8c9-6c25568be40a-os-client-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:51:37.514138 master-0 kubenswrapper[29097]: I0312 18:51:37.513748 29097 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9257j\" (UniqueName: \"kubernetes.io/projected/1be0bb8e-7ef6-4556-a8c9-6c25568be40a-kube-api-access-9257j\") on node \"master-0\" DevicePath \"\"" Mar 12 18:51:37.514138 master-0 kubenswrapper[29097]: I0312 18:51:37.513761 29097 reconciler_common.go:293] "Volume detached for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/1be0bb8e-7ef6-4556-a8c9-6c25568be40a-sushy-emulator-config\") on node \"master-0\" DevicePath \"\"" Mar 12 18:51:37.616157 master-0 kubenswrapper[29097]: I0312 18:51:37.616031 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbr8h\" (UniqueName: \"kubernetes.io/projected/e2bb3424-7645-4dfb-ba13-5451a94c82ba-kube-api-access-fbr8h\") pod \"sushy-emulator-55d977ff59-s2h84\" (UID: \"e2bb3424-7645-4dfb-ba13-5451a94c82ba\") " pod="sushy-emulator/sushy-emulator-55d977ff59-s2h84" Mar 12 18:51:37.616612 master-0 kubenswrapper[29097]: I0312 18:51:37.616501 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/e2bb3424-7645-4dfb-ba13-5451a94c82ba-sushy-emulator-config\") pod \"sushy-emulator-55d977ff59-s2h84\" (UID: 
\"e2bb3424-7645-4dfb-ba13-5451a94c82ba\") " pod="sushy-emulator/sushy-emulator-55d977ff59-s2h84" Mar 12 18:51:37.616612 master-0 kubenswrapper[29097]: I0312 18:51:37.616564 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/e2bb3424-7645-4dfb-ba13-5451a94c82ba-os-client-config\") pod \"sushy-emulator-55d977ff59-s2h84\" (UID: \"e2bb3424-7645-4dfb-ba13-5451a94c82ba\") " pod="sushy-emulator/sushy-emulator-55d977ff59-s2h84" Mar 12 18:51:37.617785 master-0 kubenswrapper[29097]: I0312 18:51:37.617741 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/e2bb3424-7645-4dfb-ba13-5451a94c82ba-sushy-emulator-config\") pod \"sushy-emulator-55d977ff59-s2h84\" (UID: \"e2bb3424-7645-4dfb-ba13-5451a94c82ba\") " pod="sushy-emulator/sushy-emulator-55d977ff59-s2h84" Mar 12 18:51:37.624616 master-0 kubenswrapper[29097]: I0312 18:51:37.619992 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/e2bb3424-7645-4dfb-ba13-5451a94c82ba-os-client-config\") pod \"sushy-emulator-55d977ff59-s2h84\" (UID: \"e2bb3424-7645-4dfb-ba13-5451a94c82ba\") " pod="sushy-emulator/sushy-emulator-55d977ff59-s2h84" Mar 12 18:51:37.634307 master-0 kubenswrapper[29097]: I0312 18:51:37.634258 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbr8h\" (UniqueName: \"kubernetes.io/projected/e2bb3424-7645-4dfb-ba13-5451a94c82ba-kube-api-access-fbr8h\") pod \"sushy-emulator-55d977ff59-s2h84\" (UID: \"e2bb3424-7645-4dfb-ba13-5451a94c82ba\") " pod="sushy-emulator/sushy-emulator-55d977ff59-s2h84" Mar 12 18:51:37.686228 master-0 kubenswrapper[29097]: I0312 18:51:37.686180 29097 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-55d977ff59-s2h84"
Mar 12 18:51:37.921215 master-0 kubenswrapper[29097]: I0312 18:51:37.920354 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96" event={"ID":"1be0bb8e-7ef6-4556-a8c9-6c25568be40a","Type":"ContainerDied","Data":"85f4d901e7104d152555695405940ff4c9d8eca5aedc8e3ef426666936c16672"}
Mar 12 18:51:37.921215 master-0 kubenswrapper[29097]: I0312 18:51:37.920438 29097 scope.go:117] "RemoveContainer" containerID="850edff1529a75c2049c177ea282d6c863c40eb3d9fca5fc399a112e3ea1afe6"
Mar 12 18:51:37.921215 master-0 kubenswrapper[29097]: I0312 18:51:37.920476 29097 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-5c96ff9cd6-xch96"
Mar 12 18:51:38.152505 master-0 kubenswrapper[29097]: I0312 18:51:38.152419 29097 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-5c96ff9cd6-xch96"]
Mar 12 18:51:38.246757 master-0 kubenswrapper[29097]: I0312 18:51:38.246694 29097 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["sushy-emulator/sushy-emulator-5c96ff9cd6-xch96"]
Mar 12 18:51:38.737372 master-0 kubenswrapper[29097]: I0312 18:51:38.737292 29097 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1be0bb8e-7ef6-4556-a8c9-6c25568be40a" path="/var/lib/kubelet/pods/1be0bb8e-7ef6-4556-a8c9-6c25568be40a/volumes"
Mar 12 18:51:39.696600 master-0 kubenswrapper[29097]: I0312 18:51:39.696390 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-55d977ff59-s2h84"]
Mar 12 18:51:39.704430 master-0 kubenswrapper[29097]: W0312 18:51:39.704254 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2bb3424_7645_4dfb_ba13_5451a94c82ba.slice/crio-43a59d607e53d92ec4c9a5eddf2105f5a49925a64241e625e018c6acfa7c5bdd WatchSource:0}: Error finding container 43a59d607e53d92ec4c9a5eddf2105f5a49925a64241e625e018c6acfa7c5bdd: Status 404 returned error can't find the container with id 43a59d607e53d92ec4c9a5eddf2105f5a49925a64241e625e018c6acfa7c5bdd
Mar 12 18:51:39.948524 master-0 kubenswrapper[29097]: I0312 18:51:39.948437 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-55d977ff59-s2h84" event={"ID":"e2bb3424-7645-4dfb-ba13-5451a94c82ba","Type":"ContainerStarted","Data":"c8dad057f4c481385b78bcaab4292f1e079c30f01e1f0296ff04417628c31828"}
Mar 12 18:51:39.948524 master-0 kubenswrapper[29097]: I0312 18:51:39.948504 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-55d977ff59-s2h84" event={"ID":"e2bb3424-7645-4dfb-ba13-5451a94c82ba","Type":"ContainerStarted","Data":"43a59d607e53d92ec4c9a5eddf2105f5a49925a64241e625e018c6acfa7c5bdd"}
Mar 12 18:51:39.976600 master-0 kubenswrapper[29097]: I0312 18:51:39.976455 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-55d977ff59-s2h84" podStartSLOduration=2.976428473 podStartE2EDuration="2.976428473s" podCreationTimestamp="2026-03-12 18:51:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:51:39.96583498 +0000 UTC m=+1339.519815087" watchObservedRunningTime="2026-03-12 18:51:39.976428473 +0000 UTC m=+1339.530408580"
Mar 12 18:51:47.686879 master-0 kubenswrapper[29097]: I0312 18:51:47.686794 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-55d977ff59-s2h84"
Mar 12 18:51:47.686879 master-0 kubenswrapper[29097]: I0312 18:51:47.686875 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-55d977ff59-s2h84"
Mar 12 18:51:47.708131 master-0 kubenswrapper[29097]: I0312 18:51:47.708028 29097 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-55d977ff59-s2h84"
Mar 12 18:51:48.065902 master-0 kubenswrapper[29097]: I0312 18:51:48.065759 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-55d977ff59-s2h84"
Mar 12 18:52:21.267124 master-0 kubenswrapper[29097]: E0312 18:52:21.267032 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547
Mar 12 18:52:42.626927 master-0 kubenswrapper[29097]: I0312 18:52:42.626844 29097 scope.go:117] "RemoveContainer" containerID="44dab1eeebc8306c05e308c70a544c94aff6c88fb948491bada9953740334087"
Mar 12 18:52:42.655318 master-0 kubenswrapper[29097]: I0312 18:52:42.655263 29097 scope.go:117] "RemoveContainer" containerID="7b7dbfe4299d85eca43cf217b836e9ac9854e2a32b5c26d0191e32c428a23163"
Mar 12 18:52:42.690120 master-0 kubenswrapper[29097]: I0312 18:52:42.690063 29097 scope.go:117] "RemoveContainer" containerID="b19211f40b464894e15be63a8bba3892892832c1f2a504e3bec649306b2d21f5"
Mar 12 18:52:42.729481 master-0 kubenswrapper[29097]: I0312 18:52:42.729427 29097 scope.go:117] "RemoveContainer" containerID="d45d2155ac3be9b74c1b97a8b6033ee144b105dba23f89c78647023b3a18f72b"
Mar 12 18:52:42.762676 master-0 kubenswrapper[29097]: I0312 18:52:42.760563 29097 scope.go:117] "RemoveContainer" containerID="b934ab3a162ec3d9c46499ca1e821d7622fd38493f58177d7a80ea1bdc0c0725"
Mar 12 18:52:58.476129 master-0 kubenswrapper[29097]: E0312 18:52:58.476048 29097 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:54862->192.168.32.10:35109: write tcp 192.168.32.10:54862->192.168.32.10:35109: write: broken pipe
Mar 12 18:53:08.008546 master-0 kubenswrapper[29097]: I0312 18:53:08.008446 29097 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-7b649cbbbb-tkhcf" podUID="c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Mar 12 18:53:21.246202 master-0 kubenswrapper[29097]: E0312 18:53:21.245995 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547
Mar 12 18:53:42.883757 master-0 kubenswrapper[29097]: I0312 18:53:42.883657 29097 scope.go:117] "RemoveContainer" containerID="ef933ccbf34bd1b64a9e361ac9b684403ebc011734f57febfef1626ad1d8ff42"
Mar 12 18:53:42.912669 master-0 kubenswrapper[29097]: I0312 18:53:42.912616 29097 scope.go:117] "RemoveContainer" containerID="6408ee724587b198615016f317bd50a06e48a50da1f318412c9cff6ffcf4f26b"
Mar 12 18:53:42.954506 master-0 kubenswrapper[29097]: I0312 18:53:42.954476 29097 scope.go:117] "RemoveContainer" containerID="3a9f18cb4fb84893628b85c0e0c2f3e983b38debc61be907f1207c28a89da2c3"
Mar 12 18:54:18.950000 master-0 kubenswrapper[29097]: I0312 18:54:18.949898 29097 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-fa62f-scheduler-0" podUID="eae648d4-64f1-4b96-aec2-dd0410be0ffd" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.128.0.230:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:54:19.351470 master-0 kubenswrapper[29097]: I0312 18:54:19.351373 29097 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-fa62f-volume-lvm-iscsi-0" podUID="91270577-6388-4208-afd7-bdeb3edc3d99" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.128.0.231:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:54:19.627906 master-0 kubenswrapper[29097]: I0312 18:54:19.627748 29097 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-fa62f-backup-0" podUID="b6459537-fb89-4b67-8478-89b1dd4a397e" containerName="cinder-backup" probeResult="failure" output="Get \"http://10.128.0.232:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 12 18:54:21.228643 master-0 kubenswrapper[29097]: E0312 18:54:21.228446 29097 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547
Mar 12 18:55:00.456707 master-0 kubenswrapper[29097]: I0312 18:55:00.456589 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w4n8j/must-gather-w6z8w"]
Mar 12 18:55:00.475535 master-0 kubenswrapper[29097]: I0312 18:55:00.474759 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w4n8j/must-gather-w6z8w"
Mar 12 18:55:00.479611 master-0 kubenswrapper[29097]: I0312 18:55:00.479565 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w4n8j/must-gather-r7vrr"]
Mar 12 18:55:00.481779 master-0 kubenswrapper[29097]: I0312 18:55:00.481748 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w4n8j/must-gather-r7vrr"
Mar 12 18:55:00.496748 master-0 kubenswrapper[29097]: I0312 18:55:00.494643 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-w4n8j/must-gather-w6z8w"]
Mar 12 18:55:00.499449 master-0 kubenswrapper[29097]: I0312 18:55:00.498431 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-w4n8j"/"kube-root-ca.crt"
Mar 12 18:55:00.505867 master-0 kubenswrapper[29097]: I0312 18:55:00.505738 29097 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-w4n8j"/"openshift-service-ca.crt"
Mar 12 18:55:00.538261 master-0 kubenswrapper[29097]: I0312 18:55:00.538200 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-w4n8j/must-gather-r7vrr"]
Mar 12 18:55:00.600295 master-0 kubenswrapper[29097]: I0312 18:55:00.600221 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/85abf49c-7bd3-425f-bda7-2abb95241eff-must-gather-output\") pod \"must-gather-r7vrr\" (UID: \"85abf49c-7bd3-425f-bda7-2abb95241eff\") " pod="openshift-must-gather-w4n8j/must-gather-r7vrr"
Mar 12 18:55:00.600617 master-0 kubenswrapper[29097]: I0312 18:55:00.600352 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/af1a627c-4b6c-4878-b1a8-ce2a30e0da56-must-gather-output\") pod \"must-gather-w6z8w\" (UID: \"af1a627c-4b6c-4878-b1a8-ce2a30e0da56\") " pod="openshift-must-gather-w4n8j/must-gather-w6z8w"
Mar 12 18:55:00.600617 master-0 kubenswrapper[29097]: I0312 18:55:00.600417 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssh2w\" (UniqueName: \"kubernetes.io/projected/af1a627c-4b6c-4878-b1a8-ce2a30e0da56-kube-api-access-ssh2w\") pod \"must-gather-w6z8w\" (UID: \"af1a627c-4b6c-4878-b1a8-ce2a30e0da56\") " pod="openshift-must-gather-w4n8j/must-gather-w6z8w"
Mar 12 18:55:00.600617 master-0 kubenswrapper[29097]: I0312 18:55:00.600580 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhz4q\" (UniqueName: \"kubernetes.io/projected/85abf49c-7bd3-425f-bda7-2abb95241eff-kube-api-access-zhz4q\") pod \"must-gather-r7vrr\" (UID: \"85abf49c-7bd3-425f-bda7-2abb95241eff\") " pod="openshift-must-gather-w4n8j/must-gather-r7vrr"
Mar 12 18:55:00.703422 master-0 kubenswrapper[29097]: I0312 18:55:00.703369 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/af1a627c-4b6c-4878-b1a8-ce2a30e0da56-must-gather-output\") pod \"must-gather-w6z8w\" (UID: \"af1a627c-4b6c-4878-b1a8-ce2a30e0da56\") " pod="openshift-must-gather-w4n8j/must-gather-w6z8w"
Mar 12 18:55:00.703777 master-0 kubenswrapper[29097]: I0312 18:55:00.703755 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssh2w\" (UniqueName: \"kubernetes.io/projected/af1a627c-4b6c-4878-b1a8-ce2a30e0da56-kube-api-access-ssh2w\") pod \"must-gather-w6z8w\" (UID: \"af1a627c-4b6c-4878-b1a8-ce2a30e0da56\") " pod="openshift-must-gather-w4n8j/must-gather-w6z8w"
Mar 12 18:55:00.704058 master-0 kubenswrapper[29097]: I0312 18:55:00.704010 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/af1a627c-4b6c-4878-b1a8-ce2a30e0da56-must-gather-output\") pod \"must-gather-w6z8w\" (UID: \"af1a627c-4b6c-4878-b1a8-ce2a30e0da56\") " pod="openshift-must-gather-w4n8j/must-gather-w6z8w"
Mar 12 18:55:00.704227 master-0 kubenswrapper[29097]: I0312 18:55:00.704203 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhz4q\" (UniqueName: \"kubernetes.io/projected/85abf49c-7bd3-425f-bda7-2abb95241eff-kube-api-access-zhz4q\") pod \"must-gather-r7vrr\" (UID: \"85abf49c-7bd3-425f-bda7-2abb95241eff\") " pod="openshift-must-gather-w4n8j/must-gather-r7vrr"
Mar 12 18:55:00.704380 master-0 kubenswrapper[29097]: I0312 18:55:00.704359 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/85abf49c-7bd3-425f-bda7-2abb95241eff-must-gather-output\") pod \"must-gather-r7vrr\" (UID: \"85abf49c-7bd3-425f-bda7-2abb95241eff\") " pod="openshift-must-gather-w4n8j/must-gather-r7vrr"
Mar 12 18:55:00.705795 master-0 kubenswrapper[29097]: I0312 18:55:00.704773 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/85abf49c-7bd3-425f-bda7-2abb95241eff-must-gather-output\") pod \"must-gather-r7vrr\" (UID: \"85abf49c-7bd3-425f-bda7-2abb95241eff\") " pod="openshift-must-gather-w4n8j/must-gather-r7vrr"
Mar 12 18:55:00.722915 master-0 kubenswrapper[29097]: I0312 18:55:00.722872 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhz4q\" (UniqueName: \"kubernetes.io/projected/85abf49c-7bd3-425f-bda7-2abb95241eff-kube-api-access-zhz4q\") pod \"must-gather-r7vrr\" (UID: \"85abf49c-7bd3-425f-bda7-2abb95241eff\") " pod="openshift-must-gather-w4n8j/must-gather-r7vrr"
Mar 12 18:55:00.723311 master-0 kubenswrapper[29097]: I0312 18:55:00.723267 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssh2w\" (UniqueName: \"kubernetes.io/projected/af1a627c-4b6c-4878-b1a8-ce2a30e0da56-kube-api-access-ssh2w\") pod \"must-gather-w6z8w\" (UID: \"af1a627c-4b6c-4878-b1a8-ce2a30e0da56\") " pod="openshift-must-gather-w4n8j/must-gather-w6z8w"
Mar 12 18:55:00.873269 master-0 kubenswrapper[29097]: I0312 18:55:00.873206 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w4n8j/must-gather-w6z8w"
Mar 12 18:55:00.873504 master-0 kubenswrapper[29097]: I0312 18:55:00.873325 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w4n8j/must-gather-r7vrr"
Mar 12 18:55:01.393795 master-0 kubenswrapper[29097]: I0312 18:55:01.393752 29097 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 12 18:55:01.399982 master-0 kubenswrapper[29097]: I0312 18:55:01.399905 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-w4n8j/must-gather-r7vrr"]
Mar 12 18:55:01.415444 master-0 kubenswrapper[29097]: W0312 18:55:01.415381 29097 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf1a627c_4b6c_4878_b1a8_ce2a30e0da56.slice/crio-83b6e9d6d39cc71b94c9c7b71cee414ddd716aecd81c65d42afa010314d84bb8 WatchSource:0}: Error finding container 83b6e9d6d39cc71b94c9c7b71cee414ddd716aecd81c65d42afa010314d84bb8: Status 404 returned error can't find the container with id 83b6e9d6d39cc71b94c9c7b71cee414ddd716aecd81c65d42afa010314d84bb8
Mar 12 18:55:01.418711 master-0 kubenswrapper[29097]: I0312 18:55:01.418666 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-w4n8j/must-gather-w6z8w"]
Mar 12 18:55:01.979303 master-0 kubenswrapper[29097]: I0312 18:55:01.979217 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w4n8j/must-gather-w6z8w" event={"ID":"af1a627c-4b6c-4878-b1a8-ce2a30e0da56","Type":"ContainerStarted","Data":"83b6e9d6d39cc71b94c9c7b71cee414ddd716aecd81c65d42afa010314d84bb8"}
Mar 12 18:55:01.981033 master-0 kubenswrapper[29097]: I0312 18:55:01.980985 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w4n8j/must-gather-r7vrr" event={"ID":"85abf49c-7bd3-425f-bda7-2abb95241eff","Type":"ContainerStarted","Data":"283e5b1ded3083834fad848cde344b083022b640748c656e3103c01fb7c86887"}
Mar 12 18:55:04.011041 master-0 kubenswrapper[29097]: I0312 18:55:04.010974 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w4n8j/must-gather-r7vrr" event={"ID":"85abf49c-7bd3-425f-bda7-2abb95241eff","Type":"ContainerStarted","Data":"da3227199834ccdb0150a8eb7e50659c347df78421c780ae0e26cfe50204f744"}
Mar 12 18:55:04.011908 master-0 kubenswrapper[29097]: I0312 18:55:04.011030 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w4n8j/must-gather-r7vrr" event={"ID":"85abf49c-7bd3-425f-bda7-2abb95241eff","Type":"ContainerStarted","Data":"33279fa1f15255be48c6a79ad617dc03f02708439748537deb3d5f5fcc99e75c"}
Mar 12 18:55:04.172683 master-0 kubenswrapper[29097]: I0312 18:55:04.169207 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-w4n8j/must-gather-r7vrr" podStartSLOduration=2.648035614 podStartE2EDuration="4.169181977s" podCreationTimestamp="2026-03-12 18:55:00 +0000 UTC" firstStartedPulling="2026-03-12 18:55:01.391787424 +0000 UTC m=+1540.945767531" lastFinishedPulling="2026-03-12 18:55:02.912933797 +0000 UTC m=+1542.466913894" observedRunningTime="2026-03-12 18:55:04.155428004 +0000 UTC m=+1543.709408101" watchObservedRunningTime="2026-03-12 18:55:04.169181977 +0000 UTC m=+1543.723162074"
Mar 12 18:55:06.635466 master-0 kubenswrapper[29097]: I0312 18:55:06.635421 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-8c9c967c7-m885k_4048e453-a983-4708-89b6-a81af0067e29/cluster-version-operator/0.log"
Mar 12 18:55:07.286044 master-0 kubenswrapper[29097]: I0312 18:55:07.285569 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-8c9c967c7-m885k_4048e453-a983-4708-89b6-a81af0067e29/cluster-version-operator/1.log"
Mar 12 18:55:10.636600 master-0 kubenswrapper[29097]: I0312 18:55:10.634048 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-2hhd7_db7d78bb-1030-44b4-a4f1-b644a2ccb171/nmstate-console-plugin/0.log"
Mar 12 18:55:10.685538 master-0 kubenswrapper[29097]: I0312 18:55:10.684621 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-wwfx4_cef2870a-7ede-4f27-8d32-0aaa58024d6d/nmstate-handler/0.log"
Mar 12 18:55:10.704217 master-0 kubenswrapper[29097]: I0312 18:55:10.703810 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-qzdrp_0052e280-cd09-40a9-a843-edcf5051927e/nmstate-metrics/0.log"
Mar 12 18:55:10.727549 master-0 kubenswrapper[29097]: I0312 18:55:10.723313 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-qzdrp_0052e280-cd09-40a9-a843-edcf5051927e/kube-rbac-proxy/0.log"
Mar 12 18:55:10.757569 master-0 kubenswrapper[29097]: I0312 18:55:10.751817 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-8n4w5_5ecc5d29-ffb0-45b2-b157-e3ec2c6567fc/nmstate-operator/0.log"
Mar 12 18:55:10.832109 master-0 kubenswrapper[29097]: I0312 18:55:10.832026 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-9bpjp_9f83c98b-f576-41f7-827a-65585172c452/nmstate-webhook/0.log"
Mar 12 18:55:11.001994 master-0 kubenswrapper[29097]: I0312 18:55:11.001943 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-dh446_ffedbfc7-8b89-4bd0-8306-b5aa75878d42/controller/0.log"
Mar 12 18:55:11.024616 master-0 kubenswrapper[29097]: I0312 18:55:11.024570 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-dh446_ffedbfc7-8b89-4bd0-8306-b5aa75878d42/kube-rbac-proxy/0.log"
Mar 12 18:55:11.092536 master-0 kubenswrapper[29097]: I0312 18:55:11.081540 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/controller/0.log"
Mar 12 18:55:12.027560 master-0 kubenswrapper[29097]: I0312 18:55:12.018738 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/frr/0.log"
Mar 12 18:55:12.039035 master-0 kubenswrapper[29097]: I0312 18:55:12.038678 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/reloader/0.log"
Mar 12 18:55:12.050783 master-0 kubenswrapper[29097]: I0312 18:55:12.050744 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/frr-metrics/0.log"
Mar 12 18:55:12.061805 master-0 kubenswrapper[29097]: I0312 18:55:12.057056 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/kube-rbac-proxy/0.log"
Mar 12 18:55:12.072284 master-0 kubenswrapper[29097]: I0312 18:55:12.072222 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/kube-rbac-proxy-frr/0.log"
Mar 12 18:55:12.082318 master-0 kubenswrapper[29097]: I0312 18:55:12.082192 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/cp-frr-files/0.log"
Mar 12 18:55:12.089478 master-0 kubenswrapper[29097]: I0312 18:55:12.089307 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/cp-reloader/0.log"
Mar 12 18:55:12.099499 master-0 kubenswrapper[29097]: I0312 18:55:12.096579 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/cp-metrics/0.log"
Mar 12 18:55:12.132362 master-0 kubenswrapper[29097]: I0312 18:55:12.132318 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-hz8wl_bed3c813-ea3d-45fb-a830-59ad0830040a/frr-k8s-webhook-server/0.log"
Mar 12 18:55:12.160165 master-0 kubenswrapper[29097]: I0312 18:55:12.160119 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6759bbdbf5-h458c_868ee97c-e6d1-48d8-9fd0-cf9b246480cb/manager/0.log"
Mar 12 18:55:12.175954 master-0 kubenswrapper[29097]: I0312 18:55:12.175910 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-cff58f8c6-zmgcc_8f450cb2-6f8f-455f-9dce-db01d41482ad/webhook-server/0.log"
Mar 12 18:55:12.654326 master-0 kubenswrapper[29097]: I0312 18:55:12.654274 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-49mjh_635ef9b0-f0bb-48be-a32c-99f2dc90e01f/speaker/0.log"
Mar 12 18:55:12.661821 master-0 kubenswrapper[29097]: I0312 18:55:12.661467 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-49mjh_635ef9b0-f0bb-48be-a32c-99f2dc90e01f/kube-rbac-proxy/0.log"
Mar 12 18:55:12.988414 master-0 kubenswrapper[29097]: I0312 18:55:12.988359 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcdctl/0.log"
Mar 12 18:55:13.160787 master-0 kubenswrapper[29097]: I0312 18:55:13.160720 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w4n8j/must-gather-w6z8w" event={"ID":"af1a627c-4b6c-4878-b1a8-ce2a30e0da56","Type":"ContainerStarted","Data":"ac896886519b11beaa7563602b8a2c650dde284d66df6cdf8ff9399b3f4ad494"}
Mar 12 18:55:13.319050 master-0 kubenswrapper[29097]: I0312 18:55:13.319004 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd/0.log"
Mar 12 18:55:13.335467 master-0 kubenswrapper[29097]: I0312 18:55:13.335423 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-metrics/0.log"
Mar 12 18:55:13.351043 master-0 kubenswrapper[29097]: I0312 18:55:13.350997 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-readyz/0.log"
Mar 12 18:55:13.362796 master-0 kubenswrapper[29097]: I0312 18:55:13.362755 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-rev/0.log"
Mar 12 18:55:13.392558 master-0 kubenswrapper[29097]: I0312 18:55:13.390139 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/setup/0.log"
Mar 12 18:55:13.400212 master-0 kubenswrapper[29097]: I0312 18:55:13.399876 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-ensure-env-vars/0.log"
Mar 12 18:55:13.415726 master-0 kubenswrapper[29097]: I0312 18:55:13.414777 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-resources-copy/0.log"
Mar 12 18:55:13.453590 master-0 kubenswrapper[29097]: I0312 18:55:13.453485 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_e418d797-2c31-404b-9dc3-251399e42542/installer/0.log"
Mar 12 18:55:13.486801 master-0 kubenswrapper[29097]: I0312 18:55:13.486755 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_30102cc9-45f8-46f8-bb34-eec48fdb297d/installer/0.log"
Mar 12 18:55:14.172963 master-0 kubenswrapper[29097]: I0312 18:55:14.172893 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w4n8j/must-gather-w6z8w" event={"ID":"af1a627c-4b6c-4878-b1a8-ce2a30e0da56","Type":"ContainerStarted","Data":"6f2b6bfaf4cf4fd0be9811be7627b68f0728149593f78d843a6957317b1f6529"}
Mar 12 18:55:14.210371 master-0 kubenswrapper[29097]: I0312 18:55:14.210296 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-w4n8j/must-gather-w6z8w" podStartSLOduration=2.8824542380000002 podStartE2EDuration="14.2102766s" podCreationTimestamp="2026-03-12 18:55:00 +0000 UTC" firstStartedPulling="2026-03-12 18:55:01.419654418 +0000 UTC m=+1540.973634535" lastFinishedPulling="2026-03-12 18:55:12.7474768 +0000 UTC m=+1552.301456897" observedRunningTime="2026-03-12 18:55:14.193874932 +0000 UTC m=+1553.747855029" watchObservedRunningTime="2026-03-12 18:55:14.2102766 +0000 UTC m=+1553.764256697"
Mar 12 18:55:14.387231 master-0 kubenswrapper[29097]: I0312 18:55:14.387123 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5d5c7bc8d7-tldhf_25ded464-44eb-4070-83c7-245528c9ba11/oauth-openshift/0.log"
Mar 12 18:55:14.750395 master-0 kubenswrapper[29097]: I0312 18:55:14.750267 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-g257x_2a4a981c-9454-4e1f-951e-1a62737659cc/assisted-installer-controller/0.log"
Mar 12 18:55:15.675379 master-0 kubenswrapper[29097]: I0312 18:55:15.675324 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ljw8b_062f1b21-2ffc-47da-8334-427c3b2a1a90/authentication-operator/4.log"
Mar 12 18:55:15.717376 master-0 kubenswrapper[29097]: I0312 18:55:15.715657 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ljw8b_062f1b21-2ffc-47da-8334-427c3b2a1a90/authentication-operator/5.log"
Mar 12 18:55:16.953534 master-0 kubenswrapper[29097]: I0312 18:55:16.953258 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-79f8cd6fdd-79bhf_518ffff8-8119-41be-8b76-ce49d5751254/router/2.log"
Mar 12 18:55:16.963526 master-0 kubenswrapper[29097]: I0312 18:55:16.957114 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-79f8cd6fdd-79bhf_518ffff8-8119-41be-8b76-ce49d5751254/router/3.log"
Mar 12 18:55:17.567668 master-0 kubenswrapper[29097]: I0312 18:55:17.567574 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w4n8j/master-0-debug-4sjwq"]
Mar 12 18:55:17.569349 master-0 kubenswrapper[29097]: I0312 18:55:17.569327 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w4n8j/master-0-debug-4sjwq"
Mar 12 18:55:17.636130 master-0 kubenswrapper[29097]: I0312 18:55:17.636070 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2bf92d4a-4c47-4229-a53b-eec0cb0e0b04-host\") pod \"master-0-debug-4sjwq\" (UID: \"2bf92d4a-4c47-4229-a53b-eec0cb0e0b04\") " pod="openshift-must-gather-w4n8j/master-0-debug-4sjwq"
Mar 12 18:55:17.636339 master-0 kubenswrapper[29097]: I0312 18:55:17.636218 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htgp2\" (UniqueName: \"kubernetes.io/projected/2bf92d4a-4c47-4229-a53b-eec0cb0e0b04-kube-api-access-htgp2\") pod \"master-0-debug-4sjwq\" (UID: \"2bf92d4a-4c47-4229-a53b-eec0cb0e0b04\") " pod="openshift-must-gather-w4n8j/master-0-debug-4sjwq"
Mar 12 18:55:17.740041 master-0 kubenswrapper[29097]: I0312 18:55:17.738875 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2bf92d4a-4c47-4229-a53b-eec0cb0e0b04-host\") pod \"master-0-debug-4sjwq\" (UID: \"2bf92d4a-4c47-4229-a53b-eec0cb0e0b04\") " pod="openshift-must-gather-w4n8j/master-0-debug-4sjwq"
Mar 12 18:55:17.740041 master-0 kubenswrapper[29097]: I0312 18:55:17.739017 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htgp2\" (UniqueName: \"kubernetes.io/projected/2bf92d4a-4c47-4229-a53b-eec0cb0e0b04-kube-api-access-htgp2\") pod \"master-0-debug-4sjwq\" (UID: \"2bf92d4a-4c47-4229-a53b-eec0cb0e0b04\") " pod="openshift-must-gather-w4n8j/master-0-debug-4sjwq"
Mar 12 18:55:17.740310 master-0 kubenswrapper[29097]: I0312 18:55:17.740269 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2bf92d4a-4c47-4229-a53b-eec0cb0e0b04-host\") pod \"master-0-debug-4sjwq\" (UID: \"2bf92d4a-4c47-4229-a53b-eec0cb0e0b04\") " pod="openshift-must-gather-w4n8j/master-0-debug-4sjwq"
Mar 12 18:55:17.784414 master-0 kubenswrapper[29097]: I0312 18:55:17.784353 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htgp2\" (UniqueName: \"kubernetes.io/projected/2bf92d4a-4c47-4229-a53b-eec0cb0e0b04-kube-api-access-htgp2\") pod \"master-0-debug-4sjwq\" (UID: \"2bf92d4a-4c47-4229-a53b-eec0cb0e0b04\") " pod="openshift-must-gather-w4n8j/master-0-debug-4sjwq"
Mar 12 18:55:17.833410 master-0 kubenswrapper[29097]: I0312 18:55:17.817638 29097 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv"]
Mar 12 18:55:17.833410 master-0 kubenswrapper[29097]: I0312 18:55:17.820706 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv"
Mar 12 18:55:17.846552 master-0 kubenswrapper[29097]: I0312 18:55:17.845656 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv"]
Mar 12 18:55:17.848287 master-0 kubenswrapper[29097]: I0312 18:55:17.848178 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4537af4-382f-43df-a731-39abe1a02223-lib-modules\") pod \"perf-node-gather-daemonset-skhtv\" (UID: \"d4537af4-382f-43df-a731-39abe1a02223\") " pod="openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv"
Mar 12 18:55:17.848364 master-0 kubenswrapper[29097]: I0312 18:55:17.848333 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d4537af4-382f-43df-a731-39abe1a02223-sys\") pod \"perf-node-gather-daemonset-skhtv\" (UID: \"d4537af4-382f-43df-a731-39abe1a02223\") " pod="openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv"
Mar 12 18:55:17.848725 master-0 kubenswrapper[29097]: I0312 18:55:17.848597 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/d4537af4-382f-43df-a731-39abe1a02223-podres\") pod \"perf-node-gather-daemonset-skhtv\" (UID: \"d4537af4-382f-43df-a731-39abe1a02223\") " pod="openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv"
Mar 12 18:55:17.848725 master-0 kubenswrapper[29097]: I0312 18:55:17.848636 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/d4537af4-382f-43df-a731-39abe1a02223-proc\") pod \"perf-node-gather-daemonset-skhtv\" (UID: \"d4537af4-382f-43df-a731-39abe1a02223\") " pod="openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv"
Mar 12 18:55:17.848885 master-0 kubenswrapper[29097]: I0312 18:55:17.848858 29097 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdb8t\" (UniqueName: \"kubernetes.io/projected/d4537af4-382f-43df-a731-39abe1a02223-kube-api-access-wdb8t\") pod \"perf-node-gather-daemonset-skhtv\" (UID: \"d4537af4-382f-43df-a731-39abe1a02223\") " pod="openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv"
Mar 12 18:55:17.886275 master-0 kubenswrapper[29097]: I0312 18:55:17.886220 29097 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w4n8j/master-0-debug-4sjwq"
Mar 12 18:55:17.917391 master-0 kubenswrapper[29097]: I0312 18:55:17.917350 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6cb976c975-4sxlg_fb529297-b3de-4167-a91e-0a63725b3b0f/oauth-apiserver/0.log"
Mar 12 18:55:17.933442 master-0 kubenswrapper[29097]: I0312 18:55:17.928090 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6cb976c975-4sxlg_fb529297-b3de-4167-a91e-0a63725b3b0f/fix-audit-permissions/0.log"
Mar 12 18:55:17.952466 master-0 kubenswrapper[29097]: I0312 18:55:17.952421 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4537af4-382f-43df-a731-39abe1a02223-lib-modules\") pod \"perf-node-gather-daemonset-skhtv\" (UID: \"d4537af4-382f-43df-a731-39abe1a02223\") " pod="openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv"
Mar 12 18:55:17.952612 master-0 kubenswrapper[29097]: I0312 18:55:17.952511 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d4537af4-382f-43df-a731-39abe1a02223-sys\") pod \"perf-node-gather-daemonset-skhtv\" (UID: \"d4537af4-382f-43df-a731-39abe1a02223\") " pod="openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv"
Mar 12 18:55:17.952714 master-0 kubenswrapper[29097]: I0312 18:55:17.952701 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/d4537af4-382f-43df-a731-39abe1a02223-podres\") pod \"perf-node-gather-daemonset-skhtv\" (UID: \"d4537af4-382f-43df-a731-39abe1a02223\") " pod="openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv"
Mar 12 18:55:17.952752 master-0 kubenswrapper[29097]: I0312 18:55:17.952720 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/d4537af4-382f-43df-a731-39abe1a02223-proc\") pod \"perf-node-gather-daemonset-skhtv\" (UID: \"d4537af4-382f-43df-a731-39abe1a02223\") " pod="openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv"
Mar 12 18:55:17.952874 master-0 kubenswrapper[29097]: I0312 18:55:17.952806 29097 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdb8t\" (UniqueName: \"kubernetes.io/projected/d4537af4-382f-43df-a731-39abe1a02223-kube-api-access-wdb8t\") pod \"perf-node-gather-daemonset-skhtv\" (UID: \"d4537af4-382f-43df-a731-39abe1a02223\") " pod="openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv"
Mar 12 18:55:17.952952 master-0 kubenswrapper[29097]: I0312 18:55:17.952935 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/d4537af4-382f-43df-a731-39abe1a02223-proc\") pod \"perf-node-gather-daemonset-skhtv\" (UID: \"d4537af4-382f-43df-a731-39abe1a02223\") " pod="openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv"
Mar 12 18:55:17.953273 master-0 kubenswrapper[29097]: I0312 18:55:17.953253 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/d4537af4-382f-43df-a731-39abe1a02223-podres\") pod \"perf-node-gather-daemonset-skhtv\" (UID: \"d4537af4-382f-43df-a731-39abe1a02223\") " pod="openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv"
Mar 12 18:55:17.953332 master-0 kubenswrapper[29097]: I0312 18:55:17.953293 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d4537af4-382f-43df-a731-39abe1a02223-sys\") pod \"perf-node-gather-daemonset-skhtv\" (UID: \"d4537af4-382f-43df-a731-39abe1a02223\") " pod="openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv"
Mar 12 18:55:17.953375 master-0 kubenswrapper[29097]: I0312 18:55:17.953341 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4537af4-382f-43df-a731-39abe1a02223-lib-modules\") pod \"perf-node-gather-daemonset-skhtv\" (UID: \"d4537af4-382f-43df-a731-39abe1a02223\") " pod="openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv"
Mar 12 18:55:17.971541 master-0 kubenswrapper[29097]: I0312 18:55:17.968797 29097 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdb8t\" (UniqueName: \"kubernetes.io/projected/d4537af4-382f-43df-a731-39abe1a02223-kube-api-access-wdb8t\") pod \"perf-node-gather-daemonset-skhtv\" (UID: \"d4537af4-382f-43df-a731-39abe1a02223\") " pod="openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv"
Mar 12 18:55:18.186883 master-0 kubenswrapper[29097]: I0312 18:55:18.186774 29097 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv" Mar 12 18:55:18.259960 master-0 kubenswrapper[29097]: I0312 18:55:18.259905 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w4n8j/master-0-debug-4sjwq" event={"ID":"2bf92d4a-4c47-4229-a53b-eec0cb0e0b04","Type":"ContainerStarted","Data":"fe572235cde06a4960ff2bec6d13d460041da3655f3ea09ee611790f538ff312"} Mar 12 18:55:19.446681 master-0 kubenswrapper[29097]: I0312 18:55:19.446197 29097 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv"] Mar 12 18:55:19.700902 master-0 kubenswrapper[29097]: I0312 18:55:19.700777 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-hkfnq_aee40f88-83e4-45c8-8331-969943f9f9aa/kube-rbac-proxy/0.log" Mar 12 18:55:19.741915 master-0 kubenswrapper[29097]: I0312 18:55:19.739402 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-hkfnq_aee40f88-83e4-45c8-8331-969943f9f9aa/cluster-autoscaler-operator/0.log" Mar 12 18:55:19.754163 master-0 kubenswrapper[29097]: I0312 18:55:19.754067 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-2psgb_e5fb0152-3efd-4000-bce3-fa90b75316ae/cluster-baremetal-operator/2.log" Mar 12 18:55:19.754844 master-0 kubenswrapper[29097]: I0312 18:55:19.754804 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-2psgb_e5fb0152-3efd-4000-bce3-fa90b75316ae/cluster-baremetal-operator/3.log" Mar 12 18:55:19.775573 master-0 kubenswrapper[29097]: I0312 18:55:19.771381 29097 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-2psgb_e5fb0152-3efd-4000-bce3-fa90b75316ae/baremetal-kube-rbac-proxy/0.log" Mar 12 18:55:19.799296 master-0 kubenswrapper[29097]: I0312 18:55:19.799252 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-zd9gm_34cbf061-4c76-476e-bed9-0a133c744862/control-plane-machine-set-operator/1.log" Mar 12 18:55:19.799485 master-0 kubenswrapper[29097]: I0312 18:55:19.799346 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-zd9gm_34cbf061-4c76-476e-bed9-0a133c744862/control-plane-machine-set-operator/0.log" Mar 12 18:55:19.831489 master-0 kubenswrapper[29097]: I0312 18:55:19.830400 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-gnrzd_4687cf53-55d7-42b7-b24d-e57da3989fd6/kube-rbac-proxy/0.log" Mar 12 18:55:19.853115 master-0 kubenswrapper[29097]: I0312 18:55:19.853040 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-gnrzd_4687cf53-55d7-42b7-b24d-e57da3989fd6/machine-api-operator/0.log" Mar 12 18:55:20.087586 master-0 kubenswrapper[29097]: I0312 18:55:20.083234 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-db-create-vwzjx_86a5b88e-2b06-40db-b90e-e027e9876bfa/mariadb-database-create/0.log" Mar 12 18:55:20.103552 master-0 kubenswrapper[29097]: I0312 18:55:20.103502 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-eac7-account-create-update-tvr2h_857a7bc9-0e2c-48b8-bc16-d1c0d409049e/mariadb-account-create-update/0.log" Mar 12 18:55:20.214867 master-0 kubenswrapper[29097]: I0312 18:55:20.214819 29097 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-fa62f-api-0_f34dc271-d884-440c-bb41-6ddf5ca8d2c2/cinder-fa62f-api-log/0.log" Mar 12 18:55:20.232297 master-0 kubenswrapper[29097]: I0312 18:55:20.232218 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-fa62f-api-0_f34dc271-d884-440c-bb41-6ddf5ca8d2c2/cinder-api/0.log" Mar 12 18:55:20.327532 master-0 kubenswrapper[29097]: I0312 18:55:20.325890 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv" event={"ID":"d4537af4-382f-43df-a731-39abe1a02223","Type":"ContainerStarted","Data":"aea77d36c15ec2d755b018536ee218ccc134b4c3f5b885c82b3bf582452fd9a1"} Mar 12 18:55:20.327532 master-0 kubenswrapper[29097]: I0312 18:55:20.325959 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv" event={"ID":"d4537af4-382f-43df-a731-39abe1a02223","Type":"ContainerStarted","Data":"ec8939800fa9c45c6c087ac45b5c7d6a617fa1fbafe460712653798823826ffd"} Mar 12 18:55:20.327532 master-0 kubenswrapper[29097]: I0312 18:55:20.327384 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv" Mar 12 18:55:20.335044 master-0 kubenswrapper[29097]: I0312 18:55:20.334949 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-fa62f-backup-0_b6459537-fb89-4b67-8478-89b1dd4a397e/cinder-backup/0.log" Mar 12 18:55:20.350404 master-0 kubenswrapper[29097]: I0312 18:55:20.350303 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-fa62f-backup-0_b6459537-fb89-4b67-8478-89b1dd4a397e/probe/0.log" Mar 12 18:55:20.355779 master-0 kubenswrapper[29097]: I0312 18:55:20.355726 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv" podStartSLOduration=3.355710123 podStartE2EDuration="3.355710123s" 
podCreationTimestamp="2026-03-12 18:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 18:55:20.341867599 +0000 UTC m=+1559.895847696" watchObservedRunningTime="2026-03-12 18:55:20.355710123 +0000 UTC m=+1559.909690210" Mar 12 18:55:20.359291 master-0 kubenswrapper[29097]: I0312 18:55:20.359264 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-fa62f-db-sync-xn4dx_d7af9e78-07ea-42c2-8d0a-d73fe46d8d36/cinder-fa62f-db-sync/0.log" Mar 12 18:55:20.457974 master-0 kubenswrapper[29097]: I0312 18:55:20.457922 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-fa62f-scheduler-0_eae648d4-64f1-4b96-aec2-dd0410be0ffd/cinder-scheduler/0.log" Mar 12 18:55:20.467796 master-0 kubenswrapper[29097]: I0312 18:55:20.467714 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-fa62f-scheduler-0_eae648d4-64f1-4b96-aec2-dd0410be0ffd/probe/0.log" Mar 12 18:55:20.541501 master-0 kubenswrapper[29097]: I0312 18:55:20.541462 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-fa62f-volume-lvm-iscsi-0_91270577-6388-4208-afd7-bdeb3edc3d99/cinder-volume/0.log" Mar 12 18:55:20.552651 master-0 kubenswrapper[29097]: I0312 18:55:20.552579 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-fa62f-volume-lvm-iscsi-0_91270577-6388-4208-afd7-bdeb3edc3d99/probe/0.log" Mar 12 18:55:20.563987 master-0 kubenswrapper[29097]: I0312 18:55:20.563951 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-94597dfc-f5psj_3c8759f5-06c8-4292-ac0b-13fae5fe1b3b/dnsmasq-dns/0.log" Mar 12 18:55:20.571212 master-0 kubenswrapper[29097]: I0312 18:55:20.571175 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-94597dfc-f5psj_3c8759f5-06c8-4292-ac0b-13fae5fe1b3b/init/0.log" Mar 12 18:55:20.689541 master-0 
kubenswrapper[29097]: I0312 18:55:20.689432 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-16afb-default-external-api-0_884251a4-69be-405c-90a2-b75d9970b52e/glance-log/0.log" Mar 12 18:55:20.702751 master-0 kubenswrapper[29097]: I0312 18:55:20.702647 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-16afb-default-external-api-0_884251a4-69be-405c-90a2-b75d9970b52e/glance-httpd/0.log" Mar 12 18:55:20.795398 master-0 kubenswrapper[29097]: I0312 18:55:20.795334 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-16afb-default-internal-api-0_b0c9885f-3ca4-4031-b894-899b09eb5b91/glance-log/0.log" Mar 12 18:55:20.809378 master-0 kubenswrapper[29097]: I0312 18:55:20.809310 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-16afb-default-internal-api-0_b0c9885f-3ca4-4031-b894-899b09eb5b91/glance-httpd/0.log" Mar 12 18:55:20.821958 master-0 kubenswrapper[29097]: I0312 18:55:20.821784 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-1b8b-account-create-update-6trjd_df47b1d0-5d7e-4e88-ab52-4936dcfa4e19/mariadb-account-create-update/0.log" Mar 12 18:55:20.830154 master-0 kubenswrapper[29097]: I0312 18:55:20.830054 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-db-create-kvbg4_70840f07-cd5a-450b-ad71-40f71f42d2ae/mariadb-database-create/0.log" Mar 12 18:55:20.847737 master-0 kubenswrapper[29097]: I0312 18:55:20.847622 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-db-sync-cmn4b_2557ae75-2d67-4831-ace5-a6e46d581c7f/glance-db-sync/0.log" Mar 12 18:55:20.862922 master-0 kubenswrapper[29097]: I0312 18:55:20.862872 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-0e3d-account-create-update-pb9cg_22939625-8570-4e99-9070-5031a539e183/mariadb-account-create-update/0.log" Mar 12 18:55:20.878712 master-0 kubenswrapper[29097]: 
I0312 18:55:20.877758 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-6b755b479c-nl884_734222f4-1b3b-4cb5-9ad3-029a54640f81/ironic-api-log/0.log" Mar 12 18:55:20.900380 master-0 kubenswrapper[29097]: I0312 18:55:20.900292 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-6b755b479c-nl884_734222f4-1b3b-4cb5-9ad3-029a54640f81/ironic-api/0.log" Mar 12 18:55:20.922160 master-0 kubenswrapper[29097]: I0312 18:55:20.918001 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-6b755b479c-nl884_734222f4-1b3b-4cb5-9ad3-029a54640f81/init/0.log" Mar 12 18:55:20.948675 master-0 kubenswrapper[29097]: I0312 18:55:20.948545 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_7fb8fbc7-949d-4526-8456-fbf8277cee2f/ironic-conductor/0.log" Mar 12 18:55:20.964542 master-0 kubenswrapper[29097]: I0312 18:55:20.961103 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_7fb8fbc7-949d-4526-8456-fbf8277cee2f/httpboot/0.log" Mar 12 18:55:20.969711 master-0 kubenswrapper[29097]: I0312 18:55:20.969676 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_7fb8fbc7-949d-4526-8456-fbf8277cee2f/dnsmasq/0.log" Mar 12 18:55:20.977603 master-0 kubenswrapper[29097]: I0312 18:55:20.977567 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_7fb8fbc7-949d-4526-8456-fbf8277cee2f/init/0.log" Mar 12 18:55:20.987896 master-0 kubenswrapper[29097]: I0312 18:55:20.987841 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_7fb8fbc7-949d-4526-8456-fbf8277cee2f/ironic-python-agent-init/0.log" Mar 12 18:55:21.295392 master-0 kubenswrapper[29097]: E0312 18:55:21.295335 29097 manager.go:1116] Failed to create existing container: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33feec78_4592_4343_965b_aa1b7044fcf3.slice/crio-26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Error finding container 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547: Status 404 returned error can't find the container with id 26207cd3d77e1d6fa98adf0c1b990f36671e7d4189e28f842823a331e489c547 Mar 12 18:55:21.634736 master-0 kubenswrapper[29097]: I0312 18:55:21.634628 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-z9srl_ee4c1949-96b4-4444-9675-9df1d46f681e/cluster-cloud-controller-manager/1.log" Mar 12 18:55:21.635406 master-0 kubenswrapper[29097]: I0312 18:55:21.634989 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-z9srl_ee4c1949-96b4-4444-9675-9df1d46f681e/cluster-cloud-controller-manager/0.log" Mar 12 18:55:21.658691 master-0 kubenswrapper[29097]: I0312 18:55:21.658644 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-z9srl_ee4c1949-96b4-4444-9675-9df1d46f681e/config-sync-controllers/0.log" Mar 12 18:55:21.662382 master-0 kubenswrapper[29097]: I0312 18:55:21.662336 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-z9srl_ee4c1949-96b4-4444-9675-9df1d46f681e/config-sync-controllers/1.log" Mar 12 18:55:21.693245 master-0 kubenswrapper[29097]: I0312 18:55:21.693194 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-z9srl_ee4c1949-96b4-4444-9675-9df1d46f681e/kube-rbac-proxy/0.log" Mar 12 18:55:21.927841 master-0 
kubenswrapper[29097]: I0312 18:55:21.927724 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_7fb8fbc7-949d-4526-8456-fbf8277cee2f/pxe-init/0.log" Mar 12 18:55:21.939400 master-0 kubenswrapper[29097]: I0312 18:55:21.939359 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-create-4v6wn_784b2b8e-d340-4f65-8abb-ad196b08ed6f/mariadb-database-create/0.log" Mar 12 18:55:21.965781 master-0 kubenswrapper[29097]: I0312 18:55:21.965727 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-mzfh7_64b3a2fa-455e-45a6-a3b4-9763b68a8faa/ironic-db-sync/0.log" Mar 12 18:55:21.978061 master-0 kubenswrapper[29097]: I0312 18:55:21.978019 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-mzfh7_64b3a2fa-455e-45a6-a3b4-9763b68a8faa/init/0.log" Mar 12 18:55:22.023004 master-0 kubenswrapper[29097]: I0312 18:55:22.021884 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_78880e0c-4006-4417-be0e-5dc39d5bf43f/ironic-inspector-httpd/0.log" Mar 12 18:55:22.044469 master-0 kubenswrapper[29097]: I0312 18:55:22.044417 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_78880e0c-4006-4417-be0e-5dc39d5bf43f/ironic-inspector/0.log" Mar 12 18:55:22.064987 master-0 kubenswrapper[29097]: I0312 18:55:22.064951 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_78880e0c-4006-4417-be0e-5dc39d5bf43f/inspector-httpboot/0.log" Mar 12 18:55:22.075963 master-0 kubenswrapper[29097]: I0312 18:55:22.075916 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_78880e0c-4006-4417-be0e-5dc39d5bf43f/ramdisk-logs/0.log" Mar 12 18:55:22.086938 master-0 kubenswrapper[29097]: I0312 18:55:22.086906 29097 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-inspector-0_78880e0c-4006-4417-be0e-5dc39d5bf43f/inspector-dnsmasq/0.log" Mar 12 18:55:22.096787 master-0 kubenswrapper[29097]: I0312 18:55:22.096749 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_78880e0c-4006-4417-be0e-5dc39d5bf43f/ironic-python-agent-init/0.log" Mar 12 18:55:22.111381 master-0 kubenswrapper[29097]: I0312 18:55:22.111333 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_78880e0c-4006-4417-be0e-5dc39d5bf43f/inspector-pxe-init/0.log" Mar 12 18:55:22.121746 master-0 kubenswrapper[29097]: I0312 18:55:22.121718 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-cea2-account-create-update-8lqsg_5d607db3-da18-458f-b364-187dc0ddc676/mariadb-account-create-update/0.log" Mar 12 18:55:22.133600 master-0 kubenswrapper[29097]: I0312 18:55:22.133549 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-db-create-d4xb7_07e88358-2e50-4143-beec-5f4a0698dcca/mariadb-database-create/0.log" Mar 12 18:55:22.144450 master-0 kubenswrapper[29097]: I0312 18:55:22.144397 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-db-sync-pq2f7_15059f12-694b-4ece-9321-0d18e7c95c04/ironic-inspector-db-sync/0.log" Mar 12 18:55:22.160704 master-0 kubenswrapper[29097]: I0312 18:55:22.160653 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-7cb69d965b-d79tc_87a19dc7-5415-4d3d-a22e-9e2524a67e38/ironic-neutron-agent/3.log" Mar 12 18:55:22.161919 master-0 kubenswrapper[29097]: I0312 18:55:22.161874 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-7cb69d965b-d79tc_87a19dc7-5415-4d3d-a22e-9e2524a67e38/ironic-neutron-agent/2.log" Mar 12 18:55:22.178272 master-0 kubenswrapper[29097]: I0312 18:55:22.178167 29097 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-409f-account-create-update-8s7vs_90e68749-42ba-42d0-8ead-4517f6aae601/mariadb-account-create-update/0.log" Mar 12 18:55:22.231717 master-0 kubenswrapper[29097]: I0312 18:55:22.231671 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-798795c956-754f2_e8872d55-c0fd-45fd-9060-2f29e85e8f5d/keystone-api/0.log" Mar 12 18:55:22.253939 master-0 kubenswrapper[29097]: I0312 18:55:22.253884 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-bootstrap-t9v2r_9c46351d-ae56-4f9f-ba28-1389bc23a289/keystone-bootstrap/0.log" Mar 12 18:55:22.263455 master-0 kubenswrapper[29097]: I0312 18:55:22.263398 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-create-psnrd_054314ce-7598-4127-bde5-a98ceeeae7f5/mariadb-database-create/0.log" Mar 12 18:55:22.275288 master-0 kubenswrapper[29097]: I0312 18:55:22.275241 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-sync-bb2r7_26884ddb-c62f-429a-a6e6-0c7cb20ffc8d/keystone-db-sync/0.log" Mar 12 18:55:24.858116 master-0 kubenswrapper[29097]: I0312 18:55:24.858050 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-vk7lr_b2c6cd11-b1ed-4fed-a4ce-4eee0af20868/kube-rbac-proxy/0.log" Mar 12 18:55:24.946267 master-0 kubenswrapper[29097]: I0312 18:55:24.946221 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-vk7lr_b2c6cd11-b1ed-4fed-a4ce-4eee0af20868/cloud-credential-operator/0.log" Mar 12 18:55:27.038262 master-0 kubenswrapper[29097]: I0312 18:55:27.038214 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-tjp2j_37cd9c0a-697e-4e67-932b-b331ff77c8c0/openshift-config-operator/2.log" Mar 12 18:55:27.051654 master-0 
kubenswrapper[29097]: I0312 18:55:27.051398 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-tjp2j_37cd9c0a-697e-4e67-932b-b331ff77c8c0/openshift-config-operator/3.log" Mar 12 18:55:27.070385 master-0 kubenswrapper[29097]: I0312 18:55:27.070333 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-tjp2j_37cd9c0a-697e-4e67-932b-b331ff77c8c0/openshift-api/0.log" Mar 12 18:55:28.236158 master-0 kubenswrapper[29097]: I0312 18:55:28.236088 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-6c7fb6b958-jq5c9_4cd31e59-6cb6-42b7-8384-56a1d9d8a482/console-operator/0.log" Mar 12 18:55:28.249864 master-0 kubenswrapper[29097]: I0312 18:55:28.249827 29097 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-w4n8j/perf-node-gather-daemonset-skhtv" Mar 12 18:55:29.188099 master-0 kubenswrapper[29097]: I0312 18:55:29.188053 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-54dfb9c5c7-4blxh_8a23285d-7a0d-4bf6-9e97-80a59a271486/console/0.log" Mar 12 18:55:29.224505 master-0 kubenswrapper[29097]: I0312 18:55:29.224451 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-84f57b9877-lsc92_60ba51da-3daf-4608-9269-b10211a184e9/download-server/0.log" Mar 12 18:55:29.640430 master-0 kubenswrapper[29097]: I0312 18:55:29.640184 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_3c377976-30da-4335-b35e-e2e65789e21d/memcached/0.log" Mar 12 18:55:29.654610 master-0 kubenswrapper[29097]: I0312 18:55:29.651628 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-54fb-account-create-update-bbqlw_2df983bf-3a2e-4d67-80e2-eb309ec03afd/mariadb-account-create-update/0.log" Mar 12 18:55:29.772020 master-0 
kubenswrapper[29097]: I0312 18:55:29.771965 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8655bff577-lrzbz_ef9a9c97-1ce8-42ef-b4de-de87dbf5524a/neutron-api/0.log" Mar 12 18:55:29.793155 master-0 kubenswrapper[29097]: I0312 18:55:29.792982 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8655bff577-lrzbz_ef9a9c97-1ce8-42ef-b4de-de87dbf5524a/neutron-httpd/0.log" Mar 12 18:55:29.871211 master-0 kubenswrapper[29097]: I0312 18:55:29.871106 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-db-create-brvsm_6ff2681b-2ab1-4a46-a0d3-2bbcab0695dd/mariadb-database-create/0.log" Mar 12 18:55:29.994603 master-0 kubenswrapper[29097]: I0312 18:55:29.992293 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-db-sync-ptnfn_1cc37b4c-dc0e-4237-a3b5-5d4776b2a9f1/neutron-db-sync/0.log" Mar 12 18:55:30.082328 master-0 kubenswrapper[29097]: I0312 18:55:30.082271 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_bc6da5f0-599f-4be3-a369-5e4c452e1e8d/nova-api-log/0.log" Mar 12 18:55:30.149379 master-0 kubenswrapper[29097]: I0312 18:55:30.149302 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_bc6da5f0-599f-4be3-a369-5e4c452e1e8d/nova-api-api/0.log" Mar 12 18:55:30.163457 master-0 kubenswrapper[29097]: I0312 18:55:30.161647 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-9671-account-create-update-hzxlx_a9855cba-3217-439f-8c5e-32b3064a3330/mariadb-account-create-update/0.log" Mar 12 18:55:30.177630 master-0 kubenswrapper[29097]: I0312 18:55:30.176375 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-db-create-fkj7f_40cf0854-144e-474c-a1ad-e588b7df2c68/mariadb-database-create/0.log" Mar 12 18:55:30.187587 master-0 kubenswrapper[29097]: I0312 18:55:30.186391 29097 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-703e-account-create-update-wl26r_62db1d8c-39c2-47ea-bac4-e9a0d7febb99/mariadb-account-create-update/0.log" Mar 12 18:55:30.200757 master-0 kubenswrapper[29097]: I0312 18:55:30.200708 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-cell-mapping-p89p7_ebb59236-a534-4ed8-9f62-1be13d1bdaf9/nova-manage/0.log" Mar 12 18:55:30.316356 master-0 kubenswrapper[29097]: I0312 18:55:30.316292 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_9d2f9a34-cf61-4193-a964-a4a2cbf0adb6/nova-cell0-conductor-conductor/0.log" Mar 12 18:55:30.336017 master-0 kubenswrapper[29097]: I0312 18:55:30.335971 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-db-sync-pz2nb_04fa4fda-ab43-4c4e-be26-48fc6ef9fc48/nova-cell0-conductor-db-sync/0.log" Mar 12 18:55:30.346715 master-0 kubenswrapper[29097]: I0312 18:55:30.346678 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-db-create-tx4kq_1b23c624-7392-41d0-a909-67f71b5e16ce/mariadb-database-create/0.log" Mar 12 18:55:30.359626 master-0 kubenswrapper[29097]: I0312 18:55:30.358836 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-67d1-account-create-update-pllmv_88b711ca-1327-4ce4-83c1-c5bf5b42cc5a/mariadb-account-create-update/0.log" Mar 12 18:55:30.381104 master-0 kubenswrapper[29097]: I0312 18:55:30.379467 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-cell-mapping-68m4m_cac30a96-44f3-4fdd-9fe5-e64e5c61686b/nova-manage/0.log" Mar 12 18:55:30.433424 master-0 kubenswrapper[29097]: I0312 18:55:30.433385 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-compute-ironic-compute-0_3ded41e3-0282-4ae3-871f-46803445326f/nova-cell1-compute-ironic-compute-compute/0.log" Mar 12 18:55:30.457524 master-0 kubenswrapper[29097]: I0312 18:55:30.457455 29097 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-88gzm_1287cbb9-c9f6-48d2-9fda-f4464074e41b/cluster-storage-operator/0.log" Mar 12 18:55:30.479336 master-0 kubenswrapper[29097]: I0312 18:55:30.479284 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-2ltx9_bce831df-c604-4608-a24e-b14d62c5287a/snapshot-controller/4.log" Mar 12 18:55:30.484617 master-0 kubenswrapper[29097]: I0312 18:55:30.484576 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-2ltx9_bce831df-c604-4608-a24e-b14d62c5287a/snapshot-controller/3.log" Mar 12 18:55:30.510667 master-0 kubenswrapper[29097]: I0312 18:55:30.510376 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5685fbc7d-649db_45aa4887-c913-4ece-ae34-fcde33832621/csi-snapshot-controller-operator/0.log" Mar 12 18:55:30.515470 master-0 kubenswrapper[29097]: I0312 18:55:30.515428 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5685fbc7d-649db_45aa4887-c913-4ece-ae34-fcde33832621/csi-snapshot-controller-operator/1.log" Mar 12 18:55:30.544381 master-0 kubenswrapper[29097]: I0312 18:55:30.541860 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_4b4f4e23-ac5a-40db-a2bd-e0c71a571fb8/nova-cell1-conductor-conductor/0.log" Mar 12 18:55:30.555408 master-0 kubenswrapper[29097]: I0312 18:55:30.555360 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-db-sync-lfv8d_b4177d20-cc30-4b7f-872e-2c8692ee6b8e/nova-cell1-conductor-db-sync/0.log" Mar 12 18:55:30.566420 master-0 kubenswrapper[29097]: I0312 18:55:30.566094 29097 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-db-create-trlwx_8136b672-acef-4cb2-8316-358d20c26489/mariadb-database-create/0.log"
Mar 12 18:55:30.584753 master-0 kubenswrapper[29097]: I0312 18:55:30.584717 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-host-discover-rdkpz_c9f3949a-aa88-40a4-b349-be8fb7106a61/nova-manage/0.log"
Mar 12 18:55:30.646188 master-0 kubenswrapper[29097]: I0312 18:55:30.645800 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_13894bb3-4d17-4821-ba67-6563c3dc676c/nova-cell1-novncproxy-novncproxy/0.log"
Mar 12 18:55:30.719544 master-0 kubenswrapper[29097]: I0312 18:55:30.719373 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_dadb86ed-dc28-4a31-b5f7-ca4333b09f52/nova-metadata-log/0.log"
Mar 12 18:55:30.818170 master-0 kubenswrapper[29097]: I0312 18:55:30.818120 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_dadb86ed-dc28-4a31-b5f7-ca4333b09f52/nova-metadata-metadata/0.log"
Mar 12 18:55:30.924571 master-0 kubenswrapper[29097]: I0312 18:55:30.924502 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_698cc24c-946c-44b7-8ca8-dad9377673d3/nova-scheduler-scheduler/0.log"
Mar 12 18:55:30.950808 master-0 kubenswrapper[29097]: I0312 18:55:30.950754 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7ec3f557-015f-4fc3-b6cd-9d7f0f976e32/galera/0.log"
Mar 12 18:55:30.960427 master-0 kubenswrapper[29097]: I0312 18:55:30.960376 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7ec3f557-015f-4fc3-b6cd-9d7f0f976e32/mysql-bootstrap/0.log"
Mar 12 18:55:30.985415 master-0 kubenswrapper[29097]: I0312 18:55:30.981016 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_89da455f-45f0-4844-9a54-1ad46fe41d43/galera/0.log"
Mar 12 18:55:30.990599 master-0 kubenswrapper[29097]: I0312 18:55:30.990560 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_89da455f-45f0-4844-9a54-1ad46fe41d43/mysql-bootstrap/0.log"
Mar 12 18:55:30.999472 master-0 kubenswrapper[29097]: I0312 18:55:30.999435 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c9a54b83-be30-4fdb-a23c-1ad2ce020453/openstackclient/0.log"
Mar 12 18:55:31.011985 master-0 kubenswrapper[29097]: I0312 18:55:31.011754 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jjwrg_a28d1a0b-dbf6-4b4f-b2d2-c774917032e4/openstack-network-exporter/0.log"
Mar 12 18:55:31.025890 master-0 kubenswrapper[29097]: I0312 18:55:31.025848 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-8vq5w_4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2/ovsdb-server/0.log"
Mar 12 18:55:31.034311 master-0 kubenswrapper[29097]: I0312 18:55:31.034268 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-8vq5w_4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2/ovs-vswitchd/0.log"
Mar 12 18:55:31.043458 master-0 kubenswrapper[29097]: I0312 18:55:31.043425 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-8vq5w_4e1b56f5-1a0f-4a6c-b7bb-b0b3fc30e1f2/ovsdb-server-init/0.log"
Mar 12 18:55:31.063052 master-0 kubenswrapper[29097]: I0312 18:55:31.062992 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-zrpnx_a5e5d447-ad0f-45a1-9613-8be6ff16ce62/ovn-controller/0.log"
Mar 12 18:55:31.076055 master-0 kubenswrapper[29097]: I0312 18:55:31.075954 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_65dfbca3-130d-4d5a-bc07-4262fc4b4e50/ovn-northd/0.log"
Mar 12 18:55:31.084678 master-0 kubenswrapper[29097]: I0312 18:55:31.084646 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_65dfbca3-130d-4d5a-bc07-4262fc4b4e50/openstack-network-exporter/0.log"
Mar 12 18:55:31.104114 master-0 kubenswrapper[29097]: I0312 18:55:31.104052 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_636bd264-9a47-4480-8beb-f45a4b8c45fe/ovsdbserver-nb/0.log"
Mar 12 18:55:31.114051 master-0 kubenswrapper[29097]: I0312 18:55:31.113779 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_636bd264-9a47-4480-8beb-f45a4b8c45fe/openstack-network-exporter/0.log"
Mar 12 18:55:31.126454 master-0 kubenswrapper[29097]: I0312 18:55:31.126419 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6e9bd650-99fd-4a45-9742-0b23b242d8b6/ovsdbserver-sb/0.log"
Mar 12 18:55:31.138450 master-0 kubenswrapper[29097]: I0312 18:55:31.137785 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6e9bd650-99fd-4a45-9742-0b23b242d8b6/openstack-network-exporter/0.log"
Mar 12 18:55:31.145193 master-0 kubenswrapper[29097]: I0312 18:55:31.144629 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-1444-account-create-update-pd8f4_ed79e5de-c177-42ad-acf1-8b548f050262/mariadb-account-create-update/0.log"
Mar 12 18:55:31.167738 master-0 kubenswrapper[29097]: I0312 18:55:31.167691 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5664b69d46-kq48m_8cd110a9-8d4a-4c00-9b94-1a2cb117d463/placement-log/0.log"
Mar 12 18:55:31.179578 master-0 kubenswrapper[29097]: I0312 18:55:31.177816 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5664b69d46-kq48m_8cd110a9-8d4a-4c00-9b94-1a2cb117d463/placement-api/0.log"
Mar 12 18:55:31.187192 master-0 kubenswrapper[29097]: I0312 18:55:31.187151 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db-create-m74jd_5c788cc9-0232-4e2a-ac56-b52212c2d589/mariadb-database-create/0.log"
Mar 12 18:55:31.195355 master-0 kubenswrapper[29097]: I0312 18:55:31.194102 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db-sync-s5w8c_ae6f8a81-a597-4c6d-ae77-b60b36190af6/placement-db-sync/0.log"
Mar 12 18:55:31.253927 master-0 kubenswrapper[29097]: I0312 18:55:31.253827 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b/rabbitmq/0.log"
Mar 12 18:55:31.300706 master-0 kubenswrapper[29097]: I0312 18:55:31.300475 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cb1d09b6-a1d7-4ce5-acdb-a80c8765d70b/setup-container/0.log"
Mar 12 18:55:31.352600 master-0 kubenswrapper[29097]: I0312 18:55:31.349627 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_49290c2f-177f-4a5e-8e1e-cf105e962c5b/rabbitmq/0.log"
Mar 12 18:55:31.356925 master-0 kubenswrapper[29097]: I0312 18:55:31.356454 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_49290c2f-177f-4a5e-8e1e-cf105e962c5b/setup-container/0.log"
Mar 12 18:55:31.365821 master-0 kubenswrapper[29097]: I0312 18:55:31.365781 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_root-account-create-update-8clsj_f9be4e8e-96b5-424a-9137-b468887ed037/mariadb-account-create-update/0.log"
Mar 12 18:55:31.420226 master-0 kubenswrapper[29097]: I0312 18:55:31.417716 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7b649cbbbb-tkhcf_c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da/proxy-httpd/0.log"
Mar 12 18:55:31.430627 master-0 kubenswrapper[29097]: I0312 18:55:31.430581 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7b649cbbbb-tkhcf_c7ccfb7f-ccfa-49fc-9f3a-a9eaf424f6da/proxy-server/0.log"
Mar 12 18:55:31.438589 master-0 kubenswrapper[29097]: I0312 18:55:31.438538 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-wk79g_0bc3215a-a09f-49fe-a3f6-050665225137/swift-ring-rebalance/0.log"
Mar 12 18:55:31.467540 master-0 kubenswrapper[29097]: I0312 18:55:31.465736 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_066d07a5-82a7-49a5-b345-203a1ee212f0/account-server/0.log"
Mar 12 18:55:31.472750 master-0 kubenswrapper[29097]: I0312 18:55:31.472702 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-589895fbb7-jqj5k_8ad05507-e242-4ff8-ae80-c16ff9ee68e2/dns-operator/0.log"
Mar 12 18:55:31.480766 master-0 kubenswrapper[29097]: I0312 18:55:31.479259 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_066d07a5-82a7-49a5-b345-203a1ee212f0/account-replicator/0.log"
Mar 12 18:55:31.487572 master-0 kubenswrapper[29097]: I0312 18:55:31.487539 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-589895fbb7-jqj5k_8ad05507-e242-4ff8-ae80-c16ff9ee68e2/kube-rbac-proxy/0.log"
Mar 12 18:55:31.491599 master-0 kubenswrapper[29097]: I0312 18:55:31.491390 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_066d07a5-82a7-49a5-b345-203a1ee212f0/account-auditor/0.log"
Mar 12 18:55:31.498389 master-0 kubenswrapper[29097]: I0312 18:55:31.497327 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_066d07a5-82a7-49a5-b345-203a1ee212f0/account-reaper/0.log"
Mar 12 18:55:31.509769 master-0 kubenswrapper[29097]: I0312 18:55:31.509713 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_066d07a5-82a7-49a5-b345-203a1ee212f0/container-server/0.log"
Mar 12 18:55:31.524583 master-0 kubenswrapper[29097]: I0312 18:55:31.524194 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_066d07a5-82a7-49a5-b345-203a1ee212f0/container-replicator/0.log"
Mar 12 18:55:31.529558 master-0 kubenswrapper[29097]: I0312 18:55:31.529288 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_066d07a5-82a7-49a5-b345-203a1ee212f0/container-auditor/0.log"
Mar 12 18:55:31.539807 master-0 kubenswrapper[29097]: I0312 18:55:31.539731 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_066d07a5-82a7-49a5-b345-203a1ee212f0/container-updater/0.log"
Mar 12 18:55:31.549139 master-0 kubenswrapper[29097]: I0312 18:55:31.549107 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_066d07a5-82a7-49a5-b345-203a1ee212f0/object-server/0.log"
Mar 12 18:55:31.558256 master-0 kubenswrapper[29097]: I0312 18:55:31.558203 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_066d07a5-82a7-49a5-b345-203a1ee212f0/object-replicator/0.log"
Mar 12 18:55:31.568607 master-0 kubenswrapper[29097]: I0312 18:55:31.568305 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_066d07a5-82a7-49a5-b345-203a1ee212f0/object-auditor/0.log"
Mar 12 18:55:31.574898 master-0 kubenswrapper[29097]: I0312 18:55:31.574871 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_066d07a5-82a7-49a5-b345-203a1ee212f0/object-updater/0.log"
Mar 12 18:55:31.586543 master-0 kubenswrapper[29097]: I0312 18:55:31.586469 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_066d07a5-82a7-49a5-b345-203a1ee212f0/object-expirer/0.log"
Mar 12 18:55:31.592026 master-0 kubenswrapper[29097]: I0312 18:55:31.591988 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_066d07a5-82a7-49a5-b345-203a1ee212f0/rsync/0.log"
Mar 12 18:55:31.603269 master-0 kubenswrapper[29097]: I0312 18:55:31.603181 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_066d07a5-82a7-49a5-b345-203a1ee212f0/swift-recon-cron/0.log"
Mar 12 18:55:32.965548 master-0 kubenswrapper[29097]: I0312 18:55:32.961197 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-6h5tt_266b9f4f-3fb4-474d-84df-0a6c687c7e9a/dns/0.log"
Mar 12 18:55:32.978687 master-0 kubenswrapper[29097]: I0312 18:55:32.978639 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-6h5tt_266b9f4f-3fb4-474d-84df-0a6c687c7e9a/kube-rbac-proxy/0.log"
Mar 12 18:55:32.995496 master-0 kubenswrapper[29097]: I0312 18:55:32.995453 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-7lzgx_9717d467-af1a-4de0-88e0-c47ec4d12d6e/dns-node-resolver/0.log"
Mar 12 18:55:33.905993 master-0 kubenswrapper[29097]: I0312 18:55:33.905941 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-bfq7b_e697746f-fb9e-4d10-ab61-33c68e62cc0d/etcd-operator/1.log"
Mar 12 18:55:33.907956 master-0 kubenswrapper[29097]: I0312 18:55:33.907929 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-bfq7b_e697746f-fb9e-4d10-ab61-33c68e62cc0d/etcd-operator/2.log"
Mar 12 18:55:34.902979 master-0 kubenswrapper[29097]: I0312 18:55:34.902932 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcdctl/0.log"
Mar 12 18:55:35.235068 master-0 kubenswrapper[29097]: I0312 18:55:35.234943 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd/0.log"
Mar 12 18:55:35.248359 master-0 kubenswrapper[29097]: I0312 18:55:35.248301 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-metrics/0.log"
Mar 12 18:55:35.263952 master-0 kubenswrapper[29097]: I0312 18:55:35.263899 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-readyz/0.log"
Mar 12 18:55:35.277668 master-0 kubenswrapper[29097]: I0312 18:55:35.277407 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-rev/0.log"
Mar 12 18:55:35.300090 master-0 kubenswrapper[29097]: I0312 18:55:35.298913 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/setup/0.log"
Mar 12 18:55:35.320225 master-0 kubenswrapper[29097]: I0312 18:55:35.317985 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-ensure-env-vars/0.log"
Mar 12 18:55:35.331495 master-0 kubenswrapper[29097]: I0312 18:55:35.331454 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-resources-copy/0.log"
Mar 12 18:55:35.386400 master-0 kubenswrapper[29097]: I0312 18:55:35.386355 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_e418d797-2c31-404b-9dc3-251399e42542/installer/0.log"
Mar 12 18:55:35.432922 master-0 kubenswrapper[29097]: I0312 18:55:35.432872 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_30102cc9-45f8-46f8-bb34-eec48fdb297d/installer/0.log"
Mar 12 18:55:36.465087 master-0 kubenswrapper[29097]: I0312 18:55:36.465017 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-86d6d77c7c-l4krq_e22c7035-4b7a-48cb-9abb-db277b387842/cluster-image-registry-operator/0.log"
Mar 12 18:55:36.469810 master-0 kubenswrapper[29097]: I0312 18:55:36.469777 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-86d6d77c7c-l4krq_e22c7035-4b7a-48cb-9abb-db277b387842/cluster-image-registry-operator/1.log"
Mar 12 18:55:36.484161 master-0 kubenswrapper[29097]: I0312 18:55:36.484121 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-sdr9x_f5fdd831-61b8-4134-a15b-41a2794d7794/node-ca/0.log"
Mar 12 18:55:36.570074 master-0 kubenswrapper[29097]: I0312 18:55:36.570029 29097 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w4n8j/master-0-debug-4sjwq" event={"ID":"2bf92d4a-4c47-4229-a53b-eec0cb0e0b04","Type":"ContainerStarted","Data":"dba8062991dbb1c6b5a7ae7ce7ffc0bbf502ac03212f1d6e0b32afdfbe9e3b87"}
Mar 12 18:55:36.585429 master-0 kubenswrapper[29097]: I0312 18:55:36.585369 29097 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-w4n8j/master-0-debug-4sjwq" podStartSLOduration=1.8823525700000001 podStartE2EDuration="19.58535428s" podCreationTimestamp="2026-03-12 18:55:17 +0000 UTC" firstStartedPulling="2026-03-12 18:55:17.953452466 +0000 UTC m=+1557.507432563" lastFinishedPulling="2026-03-12 18:55:35.656454176 +0000 UTC m=+1575.210434273" observedRunningTime="2026-03-12 18:55:36.583034632 +0000 UTC m=+1576.137014729" watchObservedRunningTime="2026-03-12 18:55:36.58535428 +0000 UTC m=+1576.139334377"
Mar 12 18:55:37.230056 master-0 kubenswrapper[29097]: I0312 18:55:37.229978 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4527l_d94dc349-c5cb-4f12-8e48-867030af4981/ingress-operator/4.log"
Mar 12 18:55:37.240019 master-0 kubenswrapper[29097]: I0312 18:55:37.239979 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4527l_d94dc349-c5cb-4f12-8e48-867030af4981/ingress-operator/5.log"
Mar 12 18:55:37.249616 master-0 kubenswrapper[29097]: I0312 18:55:37.249380 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4527l_d94dc349-c5cb-4f12-8e48-867030af4981/kube-rbac-proxy/0.log"
Mar 12 18:55:38.015051 master-0 kubenswrapper[29097]: I0312 18:55:38.015006 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-n2tfc_a0b73b25-16e0-4a96-99fa-c50a127bed68/serve-healthcheck-canary/0.log"
Mar 12 18:55:38.628445 master-0 kubenswrapper[29097]: I0312 18:55:38.628402 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-8f89dfddd-m6z6d_f5e09875-4445-4584-94f0-243148307bb0/insights-operator/0.log"
Mar 12 18:55:40.577256 master-0 kubenswrapper[29097]: I0312 18:55:40.577190 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_221d9bf3-99e3-4397-994b-0bef619f6177/alertmanager/0.log"
Mar 12 18:55:40.607911 master-0 kubenswrapper[29097]: I0312 18:55:40.607845 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_221d9bf3-99e3-4397-994b-0bef619f6177/config-reloader/0.log"
Mar 12 18:55:40.626156 master-0 kubenswrapper[29097]: I0312 18:55:40.626110 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_221d9bf3-99e3-4397-994b-0bef619f6177/kube-rbac-proxy-web/0.log"
Mar 12 18:55:40.641894 master-0 kubenswrapper[29097]: I0312 18:55:40.641778 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_221d9bf3-99e3-4397-994b-0bef619f6177/kube-rbac-proxy/0.log"
Mar 12 18:55:40.656741 master-0 kubenswrapper[29097]: I0312 18:55:40.656697 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_221d9bf3-99e3-4397-994b-0bef619f6177/kube-rbac-proxy-metric/0.log"
Mar 12 18:55:40.678129 master-0 kubenswrapper[29097]: I0312 18:55:40.677901 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_221d9bf3-99e3-4397-994b-0bef619f6177/prom-label-proxy/0.log"
Mar 12 18:55:40.694366 master-0 kubenswrapper[29097]: I0312 18:55:40.694305 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_221d9bf3-99e3-4397-994b-0bef619f6177/init-config-reloader/0.log"
Mar 12 18:55:40.782613 master-0 kubenswrapper[29097]: I0312 18:55:40.780894 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-674cbfbd9d-fz79c_e94d098b-fbcc-4e85-b8ad-42f3a21c822c/cluster-monitoring-operator/0.log"
Mar 12 18:55:40.805493 master-0 kubenswrapper[29097]: I0312 18:55:40.805425 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-68b88f8cb5-sqdhq_3d77a98a-0176-4924-81d3-8e9890852b38/kube-state-metrics/0.log"
Mar 12 18:55:40.818813 master-0 kubenswrapper[29097]: I0312 18:55:40.818765 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-68b88f8cb5-sqdhq_3d77a98a-0176-4924-81d3-8e9890852b38/kube-rbac-proxy-main/0.log"
Mar 12 18:55:40.835408 master-0 kubenswrapper[29097]: I0312 18:55:40.835176 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-68b88f8cb5-sqdhq_3d77a98a-0176-4924-81d3-8e9890852b38/kube-rbac-proxy-self/0.log"
Mar 12 18:55:40.851130 master-0 kubenswrapper[29097]: I0312 18:55:40.851091 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-799595bb6c-b9xsw_c7e1bc75-30e4-418d-a685-70a2a5d80472/metrics-server/0.log"
Mar 12 18:55:40.870134 master-0 kubenswrapper[29097]: I0312 18:55:40.869847 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-59b47fdff8-c7z2l_033d99f9-d059-4be7-b091-e8696d6a735b/monitoring-plugin/0.log"
Mar 12 18:55:40.889673 master-0 kubenswrapper[29097]: I0312 18:55:40.887194 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-6v462_adb0dbbf-458d-46f5-b236-d4904e125418/node-exporter/0.log"
Mar 12 18:55:40.903463 master-0 kubenswrapper[29097]: I0312 18:55:40.903380 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-6v462_adb0dbbf-458d-46f5-b236-d4904e125418/kube-rbac-proxy/0.log"
Mar 12 18:55:40.921908 master-0 kubenswrapper[29097]: I0312 18:55:40.921877 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-6v462_adb0dbbf-458d-46f5-b236-d4904e125418/init-textfile/0.log"
Mar 12 18:55:40.947296 master-0 kubenswrapper[29097]: I0312 18:55:40.947248 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-74cc79fd76-f59x9_78c13011-7a79-445f-807c-4f5e75643549/kube-rbac-proxy-main/0.log"
Mar 12 18:55:40.961029 master-0 kubenswrapper[29097]: I0312 18:55:40.960998 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-74cc79fd76-f59x9_78c13011-7a79-445f-807c-4f5e75643549/kube-rbac-proxy-self/0.log"
Mar 12 18:55:40.984660 master-0 kubenswrapper[29097]: I0312 18:55:40.984613 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-74cc79fd76-f59x9_78c13011-7a79-445f-807c-4f5e75643549/openshift-state-metrics/0.log"
Mar 12 18:55:41.026683 master-0 kubenswrapper[29097]: I0312 18:55:41.026634 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_4b7a4ad7-3732-4c75-a6f6-57e83d6db837/prometheus/0.log"
Mar 12 18:55:41.041057 master-0 kubenswrapper[29097]: I0312 18:55:41.040998 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_4b7a4ad7-3732-4c75-a6f6-57e83d6db837/config-reloader/0.log"
Mar 12 18:55:41.051570 master-0 kubenswrapper[29097]: I0312 18:55:41.051510 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_4b7a4ad7-3732-4c75-a6f6-57e83d6db837/thanos-sidecar/0.log"
Mar 12 18:55:41.066814 master-0 kubenswrapper[29097]: I0312 18:55:41.066769 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_4b7a4ad7-3732-4c75-a6f6-57e83d6db837/kube-rbac-proxy-web/0.log"
Mar 12 18:55:41.080456 master-0 kubenswrapper[29097]: I0312 18:55:41.078913 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_4b7a4ad7-3732-4c75-a6f6-57e83d6db837/kube-rbac-proxy/0.log"
Mar 12 18:55:41.106608 master-0 kubenswrapper[29097]: I0312 18:55:41.106437 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_4b7a4ad7-3732-4c75-a6f6-57e83d6db837/kube-rbac-proxy-thanos/0.log"
Mar 12 18:55:41.136419 master-0 kubenswrapper[29097]: I0312 18:55:41.136361 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_4b7a4ad7-3732-4c75-a6f6-57e83d6db837/init-config-reloader/0.log"
Mar 12 18:55:41.167847 master-0 kubenswrapper[29097]: I0312 18:55:41.167784 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-5ff8674d55-qs7tx_41c1bd85-369e-4341-9e80-8b4b248b5572/prometheus-operator/0.log"
Mar 12 18:55:41.188581 master-0 kubenswrapper[29097]: I0312 18:55:41.188470 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-5ff8674d55-qs7tx_41c1bd85-369e-4341-9e80-8b4b248b5572/kube-rbac-proxy/0.log"
Mar 12 18:55:41.205913 master-0 kubenswrapper[29097]: I0312 18:55:41.205869 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-8464df8497-tzgs9_52d3dc87-0bf8-4a62-a6fc-0ffa6060c6a8/prometheus-operator-admission-webhook/0.log"
Mar 12 18:55:41.228201 master-0 kubenswrapper[29097]: I0312 18:55:41.228124 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-d597fb65b-4ls97_402126ab-fc17-48bc-ab20-7f4d1f6868ee/telemeter-client/0.log"
Mar 12 18:55:41.270539 master-0 kubenswrapper[29097]: I0312 18:55:41.270484 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-d597fb65b-4ls97_402126ab-fc17-48bc-ab20-7f4d1f6868ee/reload/0.log"
Mar 12 18:55:41.439990 master-0 kubenswrapper[29097]: I0312 18:55:41.439854 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-d597fb65b-4ls97_402126ab-fc17-48bc-ab20-7f4d1f6868ee/kube-rbac-proxy/0.log"
Mar 12 18:55:41.604888 master-0 kubenswrapper[29097]: I0312 18:55:41.604834 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-99547cb8-z9gq2_988abf9f-43bc-4440-865e-b60d248eeaaa/thanos-query/0.log"
Mar 12 18:55:41.638565 master-0 kubenswrapper[29097]: I0312 18:55:41.638501 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-99547cb8-z9gq2_988abf9f-43bc-4440-865e-b60d248eeaaa/kube-rbac-proxy-web/0.log"
Mar 12 18:55:41.670882 master-0 kubenswrapper[29097]: I0312 18:55:41.670835 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-99547cb8-z9gq2_988abf9f-43bc-4440-865e-b60d248eeaaa/kube-rbac-proxy/0.log"
Mar 12 18:55:41.691433 master-0 kubenswrapper[29097]: I0312 18:55:41.691137 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-99547cb8-z9gq2_988abf9f-43bc-4440-865e-b60d248eeaaa/prom-label-proxy/0.log"
Mar 12 18:55:41.711968 master-0 kubenswrapper[29097]: I0312 18:55:41.711916 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-99547cb8-z9gq2_988abf9f-43bc-4440-865e-b60d248eeaaa/kube-rbac-proxy-rules/0.log"
Mar 12 18:55:41.732132 master-0 kubenswrapper[29097]: I0312 18:55:41.732067 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-99547cb8-z9gq2_988abf9f-43bc-4440-865e-b60d248eeaaa/kube-rbac-proxy-metrics/0.log"
Mar 12 18:55:43.615143 master-0 kubenswrapper[29097]: I0312 18:55:43.615073 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-dh446_ffedbfc7-8b89-4bd0-8306-b5aa75878d42/controller/0.log"
Mar 12 18:55:43.634770 master-0 kubenswrapper[29097]: I0312 18:55:43.634725 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-dh446_ffedbfc7-8b89-4bd0-8306-b5aa75878d42/kube-rbac-proxy/0.log"
Mar 12 18:55:43.653128 master-0 kubenswrapper[29097]: I0312 18:55:43.652936 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/controller/0.log"
Mar 12 18:55:44.658348 master-0 kubenswrapper[29097]: I0312 18:55:44.658303 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/frr/0.log"
Mar 12 18:55:44.846103 master-0 kubenswrapper[29097]: I0312 18:55:44.846023 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/reloader/0.log"
Mar 12 18:55:44.864527 master-0 kubenswrapper[29097]: I0312 18:55:44.864235 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/frr-metrics/0.log"
Mar 12 18:55:44.883803 master-0 kubenswrapper[29097]: I0312 18:55:44.883742 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/kube-rbac-proxy/0.log"
Mar 12 18:55:44.931496 master-0 kubenswrapper[29097]: I0312 18:55:44.928061 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/kube-rbac-proxy-frr/0.log"
Mar 12 18:55:45.026398 master-0 kubenswrapper[29097]: I0312 18:55:45.026339 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/cp-frr-files/0.log"
Mar 12 18:55:45.066610 master-0 kubenswrapper[29097]: I0312 18:55:45.066564 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/cp-reloader/0.log"
Mar 12 18:55:45.115536 master-0 kubenswrapper[29097]: I0312 18:55:45.112221 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/cp-metrics/0.log"
Mar 12 18:55:45.260616 master-0 kubenswrapper[29097]: I0312 18:55:45.260046 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-hz8wl_bed3c813-ea3d-45fb-a830-59ad0830040a/frr-k8s-webhook-server/0.log"
Mar 12 18:55:45.322294 master-0 kubenswrapper[29097]: I0312 18:55:45.321613 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6759bbdbf5-h458c_868ee97c-e6d1-48d8-9fd0-cf9b246480cb/manager/0.log"
Mar 12 18:55:45.341528 master-0 kubenswrapper[29097]: I0312 18:55:45.341253 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-cff58f8c6-zmgcc_8f450cb2-6f8f-455f-9dce-db01d41482ad/webhook-server/0.log"
Mar 12 18:55:45.359077 master-0 kubenswrapper[29097]: I0312 18:55:45.358467 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-dh446_ffedbfc7-8b89-4bd0-8306-b5aa75878d42/controller/0.log"
Mar 12 18:55:45.366524 master-0 kubenswrapper[29097]: I0312 18:55:45.365829 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-dh446_ffedbfc7-8b89-4bd0-8306-b5aa75878d42/kube-rbac-proxy/0.log"
Mar 12 18:55:45.398548 master-0 kubenswrapper[29097]: I0312 18:55:45.398010 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/controller/0.log"
Mar 12 18:55:46.121618 master-0 kubenswrapper[29097]: I0312 18:55:46.121062 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-49mjh_635ef9b0-f0bb-48be-a32c-99f2dc90e01f/speaker/0.log"
Mar 12 18:55:46.146327 master-0 kubenswrapper[29097]: I0312 18:55:46.146268 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-49mjh_635ef9b0-f0bb-48be-a32c-99f2dc90e01f/kube-rbac-proxy/0.log"
Mar 12 18:55:46.982788 master-0 kubenswrapper[29097]: I0312 18:55:46.982688 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/frr/0.log"
Mar 12 18:55:46.993136 master-0 kubenswrapper[29097]: I0312 18:55:46.992224 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/reloader/0.log"
Mar 12 18:55:46.999390 master-0 kubenswrapper[29097]: I0312 18:55:46.999359 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/frr-metrics/0.log"
Mar 12 18:55:47.012903 master-0 kubenswrapper[29097]: I0312 18:55:47.012841 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/kube-rbac-proxy/0.log"
Mar 12 18:55:47.023468 master-0 kubenswrapper[29097]: I0312 18:55:47.023422 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/kube-rbac-proxy-frr/0.log"
Mar 12 18:55:47.045715 master-0 kubenswrapper[29097]: I0312 18:55:47.043629 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/cp-frr-files/0.log"
Mar 12 18:55:47.051432 master-0 kubenswrapper[29097]: I0312 18:55:47.051390 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/cp-reloader/0.log"
Mar 12 18:55:47.066650 master-0 kubenswrapper[29097]: I0312 18:55:47.066586 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rc6s5_735c0a7b-f9ed-40b4-92a2-fd05a3991503/cp-metrics/0.log"
Mar 12 18:55:47.078136 master-0 kubenswrapper[29097]: I0312 18:55:47.078085 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-hz8wl_bed3c813-ea3d-45fb-a830-59ad0830040a/frr-k8s-webhook-server/0.log"
Mar 12 18:55:47.103028 master-0 kubenswrapper[29097]: I0312 18:55:47.102988 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6759bbdbf5-h458c_868ee97c-e6d1-48d8-9fd0-cf9b246480cb/manager/0.log"
Mar 12 18:55:47.113166 master-0 kubenswrapper[29097]: I0312 18:55:47.112595 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-cff58f8c6-zmgcc_8f450cb2-6f8f-455f-9dce-db01d41482ad/webhook-server/0.log"
Mar 12 18:55:47.681786 master-0 kubenswrapper[29097]: I0312 18:55:47.681233 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-49mjh_635ef9b0-f0bb-48be-a32c-99f2dc90e01f/speaker/0.log"
Mar 12 18:55:47.692765 master-0 kubenswrapper[29097]: I0312 18:55:47.692730 29097 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-49mjh_635ef9b0-f0bb-48be-a32c-99f2dc90e01f/kube-rbac-proxy/0.log"